After Tempe fatality, self-driving car developers must engage with public now or risk rejection

An autonomous vehicle struck and killed a pedestrian on March 18. ABC-15.com via AP

On Sunday evening, March 18, an Uber SUV hit and killed a pedestrian in the Arizona city of Tempe. In a place where vehicle-related pedestrian fatalities are unfortunately a regular occurrence, this shouldn’t have stood out as particularly unusual. But what made the death of 49-year-old Elaine Herzberg different was that the vehicle that killed her was driving itself.

As faculty on Arizona State University’s Tempe campus who also study technology innovation, we’ve become used to seeing self-driving cars operated by Uber, Waymo and others on our daily commutes. We know that our neighbors and students are largely excited that our streets are being used to test self-driving technologies. And we love pointing out the cars and SUVs topped with spinning sensors to colleagues from out of town. But we also know that the people who live and work here have local knowledge and values that are being ignored by those who are designing and testing these new technologies.

In 2015, Arizona famously opened its doors to such vehicles by encouraging companies to test out their self-driving cars on public roads. And apart from a relatively minor crash in 2017, there have been few serious incidents. Yet despite this record, the lack of coordination, collaboration and transparency between industry, city government and the public – even on issues as basic as road safety – has created an environment where the future success and safe use of self-driving cars is far from certain.

A 2017 autonomous vehicle crash in Tempe didn’t kill anyone. Tempe Police Department via AP

Bearing the risks, without a voice

No matter what, 4,000 pounds of steel and plastic hurtling along at speed is dangerous, whether a computer or a person is in control. Sunday’s incident forces society to think more critically about the risks we’re willing to take as self-driving cars are tested on our roads, and who gets to make these decisions.

Sadly, Elaine Herzberg’s death occurred just weeks after Arizona Governor Doug Ducey signed an executive order requiring self-driving cars in the state to meet specific safety standards. The provisions in this executive order have yet to go into effect. But even if they had been fully implemented, it’s not clear whether they would have prevented this collision. It’s even less clear that they would help local communities trust, and see the benefits of, the self-driving cars on their streets.

For this, we desperately need a broader conversation about self-driving vehicles and how to develop and use them responsibly. And this means product developers and policymakers must actually talk with people who are potentially affected by the technology.

Residents don’t have the technical expertise of the autonomous vehicle developers. But they likely do have insights that would substantially enhance the safety and trustworthiness of the vehicles being tested. For instance, local communities might have pointed out that testing an unproven technology in a school zone while children are present is not the best idea. Or that self-driving cars shouldn’t be experimented with late at night in poorly lit places, where it’s hard to see and predict how pedestrians might behave. If companies like Uber had met with Elaine and others like her, maybe things would have turned out differently.

Innovating experts tend to forget other views

Unfortunately, this level of public engagement is often overlooked in technology innovation. It doesn’t help that the American public is frequently seen by innovators and developers as so uninformed or biased as to be of little use in conversations about emerging technologies. But in our experience, listening to the values, concerns and aspirations of the general public can inject new and valuable ideas into the ways in which research programs are run.

From our own experiences with other emerging technologies – ranging from nanotechnology and synthetic biology to artificial intelligence, smart cities and autonomous vehicles – we know there are some basic guidelines that can help support successful, safe and beneficial innovation. These include partnering with experts who know something about socially responsible innovation, engaging with and listening to communities who are potentially affected by the technology and paying attention to what people actually want – as well as what they do not.

Ask Dr. Frankenstein about the hazards of innovating in isolation. Paolo, CC BY-NC

Of course, implementing these rules of thumb is far from simple. But hard experience shows that not engaging with stakeholders can have disastrous consequences. Nuclear power, genetically modified foods and many other innovations all struggled to reach their potential in part because leaders in the field didn’t think to consult with outsiders as part of the innovation process. Many “experts” don’t automatically think about what they can gain from partnering up with “regular” people.

These “nonexperts” may not be able to contribute directly to the technology. (Although even here, citizen science suggests that regular people’s capabilities are often underestimated.) But they can offer unique insights that help ensure technologies align with what they value. And ultimately, these are the people who will decide whether a technology succeeds or not.

These are challenges we face on a daily basis in our own work, whether it’s in developing smart cities, engaging with various stakeholders around emerging issues or exploring how to ensure new technologies are safe, beneficial and responsible. Through experience, as well as specific initiatives like Expert and Citizen Engagement of Science and Technology (ECAST) – a network of academics, informal science educators and nonpartisan think tanks we collaborate with – we’ve learned that if innovators, scientists and policymakers engage early and often with people who understand responsible innovation, they can avoid unpleasant surprises down the pike.

Engaging constructively with members of the public early on can help develop technologies and regulations that are better aligned with what people want and are willing to support. For instance, in the Nanotechnology Enabled Water Treatment program – a collaboration between ASU, Rice University, the University of Texas El Paso and Yale University – we are working with both manufacturers and consumers to ensure that domestic nanotechnology-enabled water filters are effective, safe and accepted by users. Our commercial partners know that if there is no public engagement, they risk losing public trust – however good the technology.

Unfortunately, this type of collaboration isn’t currently happening as much as is needed in Arizona with self-driving cars. Despite working in Tempe and being intimately involved in responsible innovation and public engagement, we’ve seen precious few attempts by companies like Uber and Waymo to talk with and listen to local communities. When everything’s going smoothly, not engaging doesn’t seem like a big deal. But when problems do arise, this lack of engagement could severely weaken the foundations on which self-driving technologies will be built.

Whether the fatal collision in Tempe is enough to shake these foundations remains to be seen. But it is a wake-up call to developers and policymakers that the honeymoon period with self-driving cars may be coming to an end. And as it does, there’ll be a greater need than ever to engage and partner with people who can help ensure the self-driving future the public wants.
