The Self-Driving Uber Crash—What Does It Mean?
The tech community is at a critical juncture. A self-driving Uber struck a pedestrian in Tempe, Arizona, on Sunday, and the victim later died from her injuries. It is believed to be the first time an autonomous vehicle has struck and killed a pedestrian.
Uber is cooperating with police and federal investigators and has stopped testing its self-driving vehicles in Phoenix, Pittsburgh, San Francisco, and Toronto. There was a human safety driver behind the wheel, though his or her role in the crash is unclear. Meanwhile, lawyers will have to wade into entirely new legal territory.
Most experts knew this day was coming. But now that it’s happened, the revolutionary technology could be at a crossroads.
“We just crossed an important moment in technological and, arguably, legal history,” said Peter Singer, a strategist and senior fellow at New America, a non-partisan think tank. While it’s not the first time that a robot has killed someone (that occurred in 1979 at a Ford Motor factory in Flat Rock, Michigan), Singer says the Uber incident is likely to spark a moral and philosophical debate. That is, do the advantages of self-driving cars outweigh their occasional failings?
The answer to this conundrum isn’t straightforward. “We don’t have a really good way of adjudicating what to do about it,” Singer said.
The circumstances of Sunday’s accident are still unclear, though the San Francisco Chronicle is reporting that the victim—a woman named Elaine Herzberg—abruptly stepped in front of the vehicle. There may have been little to no time for a human driver, or even a self-driving car, to react. But that might not matter from the public’s perspective.
“Machine mistakes are different from our mistakes,” said Edmond Awad, a postdoctoral associate studying machine ethics at the MIT Media Lab. “A car would not be tired or anxious or distressed—but a car could have a problem sensing something. This alone could make people afraid.”
On the flip side, Awad and several of his colleagues recently published a paper suggesting that when a car piloted by an autonomous system but monitored by a human is involved in a tragedy like the one in Tempe, and both the software and the driver make errors in their shared control of the vehicle, people tend to blame the human rather than the machine, which could mean less public outcry.
Missy Cummings, a professor at Duke University, has staunchly advocated for exercising caution when it comes to the deployment of self-driving cars. In 2016, she warned Congress about the risks of putting autonomous vehicles into practice before they’re fully vetted. She calls it the “space race” of the 21st century, a race that could affect more human lives, and with greater immediacy, than the last century’s actual space race.
“There are still huge gaps in their ability to ‘see’ the world in the way it needs to be seen,” she said. “Until we get them to be consistently reliable, we’re going to have problems.” Tesla, for example, still has issues with its vehicles running into fire hydrants, she noted.
Those who push for rapid adoption of self-driving cars contend that slowing innovation down has a chilling effect, Singer said.
“You [have to] be realistic and understand that bad things can and will happen, and you have to have accountability for when it does happen,” Singer said.
Cummings recommends static and dynamic vision tests for driverless cars, in addition to limits on the types of roads they can navigate. “These cars should not be operating on public roads greater than 25 miles per hour,” she said. “These cars should be marked in particular ways, and communities need to be notified as broadly as possible, mostly because the technology is so experimental.”
Minimum safety standards aside, Singer said the bigger question will be how we respond to flaws in an evolving, complicated system.
“It’s like out of the book Frankenstein,” he said. “Do you blame the creator, the monster, or the villagers who shouldn’t have chased after the monster with torches?”