“What if we can build a car that’s 10 times as safe, which means 3,500 people die on the roads each year. Would we accept that?” asks John Hanson, a spokesman for the Toyota Research Institute, which is developing the automaker’s self-driving technology.
“A lot of people say, ‘If I could save one life it would be worth it.’ But in a practical manner, we don’t think that would be acceptable,” Hanson added.
Members of Congress are beginning to consider legislation that would enable broader adoption of self-driving technology without compromising safety. At a House subcommittee hearing last week, for example, lawmakers and industry leaders alike grappled with the question of whether machines need only drive better than humans to win our trust.
More than 35,000 people were killed in car collisions in the United States in 2015, according to the National Highway Traffic Safety Administration. The agency estimates 94 percent of those wrecks were the result of human error and poor decision-making, including speeding and impaired driving.
Self-driving enthusiasts assert that the technology could make those deaths a misfortune of the past. But humans are not entirely rational when it comes to fear-based decision-making. It’s the reason people are afraid of shark attacks or plane crashes, when the odds of either event are exceptionally low.
Harvard University professor Calestous Juma draws a parallel between self-driving cars and home refrigerators, which gained popularity in U.S. households in the 1920s and 30s. Although scientists understood that cold storage could cut down on food-borne illnesses, reports of refrigeration equipment catching fire or leaking toxic gas made the public wary.
Americans eventually adopted the now-ubiquitous household appliance thanks in large part to the U.S. Department of Agriculture — which advocated for the health benefits of refrigeration and educated against unfounded concerns about the technology’s safety, Juma writes in his book, “Innovation and Its Enemies: Why People Resist New Technologies.”
People are also more inclined to forgive mistakes made by humans than machines, Gill Pratt, the chief executive of the Toyota Research Institute, told lawmakers on Capitol Hill last week.
“The artificial intelligence systems on which autonomous vehicle technology will depend are presently and unavoidably imperfect,” Pratt told lawmakers at a House subcommittee hearing. “So, the question is ‘how safe is safe enough’ for this technology to be deployed.”
As a society, we understand human limitations because we live with them daily, said Iyad Rahwan, an associate professor at the Massachusetts Institute of Technology Media Lab who has studied the social dilemmas presented by autonomous vehicles. While we may assign blame or seek retribution — by sending a drunk driver to prison, for example — the capacity for human failure is not hard to understand or empathize with. The same is not true for machines, he said.
“We penalize them and distrust them more when they make mistakes,” Rahwan said. “It comes down to us not having proper mental models of what machines can and cannot do.”
Researchers at the University of Pennsylvania have dubbed this “algorithm aversion.” In a 2014 study, participants were asked to observe a computer and a human make predictions about the future, such as how a student would perform based on past test scores. Researchers found that “people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake.”
The answer to questions about safety might come down to how much we trust self-driving cars, regardless of how many lives they can save, Rahwan said. For example, if autonomous vehicles saved the lives of thousands of motorists but caused deaths among cyclists and pedestrians to increase, the public’s trust in the technology would likely erode.
“If they’re not comfortable with the trade-offs that cars are making, then we risk people losing faith in the system and perhaps not adopting the technology,” Rahwan said.