AUSTIN — Would you support the introduction of a machine that would kill 3,300 Americans a year? The answer is almost certainly no. But what if that technology were a driverless car, and those 3,300 deaths replaced the roughly 33,000 people a year who die on U.S. roads as a result of human error? Is one death caused by a machine error better than 10 deaths caused by human error?
From a utilitarian perspective, it would seem that trading 33,000 deaths for 3,300 would make sense. (The 3,300 figure is an arbitrary estimate I’m including for discussion purposes. In theory, self-driving cars will save many lives — exactly how many we don’t know.)
In a keynote address at the SXSW Interactive Festival on Sunday, author Malcolm Gladwell pressed venture capitalist Bill Gurley on our “catastrophically imperfect” network of cars. Gurley homed in on one of the big drawbacks of self-driving cars.
“Humans will be much less tolerant of a machine error causing death than human error causing death,” said Gurley, an early investor in Uber and other disruptive technologies. He describes himself as much more skeptical of driverless cars than most people.
“I would argue that for a machine to be out there that weighs three tons that’s moving around at that speed, it would need to have at least four nines because the errors would be catastrophic,” Gurley said. (Four nines alludes to 99.99 percent, as in the near-perfect safety record self-driving cars may need to gain acceptance. For example, a Web site with “four nines” fails to load only one minute per week.)
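The “nines” arithmetic behind that parenthetical is easy to check. This short sketch (the function name and range of nines are my own choices for illustration) converts a reliability level into expected failure minutes per week:

```python
# Rough sketch: what "nines" of reliability mean in downtime terms.
# 4 nines = 99.99% reliable, i.e. a failure rate of 0.01%.

MINUTES_PER_WEEK = 7 * 24 * 60  # 10,080 minutes in a week

def downtime_minutes_per_week(nines: int) -> float:
    """Expected minutes of failure per week at the given number of nines."""
    failure_rate = 10 ** (-nines)  # e.g. 4 nines -> 0.0001
    return MINUTES_PER_WEEK * failure_rate

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes_per_week(n):.3f} min/week")
# At four nines, 10,080 * 0.0001 ≈ 1 minute of failure per week,
# which matches the "fails to load only one minute per week" figure.
```

Each added nine cuts the failure budget by a factor of 10, which is why Gurley’s bar is so demanding.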
Driverless cars may need to be near perfect, but they’ll face a long list of rare circumstances that could be difficult to handle. These unusual circumstances are sometimes called edge cases. For example, can a car be programmed to identify an ambulance siren and pull over? Can it respond to an officer directing traffic? What about inclement weather — heavy snow, flooded streets or roads covered with leaves? These things could all disrupt its sensors.
In a panel Saturday at SXSW, University of Michigan professor Ryan Eustice, who is developing algorithms for the maps driverless cars will rely on, acknowledged the challenge.
“To really field this technology in all weather, all kinds of scenarios, I think the public’s been a little oversold to this point,” Eustice said. “There’s still a lot of really hard problems to work on.”
He cited the problem of a driverless car’s sensors being confused by snowflakes during a snowstorm. There’s also the question of whether a driverless car in a snowstorm should drive in its original lane or follow the tracks of the car in front of it.
You might think we can just rely on humans to take over whenever a situation gets dicey. But Eustice and others aren’t fond of that idea.
“This notion, fall back to a human, in part it’s kind of a fallacy,” Eustice said. “To fall back on a human the car has to be able to have enough predictive capability to know that 30 seconds from now, or whatever, it’s in a situation it can’t handle. The human, they’re not going to pay attention in the car. You’re going to be on your cell phone, you’re going to totally tune out, whatever. To take on that cognitive load, you can’t just kick out and say oh ‘take over.’ ”
He noted how Google had taken the steering wheel and pedals out of its driverless car prototype to avoid the human-machine interface issue, which he considers a huge problem for the field.
In his talk with Gurley, Gladwell noted the surprising disparity between Americans killed in wars and on U.S. roads. It would seem we could do a lot better. But a lot of tough challenges must be solved before U.S. roads can ever become some sort of self-driving utopia.