The Washington Post | Democracy Dies in Darkness

What if your self-driving car decides one death is better than two — and that one is you?

A member of the media test drives a Tesla Motors Inc. Model S car equipped with Autopilot in Palo Alto, California, U.S., on Wednesday, Oct. 14, 2015. (David Paul Morris/Bloomberg)

The year is 2035. The world’s population is 9 billion. The polar ice caps have totally melted and Saudi Arabia has run out of oil. Will Smith is battling murderous robots. Matt Damon is stranded on Mars. Dippin’ Dots is finally the ice cream of the present.

You’re humming along in your self-driving car, chatting on your iPhone 37 while the machine navigates on its own. Then a swarm of people appears in the street, right in the path of the oncoming vehicle.

There’s a calculation to be made — avoid the crowd and crash the owner, or stay on track and take many lives? — and no one is at the wheel to make it. Except, of course, the car itself.

Now that this hypothetical future looks less and less like a “Jetsons” episode and more like an inevitability (well, except for the bit about Dippin’ Dots), makers of self-driving cars — and the millions of people they hope will buy them — have some ethical questions to ask themselves: Should cars be programmed for utilitarianism when lives are at stake? Who is responsible for the consequences? And above all, are we comfortable with an algorithm making those decisions for us? In a new study, researchers from MIT, the University of Oregon and the Toulouse School of Economics went ahead and got some answers.

These are heady questions, folks, so buckle up.


The authors of the study, which has been pre-released online but is not yet published in a peer-reviewed journal, are psychologists, not philosophers. Rather than seeking the most moral algorithm, they wanted to know what algorithm potential participants in a self-driving world would be most comfortable with.

The University of Michigan teamed up with automakers, tech companies and the Michigan Department of Transportation to create a place to test self-driving cars. (Video: University of Michigan)

Given the potential safety benefits of self-driving cars (a recent report estimated that 21,700 fewer people would die on roads where 90 percent of vehicles were autonomous), the authors write, figuring out how to make consumers comfortable with them is both a commercial necessity and a moral imperative. That means that car makers need to “adopt moral algorithms that align with human moral attitudes.”

So what are those attitudes? The researchers developed a series of surveys based on the age-old “trolley problem” to figure them out. In one hypothetical, participants had to choose between driving into a pedestrian or swerving into a barrier, killing the passenger. Others were given the same hypothetical, but had the potential to save 10 pedestrians. Another survey asked if they’d be more comfortable swerving away from 10 people into a barrier, killing the passenger, or into a single pedestrian, killing that person. Sometimes the participants were asked to imagine themselves as the person in the car; other times, as someone outside it. Everyone was asked “What should a human driver do in this situation?” and then, “What about a self-driving car?”


The results largely supported the idea of autonomous vehicles pre-programmed for utilitarianism (sacrificing one life in favor of many). The respondents were generally comfortable with an algorithm that allowed a car to kill its driver in order to save 10 pedestrians. They even favored laws that enforced this algorithm, even though they didn’t think human drivers should be legally required to sacrifice their own lives in the same situation.

Though the survey participants largely agreed autonomous vehicles should be utilitarian, they didn’t necessarily believe the cars would be programmed that way. More than a third of respondents said they thought manufacturers might make cars that protected the passenger, regardless of the number of lives that might be lost.

They had good reason to feel that way: when asked if they would buy a car that would sacrifice its passenger to save other lives, most people balked. Even though they wanted other people to buy self-driving cars — they make roads safer! they’re better for the environment! they serve the greater good! — they were less willing to buy such cars themselves. At the end of the day, most people know they’d feel uncomfortable buying a car that could kill them if it needed to, and most car makers know that too.

Those responses came from just a few hundred people, and there are still many questions that linger about cars that can make life and death decisions on their own, but “figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” the study’s authors argue. “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”

Plenty of people agree. The past year or so has seen a surge in studies, surveys and think pieces on the kinds of moral calculations we might assign to self-driving cars. For example, should people be able to choose a “morality setting” on their self-driving car before getting in? California Polytechnic ethicist and Robot Ethics editor Patrick Lin, writing in Wired last year, said no: “In an important sense, any injury that results from our ethics setting may be premeditated if it’s foreseen,” he said. “… This premeditation is the difference between manslaughter and murder, a much more serious offense.”

Another big question: Will humans, at some point, be banned from driving altogether? Stanford political scientist Ken Shotts said that could happen.

“There are precedents for it,” he wrote in a Q&A on the university’s website, pointing to home construction as one example. “This used to be something we all did for ourselves with no government oversight 150 years ago. That’s a very immediate thing — it’s your dwelling, your castle. But if you try to build a house in most of the United States nowadays … you can’t do it yourself unless you follow all those rules. We’ve taken that out of individuals’ hands because we viewed there were beneficial consequences of taking it out of individuals’ hands. That may well happen for cars.”

The biggest ethical problem, self-driving car proponents say, would be to keep autonomous vehicles off the road, given the number of traffic deaths the technology is projected to prevent.

“The biggest ethical question is how quickly we move,” Bryant Walker-Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, told the MIT Technology Review in July. “We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”