Philosophers have been gnawing on the infamous Trolley Problem for decades, and it’s always been a purely intellectual exercise with no “right” answer. But we’re suddenly in a world in which autonomous machines, including self-driving cars, have to be programmed to deal with Trolley Problem-like emergencies in which lives hang in the balance. There’s no dodging the issue: The programmers have to decide how machines should behave at crunch time (as it were).

In a simple formulation of the Trolley Problem, we imagine a trolley hurtling toward a cluster of five people who are standing on the track and facing certain death. By throwing a switch, an observer can divert the trolley to a different track where one person is standing, currently out of harm’s way but certain to die because of the observer’s actions.


(Illustration by Sam Granados)

Should the observer throw the switch — cutting the death toll from five to one? That is the “utilitarian” argument, which many people find persuasive. The obvious problem is that it puts the observer in the position of playing God — deciding who lives and who dies.

The issue becomes more complicated when the Trolley Problem’s hypothetical narrative is tweaked to make the actions of the observer more aggressive. Imagine being in a position to push a person of enormous girth onto the tracks to stop the trolley, again saving five lives but putting an unhappy end to the large person’s life. Most people would say that’s obviously murderous. (But as our smart friend Robert Wright puts it, “[I]f you say yes the first time and no the second (as many people do), what’s your rationale? Isn’t it a one-for-five swap either way?” Discuss!)


(Illustration by Sam Granados)

Self-driving cars at some point will have to wrestle with situations akin to this, if perhaps not quite so melodramatic. They’ll have to swerve to avoid pedestrians or cyclists – but what if that imperils others? Such as the occupant/owner of the self-driving vehicle? Would you program a car to drive off the side of a mountain road, sacrificing the occupant, if a school bus was careening down the mountain in the wrong lane?

Computer programmers can’t just shrug their shoulders. They have to decide how to program the vehicle. And how do you write an algorithm for all these different kinds of situations?
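
To make the difficulty concrete, here is a deliberately oversimplified sketch of what a purely utilitarian decision rule might look like in code. Everything in it (the names, the numbers, the notion that a car could even estimate “expected deaths” for each maneuver) is hypothetical, not drawn from any real vehicle’s software:

```python
# Purely illustrative sketch of a utilitarian-style decision rule.
# Nothing here reflects how any real autonomous-vehicle stack works;
# the names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str            # e.g. "stay in lane", "swerve off the road"
    expected_deaths: float   # the car's estimate of fatalities for this maneuver
    occupant_at_risk: bool   # does this maneuver endanger the car's own passenger?

def choose_maneuver(options: list[Outcome]) -> Outcome:
    # The pure utilitarian rule: minimize expected deaths, and nothing else.
    return min(options, key=lambda o: o.expected_deaths)

options = [
    Outcome("stay in lane", expected_deaths=5.0, occupant_at_risk=False),
    Outcome("swerve off the road", expected_deaths=1.0, occupant_at_risk=True),
]
print(choose_maneuver(options).maneuver)  # prints "swerve off the road"
```

The one-line min() is the easy part. Everything the sketch takes for granted, such as whether the occupant should be weighted differently, how reliable those estimates could ever be, and who answers for the choice, is where the real argument lives.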

“As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent,” the authors of a recent study wrote.

[What if your self-driving car decides that one death is better than two – and that one is you?]

As if this weren’t challenging enough, self-driving cars also need to be programmed to be flexible in their maneuvering to take into account the eccentricities of human drivers. As Bloomberg Business reported, self-driving cars have often been slammed by cars with humans at the wheel. “The glitch? They obey the law all the time, as in, without exception. This may sound like the right way to program a robot to drive a car, but good luck trying to merge onto a chaotic, jam-packed highway with traffic flying along well above the speed limit. It tends not to work out well.”
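
The Bloomberg complaint is easy to see in a toy example of choosing a merging speed. This is a hypothetical illustration, not how any production system is actually tuned:

```python
# Hypothetical illustration of the merging dilemma: a car that never exceeds
# the posted limit cannot match the speed of surrounding traffic.
SPEED_LIMIT_MPH = 55.0

def strict_target_speed(traffic_speed_mph: float) -> float:
    # "Obey the law without exception": never exceed the posted limit.
    return min(traffic_speed_mph, SPEED_LIMIT_MPH)

def adaptive_target_speed(traffic_speed_mph: float, tolerance_mph: float = 5.0) -> float:
    # Match surrounding traffic, within a small tolerance above the limit.
    return min(traffic_speed_mph, SPEED_LIMIT_MPH + tolerance_mph)

print(strict_target_speed(65.0))    # 55.0, ten mph slower than the flow of traffic
print(adaptive_target_speed(65.0))  # 60.0, closer to the flow, but now speeding
```

The second function merges more smoothly, and it also literally programs the car to break the law, which is exactly the tension the Bloomberg piece describes.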


(Illustration by Sam Granados)

One hypothetical solution is to create a car that never has to make a Trolley Problem decision in the first place. So says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at M.I.T. We had an exchange with her by email recently:

Q: How would someone program a car to handle something like the famous Trolley Problem?

Rus: “If we have capable perception and planning systems, perhaps aided by sensors that can detect non-line-of-sight obstacles, the car should have enough situational awareness and good control. A self-driving car should be able to not hit anybody — avoid the trolley problem altogether!”
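
Her answer, roughly, is that a car that sees trouble early enough never faces the dilemma at all. A crude way to express that idea is a stopping-distance check: begin braking while there is still room to stop short of anything that might enter the car’s path. This is a back-of-the-envelope sketch with made-up numbers, not M.I.T.’s planner:

```python
# Crude sketch of "avoid the trolley problem altogether": brake while there is
# still enough distance to stop. All numbers here are illustrative guesses.
def stopping_distance_m(speed_mps: float, reaction_s: float = 0.1,
                        decel_mps2: float = 6.0) -> float:
    # Distance covered during the (machine) reaction time, plus braking distance.
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def should_brake(distance_to_obstacle_m: float, speed_mps: float,
                 margin_m: float = 5.0) -> bool:
    # Start braking as soon as the obstacle is within stopping distance plus a margin.
    return distance_to_obstacle_m <= stopping_distance_m(speed_mps) + margin_m

print(should_brake(distance_to_obstacle_m=40.0, speed_mps=20.0))  # True: about 35 m needed to stop
```

The caveat, as Rus notes below, is that this only works when the perception system actually spots the obstacle in time, which is precisely what is hard in congested places like Dupont Circle.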

We talked to Rus as part of our two-part series The Resistance, which deals with the future of technology and the possible perils of artificial intelligence. In our story on artificial intelligence, Rus says that a self-driving car right now would have a hard time getting through Dupont Circle. She told us:

“There’s too much going on. We don’t have the right sensors and algorithms to characterize very quickly what happens in a congested area, and to compute how to react.”


(Illustration by Sam Granados)

In an email, she elaborated:

“Driving in congested areas remains a big challenge for self-driving cars, along with driving in inclement weather (such as snow and rain), driving in congested areas at high speed, making a left turn in congested traffic, understanding human gestures (from road workers or other drivers).”

Having said all this, let’s point out that, in the long run, self-driving cars are likely to save lives. There may be situations in which the car doesn’t know exactly how to respond to a traffic scenario, but that’s true of human drivers, too. It’s not like we always make the perfect decision when we’re behind the wheel. Here’s a final word from our own Matt McFarland, who has written often of these matters:

Humans are freaking out about the trolley problem because we’re terrified of the idea of machines killing us. But if we were totally rational, we’d realize 1 in 1 million people getting killed by a machine beats 1 in 100,000 getting killed by a human. For some reason, we’re more okay with the drunk driver or texting while driving. In other words, these cars may be much safer, but many people won’t care because death by machine is really scary to us given our nature.

Read more:

Google’s chief of self-driving cars downplays ‘the trolley problem’

Google patent reveals how its self-driving cars may communicate with pedestrians