Consumers might be hesitant to ride in a self-driving vehicle if there’s a chance the software powering the car is programmed to put them at risk to save someone else. That possibility has raised plenty of questions about the ethics of machines.
Chris Urmson, who heads up Google’s self-driving car project, weighed in on the subject Tuesday at the Volpe National Transportation Systems Center in Cambridge, Mass.
“It’s a fun problem for philosophers to think about, but in real time, humans don’t do that,” Urmson said. “There’s some kind of reaction that happens. It may be the one that they look back on and say I was proud of, or it may just be what happened in the moment.”
Urmson stressed that Google’s cars don’t know what person might be walking on a sidewalk or ambling in a crosswalk. The car won’t be able to decide which pedestrian makes the most sense to strike in the event of an unavoidable collision.
“It’s not possible to make a moral judgment of the worth of one individual person versus another — convict versus nun,” he said. “When we think about the problem, we try to cast it in a frame that we can actually do something with.”
Urmson added that the system is engineered to work hardest to avoid vulnerable road users (think pedestrians and cyclists), then other vehicles on the road, and lastly things that don’t move.
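Google has not published its planning software, but the ranking Urmson describes amounts to a simple priority ordering over obstacle classes. The sketch below is purely illustrative — the category names, weights, and function are assumptions, not Google’s actual system.

```python
# Hypothetical sketch of the avoidance priority Urmson describes:
# vulnerable road users first, then vehicles, then static objects.
# All names and weights here are illustrative assumptions.

AVOIDANCE_PRIORITY = {
    "vulnerable_road_user": 0,  # pedestrians, cyclists: avoided hardest
    "vehicle": 1,               # other cars and trucks on the road
    "static_object": 2,         # things that don't move
}

def rank_obstacles(obstacles):
    """Order detected obstacles so a planner would budget the most
    avoidance effort to the lowest-numbered (highest-priority) class."""
    return sorted(obstacles, key=lambda o: AVOIDANCE_PRIORITY[o["category"]])

detected = [
    {"id": "cone-7", "category": "static_object"},
    {"id": "sedan-2", "category": "vehicle"},
    {"id": "cyclist-1", "category": "vulnerable_road_user"},
]
ranked = rank_obstacles(detected)
# ranked[0] is the cyclist, ranked[-1] is the cone
```

In this framing, the car never weighs one person’s worth against another’s; it only ranks broad categories of road users, which matches the point Urmson makes above.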
On Tuesday, Google also released its monthly report on its self-driving car project. In November, Google began testing five more prototypes on public roads, for a total of 53 vehicles. The share of miles driven in autonomous mode rose to 75.43 percent, an all-time high since Google began sharing the figure in May.
The monthly report also provides an update on any crashes Google’s vehicles have been in. One of Google’s test vehicles was rear-ended at low speed Nov. 2 while making a right turn in Mountain View, Calif. No one was injured. The test vehicles have now been in 17 crashes and driven more than 2.2 million miles, 58 percent of which have been in autonomous mode.