Call it a close encounter of the third gear.
For the past six years, tech companies — led by Google — have been testing self-driving cars. As the robotic prototypes have moved from private tracks onto public roads, the projects have raised hopes of safer transportation. After all, human error causes nearly all driving accidents.
But a near collision between two self-driving cars is now raising concerns over the technology, Reuters news service reported. On Tuesday, Reuters said, two driverless prototypes, one operated by Google and the other by Delphi Automotive, nearly collided in Palo Alto, California.
But Google subsequently denied the report, and Delphi sent a statement saying the story was taken out of context.
John Absmeier, director of Delphi’s Silicon Valley lab, initially told Reuters that he was a passenger in his company’s self-driving Audi Q5 when, driving along San Antonio Road, it was suddenly cut off by a Google-operated Lexus SUV.
But Delphi subsequently sent out this statement:
“The story was taken completely out of context when describing a type of complex driving scenario that can occur in the real world. Our expert provided an example of a lane change scenario that our car recently experienced which, coincidentally, was with one of the Google cars also on the road at that time. It wasn’t a ‘near miss’ as described in the Reuters story.”
The pair of fully automated autos did not collide.
Both cars were equipped with similar technology — lasers, radar, cameras and computer systems — enabling them to drive on their own without the need for human drivers. Both did, however, have people behind the wheel in case of an emergency.
The incident is believed to be the first of its kind, according to Reuters. It came on the same day that Google announced its latest model of self-driving car was already hitting the streets of Silicon Valley.
The new Google cars look like miniature Mini Coopers: tiny, pod-like two-seaters that have been approved to drive up to 25 miles per hour on the roads around the company’s Mountain View headquarters. Previously, the cars were confined to a private track on a former Air Force base, according to the Associated Press.
Like the Lexus allegedly involved in the near collision on Tuesday, the new Google cars still have steering wheels, brakes and human drivers just in case.
It’s just a matter of time, however, until Google or Delphi is approved to go completely (human) hands-free.
Many people find that terrifying.
According to a new study by the European Commission, “six out of ten respondents (61%) say that they would feel uncomfortable travelling in an autonomous or driverless car. Slightly more than a third (35%) would feel comfortable or fairly comfortable.”
The study suggests that the public has broader concerns about the rapidly advancing field of robotics.
Despite the obvious benefits of driverless cars and airplanes — namely safety and reduced emissions — “people have understandable concerns about the rapid pace of technological change, and about the role which robots could play in our future society,” the study found.
Thirty-six percent of those polled were uncomfortable with the idea of using a robot in school as a means for education, compared to 41 percent in favor. A majority (51 percent) of people expressed discomfort with robots providing services and companionship to elderly or infirm people. And a full 55 percent of respondents felt uncomfortable having a robot perform a medical operation on them.
Self-driving cars have become something of a bellwether for human attitudes toward robot technology.
Perhaps it’s because sci-fi movies have long promised robotic (and often hovering) cars. Or perhaps it’s simply because self-driving cars have the potential to upend everyday life in the very near future.
Bill Gates recently said self-driving cars were “the real Rubicon” in technology, and that companies like Uber were primed to take the lead.
Others have speculated that self-driving cars could completely kill the $200 billion car insurance industry.
Elon Musk, who has said his Tesla trucks could be ready to ditch their drivers as soon as this summer, recently mused that driving one’s own car will eventually be illegal.
“The leading automakers are pouring billions into developing autonomous vehicles that use sensors, cameras and high-speed computing power to read and react to traffic, pedestrians, stoplights and infrastructure,” Bloomberg reported earlier this week. “Luxury lines lead: BMW is rolling out a car that can park itself, Cadillac has a model coming that drives hands-free on the highway, while Mercedes and Audi already offer models that can pilot through a traffic jam while only asking its human minder to touch the steering wheel occasionally. Tech giants Google and Apple are also in the race. Boston Consulting Group estimates that robot cars may account for a quarter of global auto sales by 2035. Fully autonomous vehicles may be navigating cities in five to 10 years.”
Humans, however, being human, have freaked out.
When Google revealed last month that its self-driving cars had been involved in about a dozen fender-benders, people were up in arms. But the tech company explained that it was actually human error — usually people rear-ending the Google cars — that caused all the accidents.
Deeper ethical quandaries have popped up, however, like deer in our collective headlights. Chief among them is the concern that your self-driving car could ultimately decide to sacrifice your life in the name of utilitarianism.
“Google’s cars can already handle real-world hazards, such as cars’ suddenly swerving in front of them. But in some situations, a crash is unavoidable,” wrote Matt Windsor in ScienceDaily. “How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?”
While we worry about the philosophical implications of robot cars, however, human drivers keep on killing one another.
Last year, an estimated 32,675 people died in motor vehicle traffic crashes in the U.S., according to the National Highway Traffic Safety Administration.
No robots were behind the wheel.