Robots will bring tremendous new services to our lives, but we can’t count on them to be perfect. (Bernd Thissen/EPA)

Ready or not, the robots are coming. There will be cars driving themselves, with no steering wheel for us to grab. Delivery drones will maneuver around patio furniture and vegetable gardens to drop packages in backyards. Delivery carts on sidewalks will hum along, bringing pizza boxes to our front doors.

These systems will operate without any humans directly keeping an eye on them. That raises a huge question that everyone from government regulators to tech companies and safety advocates is wrestling with: If robots are driving us around, delivering our meals, baby-sitting the elderly, replacing doctors in operating rooms and fighting in wars, can we trust them to behave safely?

“This is the most essential question of the whole matter,” said Andre Platzer, who researches car, aircraft and robotic safety at Carnegie Mellon University. “It’s a very difficult question, how you can know for sure that the system itself is actually really safe.”

Robotics is emerging across a wide range of disparate fields, so these innovations are largely falling into a regulatory black hole, challenging the way our society is set up to evaluate safety.

Google has been frustrated with the pace of regulation surrounding self-driving cars in California. (Google)

For example, should driverless cars be regulated by state DMVs or the U.S. Department of Transportation — both of which lack deep knowledge of robotics? What about military robots — should the United Nations develop rules or should each government go its own way? Some are calling for a new government agency in the United States — kind of like a NASA for robots — because traditional agencies lack expertise in the rapidly emerging field.

“The government itself is not acting as a repository of expertise here,” said Ryan Calo, a law professor at the University of Washington and expert on technology policy. “I worry quite a bit that government will over-rely on experts from industry because they don’t have their own internal knowledge.”

[California’s DMV puts the brakes on self-driving cars, for now]

Some states have already started testing autonomous cars on their roads. They see the potential safety benefits these vehicles can bring, and want to attract jobs and businesses. But while officials in these places stress safety as a priority, none have explained in detail how to determine that a robot is safe enough.

California, home to the country’s thriving tech sector, is ahead of most states in addressing these issues. On Dec. 16, the California DMV released draft guidelines for autonomous cars but refrained from addressing fully autonomous vehicles, which don’t require a driver, steering wheel or pedals. It says it will tackle those vehicles down the road. The incomplete rules came more than 11 months after the department’s deadline.

The department is facing an extremely difficult task. It has no history of dealing with the complex algorithms that govern self-driving vehicles. Its experience has been with human drivers, not robots.

[California DMV official speaks candidly about the headache of regulating self-driving cars]

The Robear is strong enough to lift a patient from their bed or wheelchair. (Jiji Press/AFP/Getty Images)

The DMV’s cautious pace has drawn some criticism. Google said Wednesday that it was “gravely disappointed,” and that the aim of its program is to improve safety on roads. Google’s cars are programmed to take a conservative approach if they see an object or situation they can’t categorize. If something crazy is happening ahead, the car will likely stop. On Dec. 17, California Lt. Gov. Gavin Newsom warned that the rules might be too onerous and block innovation.
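Google has not published that logic, but the general idea of a conservative fallback can be sketched in a few lines. The sketch below is purely illustrative; the class name, confidence threshold and distances are invented, not drawn from any real system.

```python
from dataclasses import dataclass

# Hypothetical illustration only -- not Google's code. A perception system
# hands the planner a list of detections; anything it cannot classify with
# confidence triggers a conservative response.

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "cyclist", or "unknown"
    confidence: float   # classifier confidence in [0, 1]
    distance_m: float   # distance ahead of the vehicle, in meters

CONFIDENCE_THRESHOLD = 0.8  # assumed value for illustration

def choose_action(detections: list[Detection]) -> str:
    """Return a driving action, defaulting to caution when unsure."""
    for d in detections:
        unclassified = d.label == "unknown" or d.confidence < CONFIDENCE_THRESHOLD
        if unclassified and d.distance_m < 50:
            # Can't tell what's ahead: slow down, and stop if it's close.
            return "stop" if d.distance_m < 15 else "slow"
    return "proceed"

print(choose_action([Detection("unknown", 0.3, 12.0)]))  # -> "stop"
```

The point of the policy, as described, is simply that uncertainty defaults to caution rather than to speed.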

John Simpson, director of the Consumer Watchdog Privacy Project, said the DMV has acted admirably, given safety concerns over robotic systems.

Meanwhile, the U.S. Department of Transportation, which declined to comment on California’s rules, has focused most of its efforts on a narrow slice of robotic safety. It is addressing communication signals between autonomous vehicles, but not the broader question of determining whether these robot cars will be safe.

Calo, the Washington professor, envisions a government group that specializes in robotics and advises agencies around the country that are struggling with related issues. He describes it as a “NASA for everything,” given the space agency’s track record of attracting exceptionally bright minds.

If such a group existed, what might its tests look like?

Ryan Eustice, who researches self-driving vehicles at the University of Michigan, suggested that a licensing test for such vehicles could rely on a combination of hardware testing, laps on test tracks, runs in a virtual simulator and old-fashioned mileage driven on public roads.

“It’s not just logging and getting miles under your belt,” Eustice said. “Were those hard, difficult miles?”
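Eustice’s point, that raw mileage matters less than how demanding the miles were, could in principle be captured by weighting miles by the conditions they were driven in. The sketch below is a made-up example of that idea; the categories and weights are invented and do not come from any actual licensing scheme.

```python
# Hypothetical sketch of weighting test miles by difficulty.
# The condition categories and weights are invented for illustration.

DIFFICULTY_WEIGHTS = {
    "highway_clear": 1.0,      # easy, well-marked highway driving
    "urban_dense": 4.0,        # pedestrians, cyclists, unprotected turns
    "night_rain": 6.0,         # degraded sensing conditions
    "construction_zone": 8.0,  # unusual lane geometry, human flaggers
}

def weighted_miles(miles_by_condition: dict[str, float]) -> float:
    """Score accumulated miles, crediting difficult conditions more heavily."""
    return sum(DIFFICULTY_WEIGHTS.get(cond, 1.0) * miles
               for cond, miles in miles_by_condition.items())

# Under these assumed weights, 25,000 hard miles out-score 100,000 easy ones.
print(weighted_miles({"highway_clear": 100_000}))                    # 100000.0
print(weighted_miles({"urban_dense": 20_000, "night_rain": 5_000}))  # 110000.0
```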

The university set up a fake town for testing vehicles, built to concentrate the most difficult situations an autonomous car would encounter. But even such a system wouldn’t be perfect.

No one can predict every circumstance a robot will face and test that it handles each one appropriately. There are simply too many rare situations, what experts call edge cases or corner cases.

“It’s the question we’re all struggling with right now,” Eustice said. “It’s really hard to get test coverage on some of the functionality that we’re talking about.”
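One common way to attack that coverage problem is to run a catalog of rare scenarios over and over in simulation and track how often the system behaves safely. The sketch below illustrates the shape of such a test harness; the scenario names, the stand-in simulator and the pass rate are all invented for illustration.

```python
import random

# Hypothetical sketch of scenario-based testing for rare "edge cases."
# Scenario names and pass criteria are invented; a real suite would be
# far larger and tied to an actual vehicle simulator.

EDGE_CASE_SCENARIOS = [
    "child_chases_ball_into_street",
    "ambulance_runs_red_light",
    "mattress_falls_off_truck",
    "faded_lane_markings_in_glare",
]

def run_in_simulator(scenario: str, seed: int) -> bool:
    """Stand-in for a real simulator run; True means the car behaved safely."""
    random.seed(hash((scenario, seed)))
    return random.random() > 0.05  # placeholder 95% pass rate

def coverage_report(trials_per_scenario: int = 100) -> dict[str, float]:
    """Report the simulated pass rate for each rare scenario."""
    return {
        s: sum(run_in_simulator(s, i) for i in range(trials_per_scenario)) / trials_per_scenario
        for s in EDGE_CASE_SCENARIOS
    }

for scenario, rate in coverage_report().items():
    print(f"{scenario}: {rate:.0%} safe outcomes")
```

Even a harness like this only covers the rare cases someone thought to write down, which is exactly the gap Eustice describes.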

Mike Wagner, a Carnegie Mellon robotics researcher, tests the software behind autonomous vehicles. While a believer in the overall improvements autonomous technology will bring, he cautioned that such systems will be imperfect, like the humans who build them.

“Developers are human beings that are being asked to build something extraordinary that’s never been built before,” Wagner said. “Everyone’s getting hacked. Even the smart guys get hacked. The smart guys are the ones that are like, ‘We’re going to get hacked, what should we do about it when it happens?’ ”