Children were likely to agree with the robots, even if the robots were obviously incorrect. (University of Plymouth)

When the robot revolution arrives, we all know the plot: Smarter machines will supersede human intelligence and outwit us, enslave us and destroy us. But what if it's not artificial intelligence we have to fear, but artificial stupidity? What if it isn't robot overlords that pose the greatest risk but our willingness to trust robots, even when they are clearly wrong?

As huggable social robots tricked out with humanlike facial expressions and personalities have begun to infiltrate our homes, experts are beginning to worry about how these machines will influence human behavior — particularly in children and the elderly. If people turn out to be easily swayed by robots, after all, the coming world filled with robot co-workers, caregivers and friends could hand immense power to marketers, rogue programmers or even just clumsy reasoning by robots.

“There is this phenomenon known as 'automation bias' that we find throughout our studies. People tend to believe these machines know more than they do, have greater awareness than they actually do. They imbue them with all these amazing and fanciful properties,” said Alan Wagner, an aerospace engineer at Pennsylvania State University. “It's a little bit scary.”

A new study published in Science Robotics reveals how easily robots can influence the judgment of children, even when the robots are clearly in error — raising warning flags for parents and anyone thinking about the need for regulation. In the experiment, two groups of children, between 7 and 9 years old, were asked to complete a simple task: choose which two of several lines are the same length. One group did the task alone, and the other did it while seated at a table with three autonomous robots that gazed at the same puzzle, paused and answered the question — incorrectly. The children who faced misleading robot peer pressure performed worse, and three-quarters of their wrong answers matched the robots' bad answers.

“Children are the most vulnerable, but we're all vulnerable,” said Sherry Turkle, a professor at the Massachusetts Institute of Technology, who was not involved in the research. “The conversation we need to have is just how wrongheaded the direction [is that] we are pursuing. I'm really for robots that do good things, but it should not be hard to determine there are areas where robots really can do us some harm. This is not a good idea, to get children used to the idea that robots are experts and companions.”

In a hopeful sign, adults in a parallel set of experiments were able to resist the social pressure from the robots, even though they caved to peer pressure when other adults in the room gave the wrong answer.

“Children are known to suspend disbelief,” said Anna-Lisa Vollmer, a researcher at Bielefeld University in Germany, who led the study. “Rather than seeing a robot as a machine consisting of electronics and plastic, they see a social character. This might explain why they succumb to peer pressure by the robots.”

But that doesn't mean adults aren't susceptible to robot groupthink. Joanna Bryson, a computer scientist at the University of Bath, said she'd like to see the experiment repeated with a set of taller, more adultlike robots to assess whether adults were still able to withstand the social pressure from robots that looked more like peers. She also argued that while adults may not be tricked by the explicit answers of a robot, they might be more influenced by the robots' actions.

“Implicitly, if the robots all started going toward the exit in the theater, a bunch of humans would follow them without thinking about it,” Bryson said.

Wagner's previous research suggests that's true — even if the robots are demonstrably incompetent. In an experiment testing robots in emergency evacuation scenarios, people were guided to a room by a robot that half the time bungled its navigation, getting lost on the way there. As the people completed a survey in the room, smoke filled the hallway outside and a smoke detector went off. The study subjects left the room and had to decide whether to follow the exit sign back the way they entered the building — or follow the robot.

The researchers were surprised to find that the people universally followed the robot, even if it had initially brought them to the wrong room and spun in circles.

In follow-up experiments, researchers went so far as to tell the participants that the robot was broken or program it to behave in ways that appeared to be clearly malfunctioning. Most participants still followed it in an emergency. In one trial, they even had the broken robot suggest that people evacuate by entering a dark room blocked by a piece of furniture, with no visible sign of an exit. Most stuck with the robot instead of following the exit signs to leave the way they entered.

Turkle thinks it will be an uphill battle to quell the human impulse to trust and even identify with social robots, which push what she calls our “Darwinian buttons” by talking and making eye contact with us. Scientists recently found that when a robot begged not to be shut off, people were reluctant to flip the switch.

“We're deeply wired to believe these objects are not only conscious, but they care about us. They're sentient and caring. This is a big advantage roboticists have when they want to create a robot that says it wants to be your friend,” Turkle said. “We're cheap dates — we're ready to go.”
