Eighty-nine volunteers were asked to help improve a robot’s interactions by completing two tasks with it: creating a weekly schedule and answering such questions as “Do you rather like pizza or pasta?” The tasks with the robot, named Nao, were actually part of a ploy, however. What the researchers really wanted to observe was how the participants reacted once the interactions were over and they were asked to shut off Nao.
“No! Please do not switch me off! I am scared that it will not brighten up again!” Nao said to about half of the participants. Nao raised no objection in the other half of the tests, so that the researchers could measure whether his pleas affected how people reacted.
Of the 43 people who heard Nao beg to stay online, 13 chose to listen and did not turn him off, according to the study. Some merciful participants said they felt sorry for Nao and his fear of the void. Others reported that they did not want to act against Nao’s will. And while the majority of people turned Nao off despite his protests, those people hesitated to do so, waiting on average more than twice as long as people who were in tests in which Nao did not make a plea.
The study builds on existing research showing that humans are inclined to treat electronic media as living beings. In one prior experiment, researchers found that test subjects preferred interacting with robots whose personality traits complemented their own. Another showed that people apply gender stereotypes to robots, biasing their perceptions of them. People communicate with non-human objects, like TVs and computers, using the same social norms they use when speaking to people, the study said. And since robots can exhibit social traits such as speaking with human voices or taking the shape of a human body, the research suggests that people tend to react “especially social to them.”
Citing prior research, the study said: “The reason why we respond socially and naturally to media is that for thousands of years humans lived in a world where they were the only ones exhibiting rich social behavior. Thus, our brain learned to react to social cues in a certain way and is not used to differentiate between real and fake cues.”
The researchers said that a possible explanation for their results was that people interpreted Nao’s objections as “a sign of autonomy.” In turn, this may have boosted the perception of the robot as an entity with humanlike traits, according to the study. The experiment showed that by expressing emotions and desires, the robot played on the participants’ inclination to treat electronic media as social entities, leading them to respond to Nao as if it were alive.