Human Responses to Technology Scrutinized
Emotional Interactions Draw Interest of Psychologists and Marketers
By Shankar Vedantam
Washington Post Staff Writer
Monday, June 7, 2004; Page A14
Not long ago, a British poll found that three-quarters of people have hit their computers in frustration.
A German carmaker recalled an automobile with a computerized female voice issuing navigation information -- because many men refused to take directions from "a woman."
A study found that people try to be nice to their own computers: They are more likely to report problems with the machine when asked about it while working on a different computer.
Psychologists, marketers and computer scientists are coming to realize that people respond to technology in intensely emotional ways. At a conscious level, people know their computers and cars are inanimate, but some part of the human brain seems to respond to machines as if they were human.
"The way people interact with technology is the way they interact with each other," said Rosalind Picard, director of Affective Computing Research at the MIT Media Lab, during a recent lecture in Washington organized by the American Association for the Advancement of Science.
The tech world is slowly catching up to this insight as well. From automated voice systems that greet callers by saying, "Hi, this is Amtrak. I'm Julie!" to sophisticated programs that can register human emotions, applications of "affective computing" are growing rapidly.
Marketers see a gold mine in this research, which holds the promise of increasing sales in the same way that cheerful, helpful salespeople at a store are more likely to sell merchandise than surly clerks are.
At the same time, the work raises troubling ethical questions. They range from whether it is deceitful to encourage people to interact with technology as if it were human to deeper concerns about what it would mean if computers could really form emotional "relationships" with people.
Today, such concerns seem remote, because most technologies are almost deliberately antisocial -- computers do not respond to emotional cues such as frustration, anger or anything else -- and regularly act "inappropriately." (What person, other than one of Arnold Schwarzenegger's movie characters, would ever say, "You have performed an illegal operation"?)
In one familiar example, cited by Picard: You're on deadline. A character barges in when you are very busy. It offers useless advice and does not notice when you get annoyed. It is impervious to hints. You explicitly tell the character to go away and, in response, it winks and dances a jig.
Picard flashed a slide of the ubiquitous Microsoft Office Assistant, the paperclip icon with the sly smile -- an example of a program oblivious to a computer user's emotions. Picard's research has shown that as annoyance with a computer grows, people grip the mouse more tightly and tense up in their chairs. Other studies have found that large numbers of people have kicked their computers or hurled abuse at them.
Scientists are responding in two ways to demands for "emotionally intelligent" computing. The first involves designing ways for a computer to read a person's emotions. Special sensors on seats can deduce from a person's posture whether she is interested or bored. Other sensors measure heart rate to tell when someone is stressed; a camera can determine whether a brow is furrowed. Through complex computer processing, explained Karen Liu, a graduate student in Picard's lab, these readings are interpreted as signs of confusion or frustration.
"In a way," Liu said, "we are giving machines eyes and ears."
Other software can then respond appropriately. At the MIT Media Lab, which studies how electronic information overlaps with the everyday world, robots are being programmed to help people recognize when they are stressed and to remind them to relax and avoid repetitive-strain injuries.
© 2004 The Washington Post Company