The proliferation of bots equipped with artificial intelligence has humans interacting with machines in more emotionally charged situations than ever before. Consider every time you call your bank, as I did yesterday to report a missing debit card.
"How can we help you? You can say log-in support, account access, bill pay, credit card or more options."

"Sorry, I didn't quite get that."

"I'm sorry, I still didn't get that."
The frustration of losing my debit card combined with the frustration of this stumped, automated voice had me foaming at the mouth. The rational side of my brain understood that the other end of the line was an algorithm and, for some reason, my words just didn’t compute. But in a moment of need, that didn’t matter. I had a problem, dammit, and it was the algorithm’s job to fix it.
The frustrated-customer scenario is a textbook example that researchers cite to explain why robots need to be more attuned to human emotion, as fleeting and fickle as it can be. If our voices grow louder or sound depressed, or our faces look perplexed or angry, robots should be able to detect that and respond accordingly. That notion drives a burgeoning research area within artificial intelligence, often called affective computing.
“We’re trying to get systems to have some of that capability so they can provide a more human-like conversation and really understand the user better. We have been working for a long time on how to understand user state from various signals, including speech,” said William Mark, president of the information and computing sciences division at SRI International.
SRI International may be best known as the research juggernaut that invented Siri, the voice-powered virtual assistant that was spun off into a stand-alone company and sold to Apple in 2010. Ever since, Siri has taken orders (with mixed results) from the millions who use the iPhone and other Apple products.
The Menlo Park, Calif.-based firm has now created a platform called SenSay Analytics, which analyzes the words, tone, volume, pitch and other characteristics of the human voice to better recognize whatever emotion the speaker may be feeling. A robot can then respond accordingly by, say, offering an apology when you're angry, speaking more quickly when you're impatient, or ushering you to a human being when you're really ready to blow.
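The idea of combining acoustic signals with word choice can be sketched in a few lines. This is a deliberately toy illustration, not SenSay Analytics: the feature (root-mean-square loudness), the threshold and the keyword list are all invented for the example.

```python
import math

def rms(samples):
    """Root-mean-square amplitude: a crude proxy for how loud a speaker is."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def classify_emotion(samples, transcript, loud_threshold=0.6):
    """Toy classifier combining a volume cue with a word-choice cue.

    The threshold and keyword list are illustrative assumptions; a real
    system would learn these cues from labeled speech data.
    """
    angry_words = {"ridiculous", "unacceptable", "dammit"}
    loud = rms(samples) > loud_threshold
    heated = any(w.lower().strip(".,!?") in angry_words
                 for w in transcript.split())
    if loud and heated:
        return "angry"
    if loud or heated:
        return "frustrated"
    return "neutral"
```

A quiet "thank you very much" would come back "neutral," while shouting "This is ridiculous!" would trip both cues; the routing logic (apologize, speed up, escalate to a human) would then branch on the returned label.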
“What we’re capable of now is really to provide real value in lots of real-life situations. That’s not to say that it has the emotional understanding and other forms of human understanding that people do, but it’s certainly at a state where useful things can be done,” Mark said.
SRI International is one of many companies looking to advance artificial intelligence and robotics in an effort to better meet human needs. Just this week, some of the biggest names in tech formed the Partnership on Artificial Intelligence to Benefit People and Society. Google, Microsoft, Facebook, IBM and Amazon say the alliance is designed to conduct research and share information on how to move the technology forward without compromising ethics or transparency. SRI and, more notably, Apple are not members.
“A big part of the intelligence that humans have when they communicate with each other [comes from] listening to a lot of different cues,” said Elizabeth Shriberg, a principal scientist in the speech technology and research laboratory at SRI. “People expect listeners to pick up on those cues.”
But even humans often miss or simply ignore those cues, as anyone with a teenage child, husband or wife, boss or mother-in-law can certainly attest. If humans often do a poor job of reading one another’s emotions, can we expect a bot to do much better?
The answer is debatable and ultimately may not matter. Shriberg said the current “gold standard” is not whether bots are more effective than humans, but whether they are as effective as humans. Accuracy tests often involve several people and a bot communicating with the same human subject, and then require each to assess the subject’s emotional state. If the bot’s perception matches that of most of the humans, that would be considered a success whether it’s right or not.
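The evaluation Shriberg describes reduces to a simple agreement metric: the bot scores a hit whenever its label matches the majority of the human annotators, regardless of the subject's true feeling. A minimal sketch, with function names and label strings invented for illustration:

```python
from collections import Counter

def majority_label(human_labels):
    """The emotion label chosen by the most human annotators
    (ties broken arbitrarily by Counter ordering)."""
    return Counter(human_labels).most_common(1)[0][0]

def bot_agreement(trials):
    """Fraction of trials where the bot matched the human majority.

    Each trial is (bot_label, list_of_human_labels). Per the article's
    'gold standard,' matching the majority counts as a success whether
    or not the majority itself read the subject correctly.
    """
    hits = sum(1 for bot, humans in trials
               if bot == majority_label(humans))
    return hits / len(trials)
```

If the bot calls a subject "angry" when two of three annotators did, that trial counts as a success even if the subject was merely tired; the benchmark measures agreement with humans, not ground truth.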
In short, at times bots will inevitably get it wrong.
“A machine is not going to magically tell us how we feel. We don’t even know how we feel all of the time,” Shriberg said. “But there is a lot of low-hanging fruit where it’s really clear that someone is angry … or someone is depressed.”
As with other forms of artificial intelligence, robots can become smarter about a person’s emotions over time, Mark said. That means your smartphone, computer or another piece of machinery you interact with regularly may come to understand when you’re having an off day, if you’re becoming confused more often, or if you seem more down than usual. As the technology becomes more precise, it’s not hard to imagine possible applications in health and other areas.
“It’s very helpful to have history or experience with an individual. The kinds of things the person wants, the kinds of responses they like,” Mark said. “If you’re using some piece of technology, say a virtual assistant, then it’s going to get a lot better at understanding you.”