“Think of it as a fully autonomous agent,” Skuler said. “You tell it what your goals are, and it tries to measure how you’re doing on those goals and suggests activities accordingly to help you meet those goals.”
Advancements in artificial intelligence have given rise to in-home virtual assistants, devices that listen and respond as we command them to turn off the lights, purchase items online or order restaurant takeout. Amazon Echo and Google Home, two popular systems, can now be found in millions of homes.
ElliQ (pronounced L-E-Q) represents a new role for these technologies: proactively recommending ways in which humans could be living better lives, from getting more exercise to watching informational videos. Humans may not be taking direct orders from their technology, at least not yet, but it nevertheless suggests an emerging relationship where smart devices wield even greater influence over our decisions.
“If we’re focusing just on virtual assistance, I think so far the interaction has been very much human-initiated,” said William Mark, president of information and computing sciences at SRI International. “I put it that way because if we broaden the perspective, of course there are lots of examples of machines telling us what to do.”
Indeed, machines prod humans all day. Your alarm rings to keep you from sleeping through a morning meeting. Your car beeps when you’ve started the engine but haven’t clipped your seat belt. Your Netflix account suggests movies to watch based on your viewing history.
Virtual assistant robots are different in that they have a broader view of our daily lives and are designed to help us accomplish tasks. They can already learn when we typically wake up and go to sleep, what we watch on television and what we purchase online. As the devices become capable of doing even more, they will store and analyze that information, too.
The key is that we invite those technologies to nag us and that we have control over them. We set the alarm clock ourselves — and have the power to hit snooze.
“We have a whole set of words for talking about this in English: persuade, hint, advocate, encourage,” Mark said. “There’s all kinds of things that have a wide variety of implications and very different feelings that are generated by it.”
ElliQ monitors the user’s movements and learns their patterns to ensure its suggestions are well-timed, Skuler said. The user might prefer to take walks in the morning rather than after lunch or value quiet time in the evening over listening to music.
Currently, ElliQ is programmed with seven goals that the user can choose among, such as learning something new each day, being more physically active or communicating with family more often. The company sets one of the goals for you: developing a “positive affinity” for the robot.
“Meaning we don’t annoy you to the point you unplug us,” Skuler said.
Developing machines that can persuade people to act in a certain way is both a technological and psychological challenge, Mark said. Even humans struggle to know when advice will be well received and deliver it in a way that actually motivates the recipient.
“The system has to hit it just right in terms of giving you the information you need at just the right time without annoying you,” Mark said.
“You want to think that virtual assistance cares about you or has your best interest at heart,” he added.
That may seem like a tall order considering the robot does not, in fact, have a heart. But it's not uncommon for people to develop bonds, irrational as they might seem, with technology and other inanimate objects. It's why we give names to our cars or yell at a malfunctioning computer, for example.
Virtual assistant systems can take many cues from the way humans engage one another, said Justine Cassell, director emerita of the Human-Computer Interaction Institute at Carnegie Mellon University. In her research, Cassell programs robots to replicate common features of human conversations that help people establish trust. For example, the machine might divulge information about itself before asking the human for information — creating a sense of equality and transparency in the process.
The technology got a trial run at a meeting of world leaders in Davos, Switzerland, this year. Attendees had conversations with the system, which then recommended conference sessions they would enjoy or fellow attendees they should meet. In most cases, the attendees accepted the recommendations, Cassell said.
“It’s not empty chitchat,” she said of the conversations between humans and machines. “On the contrary, it greases the wheels of task interaction by making people comfortable, making them trust the system, and making them disclose information that allows the system to do a good job.”
Of course, as virtual assistants gain greater influence, it’s easy to conjure up dystopian scenarios in which technology starts to actually exert authority. It’s one thing for a system to suggest you go for a walk after watching television for hours and another for a system to power off the TV until you’ve complied.
“The machines that we interact with need to be designed to keep sight of allowing people to maintain that very important sense of autonomy, that they are in control of their existence,” Cassell said.