Always wanted John Legend to make your restaurant reservations for you? Terrifically or terrifyingly, there’s an app for that.
All of which has led Silicon Valley-savvy commentators online to ask the question: What the heck are we getting ourselves into?
The robot future, it’s worth pointing out, isn’t exactly here yet. Duplex can only chat in “closed domains,” which means it has a set purpose and a set script to achieve it. When testing begins this summer, the program will make restaurant reservations, schedule salon appointments and ask about holiday hours.
Still. Google has developed the capability to mimic human speech beyond detectability. And although the company told The Verge after the fact that it believes it has a responsibility to inform individuals that they have a robot on the line, attention to that duty was absent from Wednesday’s demo presentation before an entranced audience. Instead, the ability to deceive seemed more a prized feature than an ethical stumbling block.
Google’s technological leap carries concrete implications whether or not the company (or others who seize on the invention as it improves and spreads) requires its robots to identify themselves as artificially intelligent.
For one thing, there’s the widening disparity between low-wage workers in the service industry who have to interface with machines and the higher-wage workers who continue to reap the benefits of talking human-to-human in the office.
Then there’s the hoaxing that can happen on both ends of the human-to-nonhuman equation. Our instinct is to imagine how an automated actor could scam us, contacting targets in droves to tell them to, say, invest in an exclusive timeshare — or wire money to a certain account if they want their sister’s life spared. But a human could scam a robot, too, gaming its preprogrammed processes to extract information from the trove of knowledge a computer stores about its user.
There’s also the ineffable. Many of us recoil at hearing we soon might speak to robots without realizing it not because we’ve thought through the consequences but because it just feels … icky. Day-to-day chitchat, after all, is one of the things that binds us, and every interaction is built on a foundation of trust that what we’re seeing is what we’re getting. Anything else, and we too feel scammed, even if nothing material has been lost. To begin to doubt what we’ve always taken for granted above all else — that the people moving through the world alongside us are also people — is to lose a core part of the human experience as we know it.
All this hints at a tipping point in our artificial intelligence obsession. Until now, we haven’t worried enough that robots could become human enough to trick us. On the contrary, we’ve tried to trick ourselves: Companies work to create more and more realistic robots, and consumers buy the best they have to offer, from Siri to Alexa. Sometimes, it’s about efficiency. Other times, it’s about the pleasure of interacting with artificial intelligence that doesn’t seem, well, too artificial — as long as we still know who’s the human and who’s the machine.
Our machines so far have been humanlike enough to set us at ease and machinelike enough to spare us from answering the hardest questions about where advanced artificial intelligence fits into our societal codes. Even now, we’re nowhere near some sort of “Westworld” where robots skirt consciousness and where, for the humans who abuse them, the realness is the draw and the fakeness an excuse to behave outside the bounds of morality.
But science fiction looks less fictional every day, and quandaries it seemed we’d never have to confront appear in crowd-pleasing inventions at conferences. All of us, from companies to consumers to legislators, should start paying more attention. Otherwise, the robots might do it for us.