Beyond the Turing Test lies the far more challenging “Is this chatbot just replying in Styx lyrics?” test.

EXCITING NEWS, kind of!

A chatbot in the UK named “Eugene Goostman” managed to dupe a human judge into thinking it was a 13-year-old boy! (Oh no, we just said it was a chatbot! Now we’ve blown its cover!) That is to say, it may have passed the Turing Test, which asks whether a machine can imitate human intelligence convincingly enough to fool a judge who doesn’t know which participant in a conversation is human.

If you are anything like me, your initial response to all the breathless headlines about the Rise of the Machines was somewhat dismissive. “Pssh,” you said. “I dated a guy for months before realizing he didn’t pass the Turing Test. And not online! After being reared by Scandinavians, I actually thought he was somewhat TOO emotionally attached.”

And if you are skeptical or underwhelmed, you are more right than not.

The chatbot in question was posing as a Ukrainian boy (whose father is a gynecologist and who is the proud owner of a pet gerbil) who could not reasonably be expected to supply answers to, well, much of anything. And even then, Eugene only fooled one in three judges. Which is to say, one judge. There were only three judges. This did pass the threshold of 30 percent required by the contest, though.

The one frontier we’ve truly crossed is realizing that we had it backwards: formerly we required our chatbots to convincingly imitate human intelligence, when what they should have been imitating was human ignorance, posing as people (like 13-year-old Ukrainian boys, or the average celebrity) who could not possibly have good answers to your questions. What sets robots pretending to be people on the Internet apart from people pretending to be people on the Internet (and people pretending to be robots on the Internet, in some role-play circles) is that the robots have better spelling. How it took us so many years to realize that human intelligence is actually comparatively rare, I have no idea, except the usual explanation that more people are idiots than we realized.

Often, in fact, what allowed judges to identify the humans in previous Turing Test-based imitation trials was that they made more spelling errors than the machines.

As Martin Robbins writes at Vice UK:

Researchers in machine-learning often talk about “Strong AI” versus “Weak AI”. Strong AI is what you’d imagine; a sentient machine, general in purpose and knowledge – think Data from Star Trek, or HAL from 2001, or The Machine from Person of Interest. In contrast, Weak AI is more narrow; it has no real intelligence or awareness and relies on fairly specific tricks and techniques to solve a particular problem – think Siri, or predictive texting, or Google’s news clustering algorithms.

Turing devised his test with Strong AI in mind. He believed that sentience and information integration in some sort of “conscious” mind would be necessary for a computer to achieve a meaningful dialogue with a human, and that this mind would need to be connected to some way of experiencing the world – perhaps through a mechanical body: “In the process of trying to imitate an adult human mind, we are bound to think a good deal about the process which has brought it to the state that it is in.”

And the state that the human mind is in is, well, a little embarrassing. That’s why we prefer to interact with Weak AI like Siri. Siri still has better grammar than most of our chat contacts.

Even if the passing score is upheld, this doesn’t, of course, mean that a machine can think. Then again, what human can? Just look at the headlines we’ve used to share the news about ‘Eugene Goostman’ so far. So much for intelligence, artificial or otherwise.
