Alan Turing, the computer scientist for whom the Turing test is — logically! — named. (Science Museum, London/SSPL)

Over the weekend, a computer program named Eugene Goostman broke a barrier that few others have touched: “He” successfully convinced several judges, over the course of a five-minute exchange, that he was a human — not a few lines of code.

This standard is called the Turing test, so named for the computer scientist Alan Turing, and since 1950 it’s been the foremost standard by which artificial intelligence is judged.

Unfortunately (or maybe fortunately, depending on your views of machine intelligence), the bot that passed over the weekend was not quite the machine Turing originally imagined. He hypothesized a situation in which a “digital computer” could convincingly imitate a human to a third-party observer, who would ask it questions. He’s pretty specific about the players in the imitation game: A (grown adult) man or woman. A judge. The computer. If the judge can’t reliably tell the program from the people after asking a series of questions, then the program wins the game.

Later competitions and iterations of the test would add standards inferred from Turing’s work: If the computer fooled judges 30 percent of the time over a five-minute exchange, then it could be said to have won. Here’s Turing’s description of the game:

The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman … We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”
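Stated as code rather than prose, the inherited competition convention amounts to a simple threshold check. Here’s a minimal sketch, assuming one verdict per judge; the function name and the sample numbers are hypothetical illustrations, not figures from the Royal Society event:

```python
# Minimal sketch of the pass criterion inferred from Turing's paper:
# a program "passes" if it fools judges in at least 30 percent of
# five-minute text exchanges.

def passes_turing_test(verdicts, threshold=0.30):
    """verdicts: one boolean per judge, True where that judge
    labeled the program 'human' after a five-minute exchange."""
    if not verdicts:
        return False
    return sum(verdicts) / len(verdicts) >= threshold

# A hypothetical run in which 10 of 30 judges are fooled (about 33 percent):
print(passes_turing_test([True] * 10 + [False] * 20))  # True
```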

The event at London’s Royal Society on Saturday played by those rules — but Eugene, strictly speaking, did not. Eugene wasn’t impersonating a generic adult human, as Turing described in his original thought experiment — he was playing a very specific character, a 13-year-old Ukrainian boy with uncertain English skills.

That obviously makes a big difference, in terms of the vocabulary, linguistic complexity and knowledge base the judges would expect … which is precisely why Eugene’s creators modeled him that way. Also, with no offense intended to the 13-year-olds of the world, they’re generally not, as a cohort, the most sophisticated group. Have you read any textual correspondence from a 13-year-old lately? It’s basically indistinguishable from bot-speak.

Case in point: There’s a Twitter bot whose entire act is imitating a teenage girl, and its tweets pass well enough that at least one besotted follower has allegedly fallen for her.

None of this necessarily means that Eugene didn’t pass the test — just that the test, and Eugene himself, may not be quite what Turing had in mind. Incidentally, you can ask Eugene about this yourself, since he lives online. I asked him whether he thought the Turing test was easier than Turing intended. His response:

Who told you such a trash? My thoughts are just opposite! Could you tell me about your job, by the way?

Hmm. Right.