Can machines think?
In 1950, famed British mathematician Alan Turing, considered one of the fathers of artificial intelligence, published a paper that put forth that very question. But as quickly as he asked the question, he called it “absurd.” The idea of thinking was too difficult to define. Instead, he devised a separate way to quantify mechanical “thinking.”
“I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words,” he wrote in the study that some say represented the “beginning” of artificial intelligence. “The new form of the problem can be described in terms of a game which we call the ‘imitation game.’”
What he meant was: Can a computer trick a human into thinking it’s actually a fellow human? That question gave birth to the “Turing Test” 64 years ago.
This weekend, for the first time, a computer passed that test.
“Passing,” however, doesn’t mean it did so with flying colors. To pass, a computer need only dupe 30 percent of the human interrogators who converse with it in a five-minute text conversation; it’s up to the humans to separate the machines from their fellow sentient beings during that inquisition. (Gizmodo has a pretty good breakdown of how the test works.)
This go-round, a Russian-made program, which disguised itself as a 13-year-old boy named Eugene Goostman from Odessa, Ukraine, bamboozled 33 percent of its human questioners. Eugene was one of five programs entered in the 2014 Turing Test.
“We are proud to declare that Alan Turing’s Test was passed for the first time on Saturday,” declared Kevin Warwick, a visiting professor at the University of Reading, which organized the event at the Royal Society in London. “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human.”
There is some cause for concern, however. For starters, convincing one-third of interrogators that you’re a teenager speaking in a second language perhaps skews the test a bit, since halting or odd answers are easier to excuse. Was the computer that smart? Or was it a gimmick?
And then there is the concern that such technology can be used for cybercrime.
“The Test has implications for society today,” Warwick said in a university news release. “Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime. . . . It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true . . . when in fact it is not.”
Indeed, if the optimism of Eugene’s programmers is any guide, we may be headed for a scenario not dissimilar to “Her” — the 2013 film that depicted a complex man falling in love with his artificially intelligent operating system.
“Going forward we plan to make Eugene smarter,” said Vladimir Veselov, one of Eugene’s creators, “and continue working on improving what we refer to as ‘conversation logic.’”