Imagine a person conversing with two hidden entities, one a human being, the other a computer. If the person is unable to decide which is which, we shall be forced to admit that the machine has achieved human intelligence. That was the so-called Turing test, proposed by British mathematician Alan Turing as long ago as 1950.
He predicted that by the year 2000, computers would speak fluently enough to deceive an "average interrogator" at least 30 percent of the time after about five minutes of dialogue. This cautious prophecy may well come true. But will computers ever advance to a state at which their conversation, over a long period of time, will deceive even intelligent interrogators?
Today that question sharply divides mathematicians and Artificial Intelligence researchers. Many AIers believe that as computers grow in complexity and power it is only a matter of time until they become aware of their existence, with an intelligence that may even surpass ours. Mathematicians consider such predictions hogwash. To them the computer is no more than a tool for juggling numbers so rapidly that it is no longer necessary to waste hours making large calculations by hand; it is only speed, accuracy and flexibility that distinguish a brainless computer from a brainless abacus.
Grandmaster Garry Kasparov's recent defeat by the supercomputer Deep Blue has reawakened this often bitter controversy. My sympathies are with the mathematicians. Deep Blue's victory was a trivial event, long expected, that has added nothing of significance to the debate.
Exactly what do computers do? They are mindless machines designed to manipulate binary digits -- ones and zeros -- modeled by electrical impulses switched here and there along wires. The simplest example of such a device is the abacus. Ones are modeled by beads, zeros by empty spaces along rods. Switches are provided by fingers that slide the beads according to algorithms -- procedures that give instructions to the fingers. Muscles of the hand and arm furnish the energy. Of course, the power of an abacus is severely limited by the small number of rods and beads and by the long time it takes to operate the device.
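The mindless digit-shuffling described above can be sketched in a few lines of Python: addition carried out purely by moving ones and zeros around, with no arithmetic "understanding" anywhere in sight. (This is a minimal illustration of the idea, not the design of any particular machine.)

```python
def add(a, b):
    # Add two non-negative integers using only bit operations --
    # the electronic equivalent of sliding abacus beads by rule.
    while b:
        carry = a & b    # positions where both bits are 1
        a = a ^ b        # sum of the bits, ignoring carries
        b = carry << 1   # carries slide one place to the left
    return a
```

The loop neither knows nor cares that it is "adding"; it simply moves ones and zeros according to a fixed procedure, which is all the abacus, the cogwheel calculator and the supercomputer ever do.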
Mechanical computers are more efficient. They can be made with cogwheels, with jets of water flowing through a network of tubes, with levers and pulleys, with balls rolling down inclines -- indeed with almost anything that can be manipulated by energy. I have a cardboard device, printed years ago as an advertising premium, that plays unbeatable tick-tack-toe by using a sliding strip and a rotating disk. Not long ago a group of clever computer hackers built a tick-tack-toe machine with Tinkertoys. In principle, a Tinkertoy machine can do everything a supercomputer can do -- provided it is large enough and given enough time.
Supercomputers differ from mechanical calculators in only one fundamental way: by using electricity and tiny silicon switches to move ones and zeros through wire networks, they gain incredible speed. If you call what they do "thinking," you might just as well say that the beads of an abacus are thinking while they add numbers.
A supercomputer's awesome speed enables it to answer mathematical questions no one could answer by hand. No human mind could have calculated, as computers have easily done, pi to millions of decimal digits. The fact that computer programs can play grandmaster chess is no more surprising than their ability to multiply gigantic numbers faster than any human lightning calculator. Deep Blue defeated Kasparov in a totally mindless way. It no more knew it was playing chess than a vacuum cleaner knows it is cleaning a rug. It cares not a whit whether it wins or loses.
Human chess players examine a few future moves at a rate of several per second, using experience and intuition to avoid considering irrelevant moves. Deep Blue examines all possible future positions for 10 or more moves ahead at a rate of 200 million positions a second. It is this fantastic speed, combined with "selectivity rules" for rating positions, that gives Deep Blue its enormous brute-force power. And it wins games. As is often pointed out, airplanes fly faster than birds but without flapping their wings.
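The brute-force idea -- look ahead exhaustively, rate the resulting positions, pick the best move -- is easy to sketch. Here is a toy minimax search applied to a simple stick-taking game (take one, two or three sticks; whoever takes the last stick wins). This is a generic sketch of game-tree search, not Deep Blue's actual program; the game and the rating rule are placeholders.

```python
def minimax(sticks, maximizing, depth):
    # Exhaustive look-ahead: try every legal move, score each
    # resulting position, and assume both sides play their best.
    # Deep Blue does the same in spirit, 200 million positions a second.
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # rating rule for unresolved positions: call it even
    scores = []
    for take in (1, 2, 3):
        if take <= sticks:
            scores.append(minimax(sticks - take, not maximizing, depth - 1))
    return max(scores) if maximizing else min(scores)
```

With three sticks the player to move simply takes them all and wins; with four, every move hands the opponent a won position. The search discovers both facts without the slightest idea that it is playing a game.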
For years, chess programs have defeated grandmasters when moves must be made rapidly. And the present checker champion of the world is a computer program. Checkers is so much simpler than chess that in a decade or two the game may be solved -- a program will play a perfect game.
Chess is far from being solved. But computers have passed a Turing chess test. A grandmaster cannot know whether his hidden opponent is another grandmaster or a computer program. But that achievement is a far cry from the complexity of human intelligence. "Complexity" has become a buzzword, precisely defined in computer science. Philosophers have broadened the term to apply to the evolution of the universe after it exploded into existence. Although the universe as a whole is increasing in entropy (disorder), there are regions here and there where disorder gives way to beautiful order in the emergence of ever more complex systems. The formation of galaxies, stars and planets is a striking example. On at least one planet, life has emerged and evolved in the direction of ever-increasing complexity, culminating in the brains of such bizarre creatures as you and me.
Now the notion that, as complexity increases, astonishing new properties emerge is as old as the ancient Greek thinkers. Wondrous properties appeared when atoms formed from quarks and electrons. Even more amazing properties emerged when atoms joined to make molecules. Hydrogen and oxygen are simple elements with simple properties. Put them together and you get water, a substance with remarkable attributes unlike those of either element, properties that may be absolutely essential for the emergence of life.
So will computers, of the sort we know how to build, ever rival the complexity of human intelligence? In 1988, Hans Moravec, who heads a robotics laboratory at Carnegie Mellon University, wrote a book titled "Mind Children: The Future of Robot and Human Intelligence" in which he predicted that computers would be surpassing human minds in less than half a century. Similar views, though with much longer time frames, have been advanced by AI researchers Marvin Minsky, Herbert Simon and Douglas Hofstadter. Philosopher Daniel Dennett is not in the least mystified by consciousness, believing he explained it -- and that computers will one day have it -- in his bestselling 1991 book "Consciousness Explained."
Moravec, physicist Frank Tipler and a few others actually believe that computers will eventually render the human race obsolete. They will become our "mind children," destined to take over the task of colonizing the cosmos as humanity goes the way of the dinosaurs.
Moravec is the strongest of what are called "strong AIers." He writes, "Today our machines are still simple creatures . . . . But within the century they will mature into entities as complex as ourselves, and eventually into something transcending everything we know."
The words have a familiar ring. Here is Samuel Butler writing in his 1872 fantasy "Erewhon": "There is no security against the ultimate development of mechanical consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing . . . . The present machines are to the future as the early Saurians to man."
The difference between Butler's remarks and similar sentiments by hard AIers is that Butler was writing satire.
Opposing these wild fantasies is a group of thinkers sometimes called "mysterians," and rightly so, because they believe that our minds remain a profound mystery. Here the mysterians, among whom I count myself, join the mathematicians who work with computers. Among the most outspoken mysterians are three American philosophers, John Searle, Thomas Nagel and Colin McGinn, and the British mathematical physicist Sir Roger Penrose. Penrose's two books, "The Emperor's New Mind" and its sequel "Shadows of the Mind," are the strongest attacks yet on the belief that computers will soon cross a threshold of complexity making them aware of who they are, able to feel pleasure and pain, to create and laugh at jokes, to love and hate, to make moral decisions, to write great poetry, music and novels, to make new scientific discoveries, and to meditate on philosophical and theological questions.
Most mysterians do not believe a "soul" exists apart from the brain. They accept the view that our "self," with its consciousness and free will (two names for the same thing), is a function of a material brain, a computer made of meat, as Minsky likes to say. They contend that our brains are so much more complicated than today's computers that we have only the dimmest comprehension of how they operate.
Neuroscientists are making progress, but as yet they do not even know how memories are stored and retrieved. Penrose simply insists that the abilities of the human mind are not going to emerge from computer complexity as long as computers consist of nothing more than electric currents moving through wires in a manner dictated by software. Until we know more about how our brains do what they do, we will not be able to construct computers that will come close to rivaling human minds. Penrose even contends that no such computers will be built until we know more about laws of physics deeper than quantum mechanics.
Deep Blue's defeat of Kasparov in no way signals the emergence among computers of anything faintly resembling human intelligence. What Deep Blue does is nothing qualitatively different from what an old-fashioned adding machine does. It merely twiddles numbers faster than a mechanical machine. Perhaps some day, if quantum computers are ever made operational, they will be on their way toward something resembling human thought.
I once wrote that Moravec suffers from having read too much science fiction. A prominent science fiction author took me to task for this remark, insisting that no one can read too much science fiction. What I meant, of course, was that Moravec took too uncritically the stories he read about the coming of intelligent robots. It is a long, long distance from the circuitry of Deep Blue to the mind of a mouse.

Martin Gardner is the author of some 60 books dealing with science, philosophy and literature, the latest of which is "The Night is Large" (St. Martin's).