
In Chess Battle, Only the Human Has His Wits About Him

By Joel Achenbach
Washington Post Staff Writer
Saturday, May 10, 1997; Page C01

The greatest chess player the world has ever known is struggling to defeat a machine. It's another wonderful opportunity for the human race, as a species, to engage in collective self-loathing.

When astronomers discovered the true vastness of the universe it became intellectually fashionable to ridicule our planet as an unimportant and infinitesimal speck of schmutz, upon which our own eye-blink existence was unworthy of mention in the glorious narrative of the cosmos.

Furrow-browed Darwinists are equally emphatic in their insistence that humans are not superior to other creatures, that our brain-to-body ratio is not the end result of a progressive evolutionary trend, but rather one of billions of freakish mutations that have allowed different species to adapt to, or thrive in, disparate environmental niches -- humans no more special in that regard than cockroaches.

Now comes IBM's chess-playing computer, Deep Blue, to inspire fear and groveling among people who otherwise would be described as highly intelligent. We are to believe that only Garry Kasparov, the brilliant Russian grandmaster, can save humanity from second-class cognitive citizenship.

The Guardian newspaper of Great Britain said Kasparov's job was to "defend humankind from the inexorable advance of artificial intelligence." Kasparov himself referred to his match last year with an earlier version of Deep Blue as "species-defining." Newsweek's May 5 cover story on the match set new records of portentousness with the headline "The Brain's Last Stand." The magazine declared, "How well Kasparov does in outwitting IBM's monster might be an early indication of how well our species might maintain its identity, let alone its superiority, in the years and centuries to come."

With more mushy-brained thinking like that, the human race doesn't stand a chance.

The truth of the matter is that Deep Blue isn't so smart. It does not for a moment function in the manner of a human brain. It is just a brute-force computational device. Deep Blue is unaware that it is playing the game of chess. It is unconscious, unaware, literally thoughtless. It is not even stupid.

"It's just like an adding machine. Or a pocket calculator," says John Searle, a philosopher who studies consciousness at the University of California at Berkeley. No one, he says, thinks of an adding machine as intelligent or conscious or as a thinking device.

"It's just a hunk of junk, it's just a device that manipulates symbols. Everyone thinks this has deep significance. I don't think it does. It's a nice programming achievement."

IBM can be proud of its accomplishment. For decades, the artificial intelligence (AI) community has dreamed of designing a machine that can beat the best human chess player. Many skeptics said it could never be done. Kasparov may not be defending the dignity of the species but he does provide an excellent benchmark for the progress of supercomputing technology.

This new version of Deep Blue -- a 1.4-ton RS/6000 SP supercomputer -- is clearly superior to the one that Kasparov decisively beat in February 1996. Kasparov, to the world's dismay, lost the first game of that match, saying afterward that unlike a human opponent, Deep Blue failed to become rattled when its king was under attack. But Kasparov quickly decoded the flaws and weaknesses of Deep Blue, and the machine never won another game. The new Deep Blue, however, can perform at least twice as many calculations per second and, carefully tutored by a grandmaster, has been instilled with more chess knowledge.

Kasparov won the first game of this rematch last Saturday, but then stunningly lost the second on Sunday when, in the estimation of chess experts, he was "psyched out" by the machine's virtuosity. Kasparov resigned unnecessarily -- there was an obvious route by which he could have forced a draw and maintained his lead.

Kasparov's failings were the mark of his humanity. He was unnerved and mentally exhausted by the skills of his silicon opponent. In the third game, with the advantage of the white pieces and the first move of the game, he seemed to have the computer beat, but Deep Blue countered tirelessly and forced a draw.

The fourth game Wednesday ended in another draw even though Kasparov appeared for a while to have the advantage. Now the match is tied 2-2 (draws are worth half a point), with two games to play, today and tomorrow. Experts wonder if Kasparov is too drained already to win the match. After Wednesday's draw he said: "I didn't manage well. I was very tired, and I couldn't figure it out."

So this is clear: Kasparov is not a machine. Deep Blue can't get tired, strung out, harried, nervous or zapped. The flip side is that Deep Blue won't be able to celebrate if it wins the match. It feels about this match as a thermometer feels about the weather.

Deep Blue manipulates 0s and 1s. It can analyze 200 million positions per second, compared with something like two a second for the average grandmaster. But the genius of someone like Kasparov is that he doesn't have to calculate all the possible permutations of a given game of chess. He knows what to ignore. He can draw on all his experience and intuit the best avenues of attack or defense.
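
For the curious, here is a rough sketch, in the Python programming language, of the brute-force game-tree search that this style of machine play relies on. The toy Board class and its made-up evaluation are invented stand-ins for illustration; they are not IBM's code, and a real engine layers countless refinements on top of the bare idea.

# A minimal sketch of exhaustive game-tree search, the general idea behind
# brute-force chess programs (not IBM's actual code). The Board class is a toy
# stand-in whose evaluation is made up, purely to make the search concrete.

class Board:
    def __init__(self, depth_left=3, score=0.0):
        self.depth_left = depth_left
        self.score = score

    def legal_moves(self):
        # Pretend every position offers five moves until the game bottoms out.
        return range(5) if self.depth_left > 0 else []

    def make_move(self, move):
        # A made-up successor position; a real engine would update the pieces.
        return Board(self.depth_left - 1, self.score + (move - 2) * 0.1)

    def evaluate(self):
        # Stand-in static score (material, king safety, etc. in a real engine).
        return self.score

def minimax(board, depth, maximizing=True):
    """Score a position by searching every continuation to a fixed depth."""
    moves = list(board.legal_moves())
    if depth == 0 or not moves:
        return board.evaluate()
    scores = [minimax(board.make_move(m), depth - 1, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(Board(), depth=3))

# Chess averages roughly 35 legal moves per position, so searching d half-moves
# ahead touches on the order of 35**d positions: the explosion that raw speed,
# rather than insight, has to absorb.
for d in range(1, 7):
    print(d, "plies:", 35 ** d, "positions")

Even at this cartoon scale, the arithmetic makes the point: each extra half-move of lookahead multiplies the work roughly 35-fold, which is why Deep Blue needs its 200 million evaluations a second to see deeply at all.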

Moreover, Kasparov not only plays chess, he also knows he's playing chess, and knows he is playing a machine, whereas Deep Blue neither knows it is a machine nor knows that Kasparov is a human. Kasparov can create a model in his head of Deep Blue's "personality" -- he can figure out the machine's bad habits. Then he can adapt. Machines aren't nearly as flexible and crafty as humans.

They never learn.

"For those of us who work in pattern recognition, machine learning or various fields allied with artificial intelligence, it is the weaknesses of Deep Blue that are the most interesting," writes computer scientist David G. Stork of Stanford University in an article posted on IBM's Kasparov vs. Deep Blue Web site, www.chess.ibm.com.

"The public should understand one of the central lessons of the last 40 years in AI research: that problems we thought were hard turned out to be fairly easy, and that problems we thought were easy have turned out to be profoundly difficult. Chess is far easier than innumerable tasks performed by an infant, such as understanding a simple story, recognizing objects and their relationships, understanding speech, and so forth. For these and nearly all realistic AI problems, the brute force methods in Deep Blue are hopelessly inadequate," Stork writes.

Humans can still outwit any computer when it comes to recognizing patterns, like familiar faces or voices. Gerald Edelman, author of "Bright Air, Brilliant Fire: On the Matter of the Mind," poses the question of what a hunter would prefer to take on a foray into the woods: an extremely advanced military computer that is easy to use and speaks English, or a dog. The hunter would prefer the dog. "The reason is that the dog has the ability to recognize pattern and novelty," Edelman said.

Deep Blue plays chess better than a dog, but only because human beings have carefully programmed Deep Blue to play chess. Left on its own, Deep Blue wouldn't even know to come in out of the rain, much less how to track a fox.

Hubert Dreyfus, a philosopher at Berkeley and author of the 1971 book "What Computers Can't Do" (an updated version is called "What Computers Still Can't Do"), argues that the old-fashioned, classical version of artificial intelligence never panned out. Computers can't become truly intelligent simply through advances in processing speed. You can't just fill up a machine with facts and declare it smart. What computers lack is what we call common sense -- the realization, for example, that it is easier to take a step forward than back, or that big things are harder to pick up than little things. A human learns all this from infancy, through trial and error, and it is not "knowledge" so much as a basic understanding of the world around us.

"A computer would have to be told explicitly all the stuff that we understand just because of the kind of beings that we are," says Dreyfus. "They don't have even the intelligence of a 3-year-old."

No one really knows how the brain works. Hardly anyone is a "dualist" anymore, arguing that the mind is independent of the brain. Instead, most neuroscientists are "materialists," believing that everything we associate with the mind, including our most powerful emotions, is simply the product of the functioning of neurons. Within that framework, though, there remains an enormous Romper Room in which scientists furiously debate how the brain operates. There are reductionists and anti-reductionists, pragmatists and mysterians. There are a few mavericks who argue for panpsychism, the theory that all matter contains some element of consciousness (which might mean a thermometer is not entirely unconcerned about the weather).

Everyone agrees that there is no one part of the brain that is conscious or intelligent. The brain is a raucous, untempered environment with a million things happening at once, consciousness emerging from the mix in the same way that wetness is an emergent property of a whole bunch of water molecules linked together.

An artificial brain -- a truly smart version of Deep Blue -- may be intrinsically impossible to build. The very question that everyone asks -- "Can we build a machine that thinks?" -- hints at the obstacle to such an achievement. A human brain builds itself.

A human brain may follow certain genetic blueprints, but it fundamentally is a self-designed, self-constructing system that interacts with its environment and rebuilds itself over and over in the first years of a person's life. For example, children with too little stimulation do not develop the mental wiring that they otherwise would.

The challenge for AI researchers is to build an environment from which a thinking machine can pull itself together, and become an evolving, learning, adaptive entity.

"We're pretty far from that right now. We're dealing with the A's, B's and C's of how the sensory information is organized in the brain," says Terrence Sejnowski, a computational neuroscientist at the Salk Institute in La Jolla, Calif.

Researchers now talk about designing "neural networks" rather than number-crunching supercomputers. A neural network is leaner and meaner. It is designed to recognize patterns, and figure out which pattern is good and which is bad -- the same kind of process a child goes through in learning how to walk, talk and interact with the world.
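
As a taste of what that kind of learning looks like, here is a small sketch, again in Python, of a single artificial neuron (a perceptron) sorting made-up patterns into good and bad by trial and error. The numbers and labels are invented for illustration; real neural networks wire together many thousands of such units.

# A minimal sketch of trial-and-error pattern learning: one artificial neuron
# (a perceptron) adjusts its weights whenever it misclassifies a pattern.
# The two-feature "patterns" and their good/bad labels are invented here.

patterns = [((0.9, 0.8), 1), ((0.8, 0.9), 1),   # patterns labeled "good"
            ((0.1, 0.2), 0), ((0.2, 0.1), 0)]   # patterns labeled "bad"

weights = [0.0, 0.0]      # start out knowing nothing
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs; fire (1) if it crosses the threshold.
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

for _ in range(20):                       # repeated exposure, like practice
    for x, target in patterns:
        error = target - predict(x)       # a wrong guess yields a nonzero error
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in patterns])  # prints [1, 1, 0, 0] once it has learned

No rule for "good" is ever spelled out for the machine; it backs into one by being corrected, which is the point of the comparison to a child learning by trial and error.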

Far away though such a time may be, we might ask ourselves what civilization would be like if machines could really think. How would machines regard human beings? Would machines try to conquer the world? Would humans find themselves enslaved by the technology to which they had bequeathed consciousness?

One possibility is that the machines, in seeking world domination, would learn to be sneaky, just like humans. They'd learn to hide their true intentions. They might even write newspaper stories under human pseudonyms.

You know what the stories would say: Relax, don't worry, machines can't think.

© Copyright 1997 The Washington Post
