" 'In the knowledge lies the power' -- that was going to be the mantra for the '80s," Douglas B. Lenat of Stanford University told the throng in the Washington Hilton's International Ballroom yesterday afternoon. "But I like to say that the mantra for the mid-'80s is: 'In the knowledge acquisition lies the bottleneck.' "
Lenat was talking about computers, of course, and about the often frustrating business of trying to teach them how to gain knowledge by a more efficient process than having humans type lessons in on a keyboard. In one way or another, Lenat said, the learning problem looms as the most formidable obstacle in the field called Artificial Intelligence.
But as Lenat's enthusiastic audience--perhaps 1,000 strong--made clear, the obstacles have also been, to many people, incentives. During the third annual--and biggest ever--National Conference on Artificial Intelligence at the Washington Hilton this week, signs of success have been visible everywhere. As of yesterday, more than 1,850 people had paid $140 and up to attend. Companies with names like Symbolics, IntelliGenetics, Digital Equipment, General Electric and Xerox had set up elaborate displays in the exhibit room. And on the conference registration desk, supplies of the official bumper sticker ("Artificial Intelligence--It's for Real") were running perilously low.
Chess players, said Hans Berliner, a former world champion at correspondence chess and a computer scientist at Carnegie-Mellon University, like to talk about the obvious differences between the ways humans and computers handle the game.
"I've heard people make these pronouncements--that 'This must be a computer move,' or 'Only a computer would do that,' " Berliner said, "and I have the feeling that frequently it's a case of 20-20 hindsight."
So, for this week's conference, Berliner decided to construct a test--a series of games in which humans would have to guess the identity of their opponents. And sure enough, of the six humans playing other humans, five thought they were up against computers.
Nevertheless, Berliner said, there are clear differences. "The computers excel in complicated situations--where there are lots of possible captures and open lines, where there can be very dramatic changes very quickly. In a more closed, positional type of game, humans tend to do better."
Artificial intelligence is a hazily defined field with two related subdomains: the effort to design computer software and hardware to perform tasks requiring reasoning and perception; and the study of how reasoning and perception work.
"When I entered this field in 1967, it was a purely academic discipline," recalled Eugene Charniak of Brown University. "If you had told me that in 15 years artificial intelligence would be a profitable field with major industries wanting to invest, I would have said you were nuts!"
Charniak's particular preoccupation is the challenge of teaching computers to use "natural languages."
"While I would hope to see a computer system capable of speaking and understanding complete, unadulterated, no-holds-barred English in my lifetime, I would not be surprised if I didn't," he said over coffee in the downstairs snack bar Tuesday morning.
The problem, Charniak explained, is the difficulty of separating knowledge about language from knowledge about the world at large. Rules of grammar alone will not always tell you, for example, what a pronoun refers to.
He rattled off a brief tale by way of illustration: "Jack went to the supermarket. He found the milk on the shelf. He paid for it and went home."
Normally, said Charniak, "it" would refer to the last-mentioned inanimate object in the sentence--in this case "shelf." So "unless a computer has a great knowledge of the domain you're conversing about, it's not going to be able to handle complicated pronoun problems." Or, he added, to make sensible decisions about such issues as whether the word "bank," in a particular context, means the side of a river or a place where money is stored.
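The recency rule Charniak describes can be sketched in a few lines of Python. This is a toy illustration, not Charniak's system: it resolves "it" to the most recently mentioned inanimate noun, and on his supermarket story it gives exactly the wrong answer, which is his point.

```python
def resolve_pronoun_by_recency(sentences, pronoun_sentence_index, nouns):
    """Naive recency heuristic: return the last inanimate noun
    mentioned before the sentence containing the pronoun."""
    mentioned = []
    for sentence in sentences[:pronoun_sentence_index]:
        for word in sentence.lower().replace(".", "").split():
            if word in nouns:
                mentioned.append(word)
    return mentioned[-1] if mentioned else None

story = [
    "Jack went to the supermarket.",
    "He found the milk on the shelf.",
    "He paid for it and went home.",
]
inanimate_nouns = {"supermarket", "milk", "shelf"}

# The heuristic picks "shelf"; world knowledge (you pay for
# groceries, not shelving) is needed to choose "milk".
print(resolve_pronoun_by_recency(story, 2, inanimate_nouns))  # -> shelf
```

Grammar alone points at "shelf"; only knowledge of supermarkets rules it out.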
In the commercial world, natural-language programmers are finding it easy to sidestep the more profound issues that Charniak and others find so engrossing. In business situations, according to Gary Hendrix, head of research and development for a small Silicon Valley firm called Symantec, it isn't that important for a computer to understand all possible statements in natural language, so long as it shows a certain basic level of receptivity.
"It's not that different from talking to a foreigner or a child," said Hendrix. "You find out pretty fast what that person can understand. Humans can adapt to the lack of knowledge in a computer pretty quickly. People are pretty good at focusing down on some subset of the natural language. They're not so good at learning some completely foreign language."
Until recently, the few available natural-language programs required vast computer systems costing $1 million and more. "Now we're beginning to think very seriously about making this kind of technology available on a personal computer," said Hendrix.
That combination--the "user-friendly" qualities of familiar English and small, cheap computers--will spur tremendous further growth in the market, he predicted. It will "get computer power in the hands of lots and lots of people, and that, I think, is going to be very beneficial to our society."
Benjamin Kuipers of Tufts University has been trying to understand how people learn routes. If you ask someone for directions, said Kuipers, you often get an answer like, "I can take you in my car but I can't tell you."
Human knowledge often comes in such hard-to-define, incomplete packages, and this, Kuipers said, makes it hard to translate into computer-programming terms.
For his own example, however, Kuipers described a possible solution. The route can be broken down into a series of key decisions, each with its own "associative link" between "views" and "actions." The driver sees a particular view through the car window, then he remembers a particular turn he should make, and finally, to monitor his own performance, he looks for another view. And then the process starts all over again.
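Kuipers' view-action-view cycle, as the article describes it, can be sketched as a chain of associative links; the landmarks and actions below are invented for illustration, and each step checks the next expected view to monitor progress.

```python
route = [
    # (current view, action to take, view that should appear next)
    ("gas station on the corner", "turn left", "red brick church"),
    ("red brick church", "go straight two blocks", "stone bridge"),
    ("stone bridge", "turn right", "blue house: destination"),
]

def follow_route(route, starting_view):
    """Walk the chain of associative links, verifying each view."""
    view = starting_view
    for current, action, expected_next in route:
        if view != current:
            raise RuntimeError(f"lost: expected {current!r}, saw {view!r}")
        print(f"Seeing {current!r}: {action}.")
        view = expected_next  # look for the next view to confirm the turn worked
    return view

follow_route(route, "gas station on the corner")
```

The driver never holds a map of the whole route in mind; the knowledge lives entirely in the local links, which is why he can take you there but can't tell you the way.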
"Expert systems" are computer programs that draw conclusions and make recommendations based on complex data--much as human "experts" do. "It's the closest we can come to cloning," said MIT's Randall Davis, who has helped design a number of expert systems, including one called "Mycin" for the diagnosis of infectious diseases.
"We spent five years asking the doctors what they did," he recalled. "It was mostly a matter of looking at case after case and getting them to be very reductionist in their description of how they think."
Sometimes a doctor might say he was just using "intuition," but "it's because he really doesn't understand yet what he is doing," said Davis.
So far, expert systems have been developed for specialties as diverse as oil prospecting and diesel-locomotive maintenance. "What you've got to ask is, 'Who are the people you'd like to have more of?' " said Davis.
Traditional computer programs tend to do one step at a time, and to require precise answers. Expert systems, on the other hand, ask clusters of questions, and try to deal with imprecise answers.
"We want to say things like 'very likely' or 'probably' or 'maybe,' " Davis explained. "But we don't really know what we're talking about. Maybe we're talking about 'strength of belief' and how strong you believe something may not be the same thing as probability in a mathematical sense."
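One classical answer to the "strength of belief" puzzle Davis raises came from Mycin itself: certainty factors, numbers that combine evidence without claiming to be probabilities. The sketch below shows the standard combination rule for two positive certainty factors; the 0.6 values are invented to stand in for "probably."

```python
def combine_cf(cf1, cf2):
    """Combine two positive certainty factors in [0, 1] that
    independently support the same hypothesis (Mycin-style rule)."""
    return cf1 + cf2 * (1 - cf1)

# Two pieces of evidence, each "probably" (0.6) supporting a
# diagnosis, yield a stronger but still uncertain combined belief.
print(round(combine_cf(0.6, 0.6), 2))  # -> 0.84
```

Note what the rule does not do: 0.6 and 0.6 combine to 0.84, not to any product or sum a probability theorist would endorse, which is exactly the gap Davis is pointing at.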
Although the medical diagnosis systems have looked very promising in trial situations, they have not reached the speed with which the best doctors "zero in" on a few likely diagnoses. "If you give a doctor a hundred symptoms," Davis said, "it's amazing how quickly they'll focus in on the five or six diseases it could be."

During a break in the conference, Azriel Rosenfeld, a robotics and vision researcher at the University of Maryland, had a chat with an official of General Electric.
Rosenfeld was curious to know if G.E. might consider helping fund the Center for Automation Research at the University of Maryland, which, as it happens, Rosenfeld founded.
The man from G.E. said such an arrangement was certainly possible, but his company had been burned in a few of these academic alliances and had not always gotten a clear return on its investment. A university-based research facility might, for example, promise the donor six months' advance notice of new discoveries. "But that's a lot of baloney," said the man from G.E. "You can hear most of this stuff a year ahead of time at conferences like this."