By Brian Palmer
Special to The Washington Post
Monday, December 20, 2010; 7:07 PM
Computers these days have serious human envy.
When you call your bank, the robot on the other end doesn't want you to communicate using your touch-tone keypad anymore. No, it insists that you just speak to it, sometimes even adding, "You can use a wide variety of words." What a showoff.
Your car is trying to emasculate you by taking over the parallel parking duties. And computers have long since drained all the fun out of chess.
Fortunately, most robots aren't the complicated emotional beings that star in movies, and we're still pretty good at identifying android impostors. Even if you don't recognize the stilted robotic diction over the phone, they usually give themselves away when they can't understand a thing you're saying. But how long will it be before you have an entire conversation with a machine without realizing it?
This isn't just cocktail party chatter; it's the long-term goal of artificial intelligence research. Alan Turing, whom many identify as the father of AI, proposed in 1950 that an intelligent machine is one that can masquerade as a human.
Even in its text-only form, which spares machines from having to talk or understand the spoken word, no machine can pass the Turing test. Truly humanlike intelligence has frustrated AI researchers because it involves two skills that machines are bad at: perceiving their environment and usefully incorporating past experiences into their knowledge base.
Think, for a minute, about what it takes to recognize a can of soda sitting in your refrigerator. The photons bouncing off the scene in your refrigerator are recorded on your retina, which translates the image into electrical signals; the optic nerve carries them to your brain. So far, so good for the machines. Digital cameras have long been able to capture photons and store them as transmittable electrical signals.
The next step, though, is a bridge too far for most robots. Your brain manages to pick out the can from the rest of the scene, even though every time you see a soda can, it looks a little bit different. Your brain has what researchers call an internal representation of a soda can, so even if the lighting is different or the background changes or the can is a slightly different size, you still recognize it. It takes an incredible amount of computing power, plus the ability to filter out extraneous details, to make this happen.
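The brain's trick of seeing past lighting and background changes can be illustrated with a toy sketch (the one-dimensional "images" and the normalization rule here are illustrative assumptions, not how the visual system actually works): store a pattern's shape rather than its raw brightness values, and compare shapes.

```python
# A toy illustration of an "internal representation": recognize a 1-D
# intensity pattern even when lighting (overall brightness and contrast)
# changes, by comparing normalized shapes rather than raw pixel values.
import math

def normalize(signal):
    """Strip out brightness (mean) and contrast (scale), keeping shape."""
    mean = sum(signal) / len(signal)
    centered = [x - mean for x in signal]
    norm = math.sqrt(sum(x * x for x in centered)) or 1.0
    return [x / norm for x in centered]

def similarity(a, b):
    """Cosine similarity of the normalized shapes: 1.0 = same pattern."""
    return sum(x * y for x, y in zip(normalize(a), normalize(b)))

can_template = [10, 80, 80, 10]   # the stored internal representation
dim_lighting = [5, 40, 40, 5]     # the same can, seen in half the light
something_else = [80, 10, 10, 80] # a different object entirely

print(round(similarity(can_template, dim_lighting), 3))    # 1.0
print(round(similarity(can_template, something_else), 3))  # -1.0
```

Real vision systems face two-dimensional images, clutter and changes in viewpoint, which is why the problem demands so much more computation than this sketch suggests.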
Computers are slowly acquiring the skill. Google, for example, is working on an "omnivorous search box" that can recognize images and sounds recorded on a smartphone. But the technology remains in its infancy.
Building a knowledge base is even more difficult for a machine. John Laird, a professor of computer science and engineering who studies artificial intelligence at the University of Michigan, analogizes computers to the main character in the 2000 film "Memento," who cannot form new memories as he tries to figure out who murdered his wife.
"Most AI systems," says Laird, "do not have episodic memories. They don't make continuous records of their pasts." Like the lead character in "Memento," they are what Laird calls "cognitive cripples." While they can store information, they can't learn the way a human does.
Even if we could construct computers with enough memory to store decades' worth of conversations, novels, meals and lectures, no one has figured out how to teach a machine to catalogue and access those memories quickly.
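The gap Laird describes can be made concrete with a toy sketch (the data structure and cue-matching rule here are illustrative assumptions, not Laird's actual architecture): an episodic memory is roughly an append-only log of timestamped events, and the hard part is retrieving the right episode from a partial cue without scanning the whole log.

```python
from dataclasses import dataclass, field

# A toy episodic memory: an append-only log of timestamped episodes,
# retrieved by matching a partial cue. The linear scan in recall() is
# the catch -- it works for a handful of episodes but not for decades
# of experience, which is the cataloguing problem described above.

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)
    clock: int = 0

    def record(self, **features):
        """Append an episode; memories are never rewritten, only added."""
        self.clock += 1
        self.episodes.append((self.clock, features))

    def recall(self, **cue):
        """Return the most recent episode matching every cue feature."""
        for t, episode in reversed(self.episodes):
            if all(episode.get(k) == v for k, v in cue.items()):
                return t, episode
        return None

mem = EpisodicMemory()
mem.record(place="kitchen", saw="soda can")
mem.record(place="garage", saw="bicycle")
mem.record(place="kitchen", saw="apple")
print(mem.recall(place="kitchen"))  # the most recent kitchen episode
```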
For an example of how data management is every bit as important as raw computing power, consider Deep Blue, the computer that in 1997 defeated grandmaster Garry Kasparov.
In theory, a computer with enough computing capacity should be able to beat a human in chess. It could play out every possible sequence of moves and always make the best choice. But no computer can do those computations fast enough. Deep Blue saved time and RAM by making decisions about which moves were worth considering and which could be ignored. In other words, Deep Blue used a form of reason, and not just superior processing speed, to beat Kasparov.
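Deep Blue's actual evaluation code is not public, but the kind of selective search it exemplifies — exploring a game tree while discarding branches that cannot change the outcome — can be sketched with the classic alpha-beta pruning rule (the toy tree below stands in for a real game's move generator):

```python
# A minimal sketch of alpha-beta pruning: search a game tree, but stop
# exploring any branch once it is provably worse than an option already
# found elsewhere. This is one textbook form of the "decide which moves
# are worth considering" idea described above.

def alphabeta(node, alpha, beta, maximizing):
    """Return the best achievable score from `node` with optimal play."""
    if isinstance(node, (int, float)):    # leaf: a position's evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:             # opponent already has a better
                break                     # option elsewhere: prune
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# A tiny three-ply tree: nested lists are choice points, numbers are
# evaluations of final positions.
tree = [[3, 5], [2, [9, 1]], [0, -4]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 3
```

Even with pruning, chess's game tree is astronomically large, so real engines combine this search with hand-tuned and learned evaluation functions rather than searching to the end of the game.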
The field boasts other recent accomplishments. The Pentagon's Defense Advanced Research Projects Agency, or DARPA, offered a $2 million prize in 2007 for a robotically driven car that could merge, park and pass in traffic as well as a human driver could. (Or, hopefully, better, given the way some people drive on the Beltway.) While the teams competing for the prize were given the map of the urban course in advance, their robots didn't learn exactly what they had to do within the course until five minutes before the green flag. DARPA also clogged the course with 30 human-driven Ford Tauruses.
Six of the 11 teams managed to complete their missions, although a few had minor scrapes. Inspired by that AI feat, Google has entered into a collaboration with Sebastian Thrun of Stanford, who won a previous DARPA challenge and came in second in this one, to develop robotic cars.
While robots are working in Iraq and Afghanistan, most of them are remotely operated by humans. The Department of Defense is working with Laird on a robot that can enter a house before soldiers or monitor a perimeter while humans are inside. Humans will train the robots ahead of time by walking them through model buildings. The robot will project what it sees onto a tablet computer, and trainers can point to objects on the screen and give the robot such simple commands as "open." On the battlefield, soldiers would be able to turn the robot loose and let it work.
Of course, Laird's proposed robots are a far cry from James Cameron's Terminator, and Thrun's winning robo-vehicle is a long way from KITT of "Knight Rider" fame. It's going to be decades before a robot passes the Turing test. Engineers such as Victor Zue of MIT are working on startlingly lifelike digital human images that will tell people that their flight is delayed. But there's nothing of general applicability out there. So don't expect to be employing your own robotic housemaid anytime soon.
Palmer, a freelance writer based in New York, also writes for Slate.com.