
Artificial intelligence makes some progress, but robots still can't match humans

(Oliver Burston For The Washington Post)

By Brian Palmer
Special to The Washington Post
Monday, December 20, 2010; 7:07 PM

Computers these days have serious human envy.

When you call your bank, the robot on the other end doesn't want you to communicate using your touch-tone keypad anymore. No, it insists that you just speak to it, sometimes even adding, "You can use a wide variety of words." What a showoff.

Your car is trying to emasculate you by taking over the parallel parking duties. And computers have long since drained all the fun out of chess.

Fortunately, most robots aren't the complicated emotional beings that star in movies, and we're still pretty good at identifying android impostors. Even if you don't recognize the stilted robotic diction over the phone, they usually give themselves away when they can't understand a thing you're saying. But how long will it be before you have an entire conversation with a machine without realizing it?

This isn't just cocktail party chatter; it's the long-term goal of artificial intelligence research. Alan Turing, whom many identify as the father of AI, defined an intelligent machine in 1950 as one that could masquerade as a human.

Even when it doesn't have to talk or understand the spoken word, no machine has yet passed the Turing test. Truly humanlike intelligence has frustrated AI researchers because it involves two skills that machines are bad at: perceiving their environment and usefully incorporating past experiences into their knowledge base.

Think, for a minute, about what it takes to recognize a can of soda sitting in your refrigerator. The photons bouncing off the scene in your refrigerator are recorded on your retina. The optic nerve translates the image into electrical signals and carries them to your brain. So far, so good for the machines. Digital cameras have long been able to capture photons and store them as transmittable electrical signals.

The next step, though, is a bridge too far for most robots. Your brain manages to pick out the can from the rest of the scene, even though every time you see a soda can, it looks a little bit different. Your brain has what researchers call an internal representation of a soda can, so even if the lighting is different or the background changes or the can is a slightly different size, you still recognize it. It takes an incredible amount of computing power, plus the ability to filter out extraneous details, to make this happen.
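For readers who want to see the idea in code, here is a minimal sketch, in Python, of what an "internal representation" might look like: reduce every image to a compact, brightness-normalized summary so the same can still matches when the lighting changes. The particular feature (a color histogram), the similarity measure and the threshold are illustrative assumptions, not how the brain or any specific vision system works.

```python
# A minimal sketch of the "internal representation" idea: summarize each image
# as a brightness-normalized colour histogram so the same soda can matches
# even when the lighting changes.  All names and thresholds are illustrative.
import numpy as np

def internal_representation(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Summarize an HxWx3 RGB image as a normalized colour histogram."""
    # Normalize brightness so a dim fridge and a bright kitchen look alike.
    image = image / max(image.mean(), 1e-6)
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=bins, range=[(0, 2)] * 3)
    hist = hist.flatten()
    return hist / max(hist.sum(), 1e-6)   # proportions only, not pixel counts

def looks_like(candidate: np.ndarray, memory: np.ndarray,
               threshold: float = 0.9) -> bool:
    """Compare two representations with cosine similarity."""
    sim = float(np.dot(candidate, memory) /
                (np.linalg.norm(candidate) * np.linalg.norm(memory) + 1e-12))
    return sim >= threshold

# Toy usage: a "soda can" seen earlier vs. the same can under dimmer light.
rng = np.random.default_rng(0)
can = rng.random((64, 64, 3))    # stand-in for a stored reference image
dim_can = can * 0.5              # same scene, half the light
print(looks_like(internal_representation(dim_can),
                 internal_representation(can)))   # True: the can still matches
```

The histogram deliberately throws away position, scale and background, the "extraneous details" mentioned above, which is why the dimmer image still matches; real vision systems use far richer representations, but the principle is the same.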

Computers are slowly acquiring the skill. Google, for example, is working on an "omnivorous search box" that can recognize images and sounds recorded on a smartphone. But the technology remains in its infancy.

Building a knowledge base is even more difficult for a machine. John Laird, a professor of computer science and engineering who studies artificial intelligence at the University of Michigan, likens computers to the main character in the 2000 film "Memento," who cannot form new memories as he tries to figure out who murdered his wife.

"Most AI systems," says Laird, "do not have episodic memories. They don't make continuous records of their pasts." Like the lead character in "Memento," they are what Laird calls "cognitive cripples." While they can store information, they can't learn the way a human does.

Even if we could construct computers with enough memory to store decades' worth of conversations, novels, meals and lectures, no one has figured out how to teach a machine to catalogue and access those memories quickly.
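As a thought experiment, a bare-bones episodic memory might be nothing more than a time-stamped log with a retrieval function. The Python sketch below uses a made-up keyword-overlap score to show the shape of the problem; the hard part Laird describes, cataloguing decades of episodes and finding the relevant one quickly, is exactly what this naive approach does not solve.

```python
# A minimal sketch, under assumed interfaces, of an "episodic memory" for a
# machine: append time-stamped episodes and retrieve those most relevant to a
# cue.  The keyword-overlap scoring is purely illustrative.
from dataclasses import dataclass, field
from time import time

@dataclass
class Episode:
    description: str
    timestamp: float = field(default_factory=time)

class EpisodicMemory:
    def __init__(self):
        self.episodes: list[Episode] = []

    def record(self, description: str) -> None:
        """Continuously log what happened, as it happens."""
        self.episodes.append(Episode(description))

    def recall(self, cue: str, k: int = 3) -> list[str]:
        """Return the k stored episodes sharing the most words with the cue."""
        cue_words = set(cue.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda ep: len(cue_words & set(ep.description.lower().split())),
            reverse=True)
        return [ep.description for ep in scored[:k]]

# Toy usage
memory = EpisodicMemory()
memory.record("picked up a red soda can from the refrigerator")
memory.record("read a lecture on chess openings")
memory.record("parallel parked the car on a narrow street")
print(memory.recall("where did I see the soda can?", k=1))
```

With three episodes this works instantly; with decades' worth of conversations and meals, both the scoring and the search would have to be far cleverer, which is the open problem the paragraph above describes.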

