The core team behind the Robo Brain Project (Saxena/Cornell)

Everything your future robotic servant needs to know, it can learn from the Web. Cornell's "Robo Brain," which went online in July, is currently perusing around one billion images, 120,000 YouTube videos and 100 million how-to manuals. Using these downloaded materials as a guide, the Robo Brain will develop a complex understanding of what different objects are and how to interact with them successfully. In the video below, for example, the robot makes an affogato:

The robot pictured has used online imagery to learn how to pour coffee, scoop ice cream and squeeze out flavored syrup, as well as how to put them all together to make a delicious beverage for its human patron.

That's just the beginning. The system connects different objects and actions to each other based on what the robot has learned. The robot learns how coffee and mugs interact -- for example, that full coffee cups need to be kept upright, unlike empty ones. And the system can figure out how humans interact with objects, too. In the video below, a robot learns that it should never come between a human and their television.
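Conceptually, this kind of knowledge can be thought of as a graph of labeled relations between objects, actions, and constraints. The sketch below is purely illustrative -- the class and relation names are invented for this example and are not Robo Brain's actual representation or API:

```python
# A minimal sketch of linking objects, actions, and constraints in a graph.
# All names here (ConceptGraph, relation labels) are hypothetical.
from collections import defaultdict


class ConceptGraph:
    """Stores directed, labeled relations between concepts."""

    def __init__(self):
        # Maps each concept to a list of (relation, target concept) pairs.
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        """Record that `subject` relates to `obj` via `relation`."""
        self.edges[subject].append((relation, obj))

    def related(self, subject, relation=None):
        """Return concepts linked to `subject`, optionally filtered by relation."""
        return [o for r, o in self.edges[subject]
                if relation is None or r == relation]


graph = ConceptGraph()
graph.add("mug", "can_contain", "coffee")
graph.add("full_mug", "must_be_kept", "upright")
graph.add("robot", "should_not_block", "human_watching_tv")

print(graph.related("mug"))                       # ['coffee']
print(graph.related("full_mug", "must_be_kept"))  # ['upright']
```

Querying such a graph lets a robot look up constraints before acting -- for instance, checking how a full mug must be carried before picking it up.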

Eventually, this system could help train robots to interact with humans and with the objects and appliances we use daily. Because the system draws on the Web, robots could conceivably learn how to use new products and react to new situations all the time. You can find out more (and help train the Robo Brain by weighing in on the information it's "learned" so far) at the Robo Brain Web site.