The latest plan to take robots out of the factory and into the home comes from British superstar innovator James Dyson, who is investing nearly £5 million in a new program at Imperial College London to make his dream of intelligent domestic robots a reality. The near-term goal is to create computer vision programs that will enable robots to navigate the real world and process visual information in real time. These systems would eventually enable robots to become intelligent domestic servants capable of carrying out complex tasks.
Dyson is not alone in championing an expanded role for robots. Google, which acquired eight robotics companies last year, including Boston Dynamics, also appears to be making a concentrated push into the innovation space spanning robotics and artificial intelligence. Both NASA and DARPA are working on new robot prototypes. And Japan, long a hotbed for robotics, has the Twendy-One robot, designed to obey voice commands, cook, and care for the sick or elderly.
However, for this vision of an intelligent home robot to come to fruition, robotics innovators still need to overcome a number of problems. Yes, robotic vacuum cleaners such as iRobot's Roomba can clean floors, but they do so thanks to clever algorithms and sensors that help them avoid objects, not because they actually "know" what they're doing or seeing. For all the tricks robots can now perform, they have very little ability to interact with the world around them in any sophisticated way outside of highly controlled environments with repetitive tasks.
The Holy Grail for innovators is a true robotic vision system that lets robots understand the world around them. The key technology is "simultaneous localization and mapping" (SLAM): the ability to build a 3D map of a space you've never visited before, or update the map of a familiar one, while simultaneously keeping track of your own position within it. It's essentially what humans do every time they enter a room of a house: they instantly recognize that books are scattered on the floor, or that the couch has been moved to another part of the room. It's why a human can vacuum a room so quickly, while robots have such a tough go of it, relying on a complex mix of cameras and sensors to keep them from running into things.
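The mapping half of SLAM can be sketched in a few lines. In this toy illustration (a hypothetical 2D grid, not a real SLAM system) the robot's position is assumed known, and simulated range readings mark cells as free or occupied; the genuinely hard part of SLAM, which this sketch skips, is estimating that position from the very same noisy readings.

```python
W, H = 6, 6                       # room dimensions in grid cells
world = {(4, 0), (0, 3)}          # true obstacle cells, unknown to the robot
grid = {}                         # learned map: cell -> "free" / "occupied"

def sense(x, y, dx, dy):
    """Cast a ray from (x, y); return cells passed through and the hit cell, if any."""
    passed = []
    while True:
        x, y = x + dx, y + dy
        if not (0 <= x < W and 0 <= y < H):
            return passed, None           # ray left the room without hitting anything
        if (x, y) in world:
            return passed, (x, y)         # ray hit an obstacle
        passed.append((x, y))

robot = (0, 0)                            # pose assumed known (real SLAM must estimate it)
for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    free, hit = sense(*robot, dx, dy)
    for cell in free:
        grid[cell] = "free"
    if hit:
        grid[hit] = "occupied"

print(grid)
```

Running the four ray casts fills in only the cells the sensors can actually see; everything else stays unknown, which is why a real robot must keep moving, re-sensing, and fusing readings to build up the full map.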
Which is why Google may have an insurmountable lead in the race to create a truly intelligent domestic robot. Not only does the company have the R&D smarts to develop robotic animals (like those from Boston Dynamics) capable of impressive physical feats, but it also has augmented reality technology in the form of Google Glass and its own autonomous driving technology.
Imagine the possibilities when you combine Google Glass, autonomous driving technology and a robotic vision system. You might get something like RoboCop: a robot capable of seeing and interacting with the world around it and then chasing down the bad guys, even in a densely populated urban area. Such a robot would have a wireless facial recognition system capable of spotting faces in a crowd, the ability to record video in real time, and the ability to outrun, outshoot and outthink potential criminals. While this might sound far-fetched, consider that the New York Police Department is already experimenting with Google Glass, while the self-driving car is fast becoming a mainstream technology.
What it all means is that robots are likely moving out of the factory and into the home. From the home, they will take on even more roles, perhaps related to law enforcement. In the process, they will be taking over more jobs from humans. They are already the factory workers of the future. Soon, they will become the new caretakers and nurses, the new bodyguards and night watchmen, and the new maids. Let’s just hope that they’re happy with these limited roles and don’t decide to stage a robot uprising to take over even more jobs from humans.