Self-driving cars may one day learn to drive as humans do, mastering roads through experience. (Google/EPA)

Google researchers have created an algorithm that has a human-like ability to learn, marking a significant breakthrough in the field of artificial intelligence. In a paper published in Nature this week, the researchers demonstrated that the algorithm could master many Atari video games better than humans, simply through playing the game and learning from experience.

“We can go all the way from pixels to actions as we call it and actually it can work on a challenging task that even humans find difficult,” said Demis Hassabis, one of the authors of the paper. “We know now we’re on the first rung of the ladder and it’s a baby step, but I think it’s an important one.”

The researchers provided the general-purpose algorithm with only two inputs — its score on each game and the visual feed of the game — leaving it to figure out how to win on its own. It dominated Video Pinball, Boxing and Breakout, but struggled with Montezuma’s Revenge and Asteroids.
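To see how far score feedback alone can carry a learner, here is an illustrative sketch — not the paper’s actual deep network, which learned from raw pixels — using tabular Q-learning on a toy one-dimensional “game.” The environment, its names and its parameters are all hypothetical; the agent is told nothing except an observation and a score, and must discover the winning actions itself:

```python
import random

GOAL = 5  # reaching the rightmost cell scores a point and ends the game

def step(state, action):
    """Actions: 0 = left, 1 = right. Feedback is only the score."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9):
    """Q-learning: estimate the long-run score of each (state, action)
    pair while exploring with a fully random policy."""
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in (0, 1)}
    rng = random.Random(0)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            action = rng.choice((0, 1))  # explore at random
            nxt, reward, done = step(state, action)
            target = reward + gamma * max(q[(nxt, 0)], q[(nxt, 1)])
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# The greedy policy, learned purely from score feedback, heads right
# from every cell toward the goal.
policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(GOAL)]
```

The deep Q-network in the Nature paper replaces this lookup table with a neural network reading the screen, but the learning signal is the same: nothing but the score.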

The researchers began their work at DeepMind, a London start-up that Google purchased in January 2014. Since joining the company, they have been exploring ways to weave this intelligence into Google products.

With plenty of Atari video games under their belt, the researchers will now move on to more complicated games with 3D environments. Hassabis expects the algorithm to crack these games within the next five years.

“Ultimately the idea is that if this algorithm can race a car in a racing game then also essentially with a few extra tweaks it should be able to drive a real car,” Hassabis said. “But that’s again, even further away than that.”

Google’s self-driving cars have driven hundreds of thousands of miles. But those miles are concentrated around the company’s home in Mountain View, Calif., where it has built extensive, painstaking maps. Data such as the height of traffic signals and the exact positions of curbs are preloaded onto the car’s computer. As the car drives, it compares its pre-installed map to what its sensors are seeing.

Building such maps for the entire country — not to mention the world — would be a mammoth undertaking. They would also need to be regularly updated.

A more appealing solution for Google would be for the car to develop a level of intelligence high enough that it wouldn’t need those preloaded maps. It could simply scan the road in front of it and teach itself how to drive anywhere.

The algorithm is designed to tackle any sequential decision-making problem. Hassabis sees applications far outside video games and self-driving cars.
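That “sequential decision-making” framing can be made concrete: the learner only needs a narrow interface — observations in, actions out, a scalar score as feedback. The interface and environment below are a hypothetical sketch, not DeepMind’s code, but nothing in it is specific to video games:

```python
from typing import Any, Protocol, Tuple

class Environment(Protocol):
    """Anything exposing these two methods is, to the learner,
    a sequential decision-making problem."""
    def reset(self) -> Any: ...
    def step(self, action: int) -> Tuple[Any, float, bool]: ...

def run_episode(env: Environment, choose_action, max_steps=100) -> float:
    """Play one episode with any environment and policy; return the score."""
    obs, total, done, steps = env.reset(), 0.0, False, 0
    while not done and steps < max_steps:
        obs, reward, done = env.step(choose_action(obs))
        total += reward
        steps += 1
    return total

# A trivial counting "game" satisfying the same interface: three steps,
# one point per step for choosing action 1.
class CountUp:
    def reset(self):
        self.n = 0
        return self.n
    def step(self, action):
        self.n += 1
        return self.n, float(action == 1), self.n >= 3

score = run_episode(CountUp(), lambda obs: 1)
```

Swap in an Atari emulator, a car simulator or a scheduling problem behind the same interface, and the same learning loop applies.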

“In the future I think what we’re most psyched about is using this type of AI to help do science and help with things like climate science, disease, all these areas which have huge complexity in terms of the data that the human scientists are having to deal with,” Hassabis said.

Another potential use case might be telling your phone to plan a trip to Europe, and having it book your hotels and flights.

But that’s all a very long way away. For now Hassabis wants his algorithm to move another rung up the artificial intelligence ladder, and teach itself to master StarCraft and Civilization.