A computer-controlled arm in front of schematics in the Modular Prosthetics Lab at the Johns Hopkins University Applied Physics Laboratory. (J.M. Eddins Jr. for The Washington Post)

Each week, In Theory takes on a big idea in the news and explores it from a range of perspectives. This week we’re talking about robot intelligence. Need a primer? Catch up here.

Murray Shanahan is professor of cognitive robotics at Imperial College London. He served as a scientific adviser for the 2015 film “Ex Machina,” and his book “The Technological Singularity” was released this August.

I have a confession. When it’s just me in the car, I sometimes talk to the GPS. When she (the voice is female) tells me to turn left, I can’t resist a comeback: “I know it’s left! I’m not a complete idiot.” Sometimes, if I change my mind about where I’m going and can’t be bothered to cancel the route, we argue all the way home. Of course, I know that no one is listening. No one is actually there. But the human brain seems to be hard-wired to see intelligence where there is none: to see faces in clouds, to imagine a teddy bear as alive. Perhaps this is why some people so readily assume that human-level artificial intelligence will soon be with us.

As AI technology becomes more sophisticated, this illusion of intelligence will become increasingly convincing. Computers already know a great deal about us – our needs and preferences, our likes and dislikes – and the conversations we hold with them are becoming ever more natural and realistic. Using AI technology, computers can help us plan our social lives and organize our work schedules. They will soon become our advisers and confidantes, like the wise and trusted servants of a bygone age.

[Other perspectives: Do we love robots because we hate ourselves?]

Driven by advances in machine learning and other areas of computer science, AI applications like these are sure to have a dramatic economic impact over the next 10 years or so. But we should be careful not to ascribe too much intelligence (let alone consciousness) to the increasingly sophisticated AI-enabled devices and applications that populate our world. None of this technology comes anywhere near human-level intelligence, and it is unlikely to approach it anytime soon.

The remarkable thing about human intelligence is its generality, its ability to ensure the welfare of the human animal in an endless variety of environments. A 21st-century web developer is born with essentially the same intellectual equipment as a Stone Age hunter-gatherer. Yet the human brain’s billions of neurons constitute a generic learning device, one that can adapt extraordinarily well to whatever world it finds itself in. And human intelligence isn’t merely adaptive. It is also inventive. This is why the 21st century has the Internet and the smartphone while the Stone Age had to make do with flint tools.

To endow a computer with human-level intelligence, it will be necessary to match the adaptability and inventiveness of the human brain. Right now, however, we have little idea how to do this. Most of the obstacles seem to begin with the letter C: common sense, creativity, concepts (especially the abstract variety). We still lack a deep theory of how these aspects of intelligence function, the sort of theory that might underpin their replication in a computer. And as for consciousness, well, let’s not even go there.

These limitations don’t mean that human-level AI is impossible, however, or that its prospect is too remote to command our attention today.

In the end, human intelligence is the result of processes that are governed by the laws of physics. Eventually we will understand those processes well enough to recreate them and perhaps to improve on them. If and when it arrives, human-level AI could bring about an era of unprecedented well-being and abundance, dramatically advancing science, engineering and medicine. But how will it reshape our world? Will it be safe? Will it be just a tool for the benefit of humanity, or will it take on a life of its own? The time to start thinking through the consequences is now.

Explore these other perspectives:

Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk

Francesca Rossi: Can you teach morality to a machine?

Dileep George: Killer robots? Superintelligence? Let’s not get ahead of ourselves.

Patrick Lin: We’re building superhuman robots. Will they be heroes, or villains?

Ari N. Schulman: Do we love robots because we hate ourselves?