A woman looks at an InMoov robot as part of the Maker Faire Rome exhibition at La Sapienza University in Rome on Oct. 16. (Andreas Solaro/Agence France-Presse via Getty Images)

Each week, In Theory takes on a big idea in the news and explores it from a range of perspectives. This week we’re talking about robot intelligence. Need a primer? Catch up here.

Ari N. Schulman is a senior editor of The New Atlantis and an author of its Futurisms blog.

Like a mission to Mars, peace in the Middle East and fusion power, true artificial intelligence has been only a decade away for the past 50 years.

When you read headlines about the latest advances in “artificial intelligence,” the phrase is mostly a misnomer: a fancy term for statistical programming techniques that let computers perform tasks that can’t be reduced to step-by-step procedures, like driving a car or beating Ken Jennings at “Jeopardy!” Innovations in this field have been rolling along for decades and show no signs of slowing. That’s not to say the limitations aren’t real; just ask Siri.

But the fantasy has always been that progress in conventional, weak AI will accumulate by degrees into true human-level AI — perhaps becoming self-aware on its own. Instead of remaining finely tuned software for highly specific tasks, true AI could learn to do anything. It could sense and act in the physical world, as gracefully as a dancer or a tennis player. It could think in the full meaning of the word: imagine, innovate, converse, connive. It could be like us.


It’s the “like us” that has always underlain our fascination with AI. From the outset, the speculation has been a stand-in for old philosophical debates: over what free will is and whether it exists; what we are and what we should be; how we might be perfected or degraded.

This point applies obviously enough to sci-fi movies or books, but it’s also been true of the supposedly hard-nosed science. When Alan Turing set out to define machine intelligence in 1950, he devised a test not for whether machines possessed true intelligence but for whether they could convincingly imitate it. In doing so, he set a still-powerful precedent for how AI advocates would deal with those old philosophical debates, which was basically to shrug at them. The question “Can machines think?” he said, “I believe to be too meaningless to deserve discussion.”

Like today’s broader AI project, the Turing Test baked in huge philosophical assumptions, such as the idea, imported from the behaviorist psychology of Pavlov and B.F. Skinner, that the inner life of the mind is irrelevant to understanding behavior. But what’s most telling is that it didn’t acknowledge that these ideas were philosophical, or even really arguable. If “Can machines think?” is a meaningless question, then so is “Can humans think?” And if that’s the case, human-level AI is achievable practically by default: The thing it aims to replicate is essentially an illusion to begin with.

Even as the significance of the Turing Test has been challenged, its attitude continues to characterize the project of strong artificial intelligence. AI guru Marvin Minsky refers to humans as “meat machines.” To roboticist Rodney Brooks, we’re no more than “a big bag of skin full of biomolecules.” One could fill volumes with these lovely aphorisms from AI’s leading luminaries.

And for the true believers, these are not gloomy descriptions but gleeful mandates. AI’s most strident supporters see it as the next step in our evolution. Our accidental nature will be replaced with design, our frail bodies with immortal software, our marginal minds with intellect of a kind we cannot now comprehend, and our nasty and brutish meat-world with the infinite possibilities of the virtual.

Most critics of heady AI predictions do not see this vision as remotely plausible, though lesser versions of it might be. Either way, it’s worth asking why so many find the vision compelling, whether or not it ever comes to pass. Even if “we” would survive in some vague way, this future is one in which the human condition is done away with. This, indeed, seems to be the appeal.

It’s not exactly a boutique idea, either. It’s a fixture of Silicon Valley and its ideological aspirants, institutionalized in the Google- and NASA-backed Singularity University. And it embodies attitudes that, in diluted forms, are much more widespread among academics and futurists.

The author Tom Wolfe, looking at the materialist views of human nature that prevail among many scientists, summarized their conclusion as “Sorry, but your soul just died.” In their view, we’re nothing but flesh and functions with no higher nature. The yearning for AI is in many ways a desire to invert this, to turn the dreary “Sorry” into a triumphant “Hooray!”

Intellectual movements that mix a despairing view of what we are with a messianic vision of what we could become have a dubious historical track record. To skeptics, our longing to be replaced by our robot overlords is a form of philosophical immaturity: a false cynicism and self-congratulation that make it easier to excuse our unwillingness to confront the problems of the world we have now.

Explore these other perspectives:

Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk

Francesca Rossi: Can you teach morality to a machine?

Patrick Lin: We’re building superhuman robots. Will they be heroes, or villains?

Murray Shanahan: Machines may seem intelligent, but it’ll be a while before they actually are

Dileep George: Killer robots? Superintelligence? Let’s not get ahead of ourselves.