
A boy looks at an “InMoov” robot at La Sapienza University in central Rome (Andreas Solaro/AFP/Getty)

Everything’s coming up robots.

Whether they’re the sylphlike humanoids of “Ex Machina,” the disembodied voice of “Her” or just uncomfortably ethical self-driving cars, the advent of intelligent machines has never seemed closer. It remains to be seen, however, whether their arrival will bode well or ill.

The term “artificial intelligence” was coined in 1955 by the computer scientist John McCarthy, who defined it as the science of making intelligent machines. Today, the term has shifted slightly to also describe the kinds of intelligence exhibited by machines or software. Currently, our most advanced forms of AI exhibit only “narrow intelligence” — geared toward solving particular tasks, whether playing “Jeopardy!” or searching the Internet.

The next step, however, will be toward a “general intelligence,” sometimes referred to as human-level or strong AI. This would be a machine with intellectual capability equivalent to that of a person — one capable of interacting with its environment, learning and eventually making its own decisions.

From there, based on exponential computing principles such as Moore’s law, the assumption is that such an intelligence could quite easily iterate on itself — applying technology to improve its own intelligence, creating a positive feedback loop that would very quickly lead to superintelligence — described by Oxford philosopher and leading AI thinker Nick Bostrom as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

Optimists in computing estimate that a true strong AI will emerge between 2029 and 2040, and almost all AI researchers predict that the first machine with human-level intelligence will arrive before the century’s end. Facebook has already developed face recognition software, and speech-recognition technology is reaching new heights. Earlier this year, Google announced that it had created an algorithm with a human-like ability to learn, a breakthrough in the field of AI.

Although there is still vigorous debate as to how close we actually are to a superintelligent AI (or, indeed, whether such a thing is even truly possible), the major worry is that we have no idea what such an intelligence might look like or do. Some, like futurist Ray Kurzweil, suggest that a human-level or superintelligent AI could help us find solutions to global problems such as climate change, disease, or even our own mortality. Others, like Elon Musk, say that its existence would be an existential risk: an innovation that could lead to our extinction if, for example, our existence got in the way of the AI’s pursuit of its goals, which at a certain point would be beyond our control.

These questions may sound far-fetched, but within the last several years top scientists and industry leaders have warned of the catastrophes that could be unleashed if machines were granted the power to think. In an interview with the BBC last year, Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race.” Bill Gates, Steve Wozniak, Noam Chomsky and others have expressed grave concern about the potential risks of artificial intelligence.

As we continue our progress into an increasingly computerized world, should we be worried about the prospect of superintelligent machines? Should we be preparing? How best can we approach the risks of technology, and how can we predict what’s coming next?


Over the next few days, we’ll hear from:

Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University,

Ari Schulman, senior editor at The New Atlantis,

Murray Shanahan, robotics professor at Imperial College London,

Dileep George, co-founder of Vicarious, an artificial intelligence company, and a neuroscience researcher at Redwood Neuroscience Institute,

Francesca Rossi, professor of computer science at the University of Padova and scientific advisor at the Future of Life Institute,

Nick Bostrom, philosopher at the University of Oxford and founding director of the Future of Humanity Institute,

and Tessa Lau, Chief Robot Whisperer at Savioke.