A T-800 Terminator in a scene from “Terminator Salvation,” a Warner Bros. Pictures release. (Courtesy of Richard Foreman)

Each week, In Theory takes on a big idea in the news and explores it from a range of perspectives. This week we’re talking about robot intelligence. Need a primer? Catch up here.

Imagine an urn filled with marbles, each representing a human discovery. Each marble taken out makes humankind slightly better off, but one marble — a black marble — has the potential to destroy civilization.

Nick Bostrom, a philosopher at the University of Oxford and director of the Future of Humanity Institute, often uses this metaphor to describe global catastrophic risks, or the events that could cripple or even destroy humanity. Artificial intelligence, as Bostrom argues in his book “Superintelligence,” may be the black marble.

Bostrom has been at the core of a movement urging caution against the rising prospects of true artificial intelligence. We asked him a few questions.

This interview has been lightly edited. 

Why should we be concerned about artificial intelligence?

I think in the long term, artificial intelligence will be a big deal — perhaps the most consequential thing humanity ever does. Superintelligence is the last invention we will ever need to make. It would then be much better at doing the inventing than we are.

So it would make sense to focus some amount of serious attention on it, in case there are things we can do in advance that would improve the odds that the transition to the machine intelligence era goes well. For example, we don’t know yet how hard it is to engineer the goal system of an artificial agent to align with human values and intentions.

There is research in computer science one could do now, to begin to explore this issue in simple mathematical model systems. This would increase the probability that we will have a solution to the control problem by the time it is needed. We don’t want to find ourselves in a situation a few decades hence where we know how to create superintelligent AI, but haven’t figured out how to control it or make it safe.


You’ve talked quite a bit about global cooperation on this technology, but what would that cooperation look like?

I don’t know yet what concrete forms it might take. Perhaps one could start by encouraging actors in this area to commit to the idea that the development of machine superintelligence, if it ever comes about, should be done for the greater public good, in accordance with widely shared ethical ideals, and with due attention to safety and long-term impacts.

Who are the biggest actors?

In terms of basic research, most of the action in machine intelligence is in academia and industry, with a notable shift towards the latter over the last few years. The largest Silicon Valley tech companies naturally have an interest in this area and are making significant investments. The work on safety and control (as well as on ethics and policy implications) is at an earlier stage, carried out by a loose-knit network of researchers in machine learning and neighboring fields, so far predominantly based in the United States and the United Kingdom.

Are there other scientific fields outside of AI that could serve as a model for approaching potential risks of AI?

Probably. There are lessons to be learned from here and there, but we haven’t done this research yet — so we don’t know what we will find or where. It seems reasonable to take a peek at other important tech areas and see if there are lessons we can learn, and that is one of the things we plan to do.

The Future of Humanity Institute is in the process of starting a Strategic AI Research Center (also mainly based at Oxford), which will focus on investigating policy issues related to future advances in artificial intelligence. A more systematic survey is one (small) thing we plan to do in the next couple of months.

But general machine intelligence has some unique properties that may require one to approach the topic more from first principles. In any case, it is not as if human civilization is all that sophisticated in the way it approaches other long-term technological prospects either. We mainly just develop a bunch of things that seem cool or profitable or useful in war and hope that the long-term consequences will be good.

You’ve written in favor of human enhancement, which includes everything from genetic engineering to “mind-uploading,” to curb the risks AI might bring. How should we balance the risks of human enhancement and artificial intelligence?

I don’t think human enhancement should be evaluated solely in terms of how it might influence the AI development trajectory. But it is interesting to think about how different technologies and capabilities could interact. For example, humanity might eventually be able to reach a high level of technology and scientific understanding without cognitive enhancement, but with cognitive enhancement we could get there sooner.

And the character of our progress might also be different if we were smarter: less like that of a billion monkeys hammering away furiously at a billion typewriters until something usable appears by chance, and more like the work of insight and purpose. This might increase the odds that certain hazards would be foreseen and avoided. If machine superintelligence is to be built, one may wish the folks building it to be as competent as possible.


Explore these other perspectives:

Dileep George: Killer robots? Superintelligence? Let’s not get ahead of ourselves.

Patrick Lin: We’re building superhuman robots. Will they be heroes, or villains?

Ari N. Schulman: Do we love robots because we hate ourselves?

Murray Shanahan: Machines may seem intelligent, but it’ll be a while before they actually are

Francesca Rossi: How do you teach a machine to be moral?