Left: Elon Musk. (Simon Dawson/Bloomberg) Right: Stephen Hawking. (Andrew Cowie/AFP/Getty Images)

It’s a scenario that’s been outlined in countless science fiction films such as “The Terminator,” “The Matrix” and “I, Robot”: Machines defy their programming, kill humans and take over the world.

Now, some of the nation’s leading futurists — including Tesla chief executive Elon Musk and folks from Google — have put their digital John Hancocks to virtual paper, identifying ways to avoid the end of the world.

Or, at least, that’s what it seems like the signatories are trying to do. The letter, put forth by the nonprofit Future of Life Institute — “a volunteer-run research and outreach organization working to mitigate existential risks facing humanity” — doesn’t commit anyone to anything, and is quite a task to read. First, take a gander at the letter’s definition of AI.

“‘Intelligence’ is related to statistical and economic notions of rationality — colloquially, the ability to make good decisions, plans, or inferences,” the letter, called “Research Priorities for Robust and Beneficial Artificial Intelligence,” read. “The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.”

As Neo might put it: “Whoa.” The letter is unspecific about the risks some theorists see in a world where machines are ascendant, including killer drones, mass unemployment, mass starvation and gray goo. Indeed, the letter, designed to bring attention to the dangers of artificial intelligence, barely manages to articulate them.

Instead, it focuses on the upside.

“The potential benefits [of AI] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,” its clearest paragraph reads. “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

Only in an attached document called “Research priorities for robust and beneficial artificial intelligence” does the Future of Life Institute hint at what the “pitfalls” of AI could be.

  • If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and self-driving cars best be realized?
  • Can lethal autonomous weapons be made to comply with humanitarian law?
  • How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy?
  • How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost?

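That last question lends itself to a back-of-the-envelope illustration. The short Python sketch below is not from the letter or its research document; it simply shows what a naive expected-cost comparison between two maneuvers looks like, using invented probabilities and dollar figures.

```python
# Hypothetical illustration of the trade-off in the last question above.
# All probabilities, dollar costs, and the very idea of pricing injury in
# dollars are invented assumptions, not anything proposed by the letter.

def expected_cost(prob_injury: float, injury_cost: float,
                  prob_damage: float, damage_cost: float) -> float:
    """Expected cost of a maneuver: sum of (probability x cost) per outcome."""
    return prob_injury * injury_cost + prob_damage * damage_cost

# Maneuver A: swerve into a barrier -- near-certain property damage, no injury risk.
swerve = expected_cost(prob_injury=0.0, injury_cost=10_000_000,
                       prob_damage=0.99, damage_cost=30_000)

# Maneuver B: brake in lane -- small chance of injuring a pedestrian, little damage.
brake = expected_cost(prob_injury=0.001, injury_cost=10_000_000,
                      prob_damage=0.05, damage_cost=5_000)

print(f"swerve: ${swerve:,.0f} expected, brake: ${brake:,.0f} expected")
# With these made-up numbers: swerve ~$29,700, brake ~$10,250.
```

With these made-up numbers, the option that carries the injury risk comes out “cheaper,” which is exactly the kind of outcome the researchers are asking how a real system should handle.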
Signatories include not just Musk and physicist Stephen Hawking, but also representatives of IBM, professors from Harvard and the Massachusetts Institute of Technology, and the co-founders of DeepMind, the AI company Google bought last year.

If, as Musk has said, artificial intelligence is a “demon” that is “potentially more dangerous than nuclear weapons,” the Future of Life letter is like a SALT treaty with no strategic arms limitations. But some said that it’s at least a start.

“The long-term plan is to stop treating fictional dystopias as pure fantasy and to begin readily addressing the possibility that intelligence greater than our own could one day begin acting against its programming,” CNET wrote.
