John Underkoffler, who was the technology adviser to “Minority Report,” thinks we’re overreacting to the threat of artificial intelligence. (David James/Twentieth Century Fox)

Just how worried should we be about artificial intelligence? Earlier this year the Global Challenges Foundation released a thorough report on the greatest threats to human civilization. The estimated odds of most catastrophes were minuscule, about a hundredth of a percent, but artificial intelligence was in a class of its own, assigned a zero to 10 percent chance.

In the past year, leading technology thinkers such as Elon Musk, Bill Gates and Stephen Hawking have warned of the perils of artificial intelligence. Last week the Future of Life Institute gave out 37 grants to ensure that artificial intelligence remains beneficial. The fear is that some super-intelligent being will shape the world to its preferences, and humans might be expendable as it pursues its goals.

Of course, the debate remains theoretical and full of what-ifs. In decades past, researchers repeatedly suggested we were on the precipice of massive breakthroughs in artificial intelligence, only to see those predictions fall flat. The future is notoriously hard to predict.

Nick Bostrom, author of the well-regarded book on the dangers of artificial intelligence, “Superintelligence,” admits in its opening pages that “many of the points made in this book are probably wrong.” He warns that some of his conclusions could be invalidated by key considerations he failed to take into account.

Meanwhile, fears of killer robots run through pop culture. In the past four months, three movies about artificial intelligence (“Ex Machina,” “Chappie” and “Terminator Genisys”) have arrived in theaters.

Suggested Google searches are a window into what we’re searching for and thinking. (Screenshot)

But not everyone is buying it. One skeptic is John Underkoffler, who was the technology adviser to “Minority Report,” the Steven Spielberg film in which an intelligent system predicts who will commit future crimes. Underkoffler’s work at the MIT Media Lab inspired the memorable hand gestures and interface that Tom Cruise’s character used to run the system.

“I’m actually really bemused by this sudden furor over the dangers of AI,” Underkoffler told me. “It’s a pretty simple reaction. We don’t have AI and we’re nowhere close to it.”

Most in the artificial intelligence community expect we’re decades away from the powerful type of artificial intelligence that could prove troublesome. While Google’s DeepMind and IBM’s Watson have shown progress in the field, both systems still have plenty of limitations.

“For something to suddenly become sentient and Skynet, or Proteus or any of these other sci-fi things to suddenly become malevolent and have enough resources at its disposal to go stomping around and squashing us: the thing is, it doesn’t emerge overnight,” Underkoffler said. “Wouldn’t you notice that stuff was getting smart?”

After “Minority Report,” Underkoffler went on to found Oblong Industries, which brings the gesture and interface technology Cruise’s character used to companies today. He is creating collaborative environments where employees in different cities can interact on shared surfaces, moving past the current model in which one person works on one computer.

Underkoffler sees artificial intelligence as something that should be researched and developed for its potential to improve lives today, not potentially destroy them years from now.

“For billionaires to be donating millions of dollars to foundations to worry about making sure that AI doesn’t get away seems analogous to me to saying ‘I’m going to donate $15 million right now to an institute to make sure that teleportation doesn’t enable thieves to grab my wallet,’ ” Underkoffler said. “Okay, yeah, I don’t want that to happen, but wouldn’t we have to have teleportation first? I’d rather have my $15 million go to inventing teleportation.”

He envisions that the development of potentially dangerous artificial systems will mirror biological evolution. A lone researcher at an office supply company is unlikely to suddenly hatch an algorithm that endlessly replicates itself, grabs control of human systems such as the Internet and covers most of the earth’s surface with paper clips.

“You would get an AI that would basically do what a marmot does first. And then maybe a really smart crow,” Underkoffler said. “And then you might get to a monkey or something and eventually you get a dolphin or human and beyond.”

When I spoke with Underkoffler, he did caution about what he calls nuisance artificial intelligence. If the world’s electric grids were turned over to a machine-learning system, for example, a bug could shut the system down in a way that leaves it permanently inoperable.

“It’s not impossible to imagine a world-changing event like that,” Underkoffler said. “And it’s definitely worth building checks and balances. You would do that anyway. If you’re building a roller-coaster, you build multiple layers of checking and fail-safes and so forth.”