“If you went back to the late 19th century and said to people, ‘Look, this development of the internal combustion engine and coal-fired electrical generation, this combination is going to lead to something that in a few years’ time we’re going to call global warming, and you’ll be sorry. So maybe you should start thinking about how to prevent a rise in carbon dioxide and how to generate alternative methods such as solar and wind power,’” Russell said. “If we had started in the late 19th century, I think we would’ve had a chance at preventing it.”
The Industrial Revolution brought a lot of good, but also created the problem of climate change. Experts have warned we won’t be able to reverse its effects. So what huge problems will the Digital Revolution bring, and can we stop them?
Artificial intelligence has become a hot field, drawing the interest of top tech companies such as Google and Facebook, which are hoarding top talent. DeepMind, a London start-up Google purchased last year, has shown remarkable progress, developing an algorithm that can teach itself to beat Atari video games. The next challenge is racing games from the 1990s, and one day perhaps it could teach itself to drive our cars.
Facebook chief executive Mark Zuckerberg said Tuesday in an online chat that Facebook’s “goal is to build [artificial intelligence] systems that are better than humans at our primary senses: vision, listening, etc.”
The question is: where does all of this lead us?
Zuckerberg and Google executives are quick to emphasize the positives. An artificial intelligence system could drive a blind person’s car, increasing their mobility. A system could scan a blind person’s Facebook News Feed and describe photos to them.
Yes, those are great examples of positive uses of technology. But inventors of technologies tend to be much better at identifying the upsides of their creations than at dwelling on the negatives.
Researchers at the Global Challenges Foundation called artificial intelligence one of the greatest threats to humanity, with a zero-to-10 percent chance of wiping out human civilization. We’ve heard warnings from Elon Musk, Stephen Hawking and Bill Gates.
Georgia Tech professor Ronald Arkin, who also spoke at Tuesday’s event, warned against trusting engineers to protect us from the hazards of artificial intelligence.
“It took me a long time, years — decades perhaps — to realize that,” Arkin said. “Not all our colleagues are concerned with safety. The important thing is you cannot leave this up to the AI researchers. You cannot leave this up to the roboticists. We are an arrogant crew, and we think we know what’s best and the right way to do it, but we need help.”
One valuable aspect of Russell’s perspective is his ability to simplify the subject matter in a way that most of us can understand. When we hear about self-replicating bots learning at a Moore’s Law pace, we may be hearing an accurate description of one of mankind’s greatest risks. But given that most people aren’t familiar with terms such as algorithms and Moore’s Law, the warning might as well be delivered in Greek. And if no one understands the next climate change, there will be little motivation to work to prevent it.
On Tuesday, Russell brought up King Midas, the character in Greek mythology who thought it would be wonderful if everything he touched turned to gold. It sounded like a great superpower before he had it. Afterward, he was full of regret, but there was no going back. That, according to Russell, is the risk with artificial intelligence.
Artificial intelligence systems will be programmed to carry out goals, and with intelligence that exceeds our own, they’ll likely act to prevent anyone from shutting them down. Because their capabilities would exceed ours, defeating them would be extremely difficult.
“It’s already outthought you,” Russell noted. “The system has spread itself out onto the Web. It now exists in tens of millions of copies in hundreds of millions of machines and it’s already out-thought you. So it’s not easy to shut it down.”
Russell said that research currently isn’t being done to prevent such situations, but that he’s reasonably optimistic things will work out. He pointed to interest from the Defense Department and the National Science Foundation in funding such research, and to the growing number of warnings from observers.