The Hollywood film “Transcendence” is just the latest in a long line of dystopian films in which artificial intelligence runs amok, resulting in a post-apocalyptic future for humanity. In “Transcendence,” the only way to stop a super-intelligent computer from taking over the world with cyborg zombies is to shut down the global grid, throwing the world into an era where computer keyboards are used only as doorstops and the Internet no longer exists.
The film, with its heavy dystopian overtones, has elicited such strong opinions about the future of artificial intelligence that none other than Stephen Hawking has weighed in on the matter. In a much-debated op-ed for The Independent written in collaboration with other leading scientists, Hawking suggests that “success in creating AI would be the biggest event in human history.” “Unfortunately,” warns Hawking, “it might also be the last, unless we learn how to avoid the risks.” At the same time, Hawking admits that, if humanity can deal with the explosive growth in intelligence it promises, AI could lead to the eradication of war, disease, and poverty.
So, which is it? Will AI lead to a dystopian or a utopian future?
While the immutable logic of Hollywood these days requires any blockbuster film to have a scenario in which the hero must save the world from imminent disaster, that doesn’t mean other outcomes aren’t possible for artificial intelligence. In fact, there are many scenarios that are “utopian,” rather than “dystopian,” in outlook. You could argue that the base case for the Singularity, as famously presented by Ray Kurzweil nearly a decade ago and now championed by Singularity University, actually offers a remarkably upbeat scenario for the future of AI. In that view, every new technology is an exponential technology capable of radically changing the world for good. In “Transcendence,” these technologies are represented in the form of nanotechnology, quantum computing, and the Internet of Things.
But that’s just the base case — others, such as Ted Chu (an economist at NYU Abu Dhabi), have painted even more optimistic scenarios. In “Human Purpose and Transhuman Potential,” Chu lays out the framework for Cosmic Beings and “a new, heroic cosmic faith for the post-human era.” These new Cosmic Beings would essentially take over from humans, becoming a strange but wonderful amalgam of human DNA and machine intelligence. They would transcend the barriers of human biology, essentially becoming immortal, and would go on to colonize the stars.
That sounds too, well, cosmic, right?
Well, it’s exactly the type of scenario laid out by Hawking five years ago in his celebrated “Life in the Universe” lecture. In it, Hawking noted that the human species has entered a new phase of evolution. For the first time in Earth’s history, a form of life — “humans” — has the ability to control its evolutionary destiny as the result of technology. To put it simply, humans are evolving differently now. And it’s going to be big — it’s a fundamentally new phase, like moving from unicellular to multi-cellular organisms, or from multi-cellular organisms to fish, reptiles and mammals.
That’s the reason dystopian filmmakers get it wrong — they set up an unnatural tension between biological evolution and technological evolution. In “Transcendence,” for example, the anti-AI activists call for “evolution without technology.” What some, including Hawking, have pointed out is that this perspective is necessarily small-minded. “We are more than just our genes” — we are all the accumulated information and knowledge of the past 10,000 years. Genes are just one kind of information; language and books are another. Our human brains haven’t evolved fast enough to keep up, so we essentially invented technology to help us.
That’s what makes the future so exciting — we will soon have the ability to learn from the accumulated knowledge of humanity, just as in “Transcendence,” where one computer can tap into all information ever created. In a world where information on the Internet is exploding at an exponential rate, each of us is limited to expertise in a narrow domain. With new breakthroughs in AI, though, we would have at our disposal nearly unlimited computing power to unlock the world’s mysteries — such as how to cure disease or reverse the aging process.
What’s important to keep in mind, as Hawking points out in his AI op-ed, is that the development of artificial intelligence — like any evolutionary process — is extremely path-dependent. (And, as if to underscore this fact, “Transcendence” features a TED-style conference called “Evolve the Change” in which Johnny Depp makes the case for our AI future.) In other words, the path that we take to the Singularity matters. Each successive step essentially locks in one set of future outcomes.
The only problem is that it’s impossible to tell at the outset which path we’ll land on and follow. Head down one path and you get one outcome; head down another and you get a different one. One is utopian — the end of disease, the eradication of poverty, and super-intelligence; the other is dystopian — robot overlords and the end of humanity as we know it. The film “Transcendence” hints at the ability to choose among these possible paths, but then veers off-course by suggesting that the future is actually path-independent: we’re stuck with a dystopian future no matter what we do with computers.
It’s clear that something momentous is coming: a time when the boundary between bits and atoms disappears, when the line blurs between humans and machines. Humanity has the potential to create the type of future it wants, but we can’t wait until that future magically appears. The Singularity is not some point in time that magically happens in the year 2045 — it’s a slow grind that evolves over decades. The good news is that we can influence the path of its evolution. If we do it right, it will be time to toss out the post-apocalypse as a scenario for the future and come up with something a bit more hopeful and optimistic.