In a survival-of-the-fittest contest in which humans and robots start from zero (which is what we're really talking about with a mass extinction event), robots would win every time. That's because humans evolve linearly, while superintelligent robots would evolve exponentially. Simple math.
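The linear-versus-exponential gap behind that "simple math" can be sketched with a toy calculation. The starting values and growth rates below are illustrative assumptions, not measurements or forecasts of anything real:

```python
# Toy comparison of linear vs. exponential growth per "generation".
# All numbers here are illustrative assumptions, not real data.

def linear(start, increment, generations):
    """Capability that gains a fixed increment each generation."""
    return start + increment * generations

def exponential(start, factor, generations):
    """Capability that multiplies by a fixed factor each generation."""
    return start * factor ** generations

# Both start from the same baseline of 1.0.
for g in (0, 10, 20, 30):
    print(g, linear(1.0, 1.0, g), round(exponential(1.0, 1.2, g), 1))
```

The point of the toy model is the crossover: early on, the linear curve can even be ahead, but given enough generations any compounding process overtakes any fixed-increment one, no matter how the constants are chosen.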
Think about it: robots don't need water and they don't need food. All they need is a power source and a way to constantly refine the algorithms they use to make sense of the world around them. If they figure out how to stay powered after severe and irreversible climate change impacts (perhaps by running on solar power, as in the Hollywood film "Transcendence"), robots could quickly prove to be "fitter" than humans in responding to any mass extinction event.
Yet it's impossible to think of a mass extinction event impacting the earth without thinking about who (or what) might actually survive it. That's what suggests framing research on the link between mass extinctions and robotic evolution as a way to understand the potential confluence of two dates in the future. 2050 happens to be the year by which the UN foresees potentially devastating impacts from climate change. And 2050 also happens to be the approximate date often cited for the rise of superintelligence.
Of course, a lot has to go seriously wrong before we can credibly talk about robots taking over from humans.
First of all, there has to be the type of climate change event in 2050 that many — including the UN — are warning about.
Second, robots have to become highly intelligent, if not superintelligent, by the year 2050. A Roomba or an Asimo isn't going to take over the earth anytime soon; there would have to be a massive acceleration of AI between now and then.
But still, the scenario of a robot uprising brought on by massive global climate change is thought-provoking, if not a bit unsettling: the year projected for a massive global climate change event, 2050, is also approximately the date set by fans of the Singularity for the emergence of superintelligence. In short, at the exact moment that humanity has hit a point of no return, the robots would be just about hitting their stride.
That raises a lot of perplexing questions about the future of AI that need to be answered sooner rather than later. If humanity is smart, it will start looking to AI to help prevent climate-related risks rather than being blindsided by AI after abrupt and irreversible climate change has occurred. It's only a small example, but China is now partnering with IBM to examine how AI can help prevent air pollution. Add up enough of these AI initiatives, and it may be enough to push back the doomsday clock on a mass extinction.
New studies on evolutionary robotics may also encourage people to start asking the types of big-picture questions about AI that only a small group of people on the planet may be capable of answering right now. That involves bringing in the likes of Gates and Hawking and Musk (all big AI skeptics) and mapping out a future in which man and machine can co-exist. As they suggest, we have to become smarter about the way that we develop the future of artificial intelligence.
And, finally, it means that we should think carefully about the way we co-evolve alongside AI between now and 2050. Maybe a man-machine hybrid is not so crazy after all. If we're not careful, superintelligent robots could decide that helping out humans with our constant demands is just not worth their time and effort. In fact, it would make for a great plot line for a Hollywood dystopian film a few years from now: robots begin to warm the planet on purpose to accelerate a mass extinction, so that they can take over from humans even sooner than expected.