If a climate change apocalypse ever leads to war, famine and disease, robots will be positioned to thrive, according to a new paper. Pictured here are traffic robots in Kinshasa, in the Democratic Republic of Congo. (Federico Scoppa/AFP/Getty Images)

We’ve already heard all the nasty consequences that could occur if the pace of global climate change doesn’t abate by the year 2050: wars over water, massive food scarcity and the extinction of once-populous species. Now add a new wrinkle to those abrupt and irreversible changes: superintelligent robots would be just about ready to take over from humanity in the event of any mass extinction impacting the planet.

In fact, according to a mind-blowing research paper published in mid-August by computer science researchers Joel Lehman and Risto Miikkulainen, robots would quickly evolve in the event of any mass extinction (defined as the loss of at least 75 percent of the species on the planet), something that has already happened five times in Earth’s history.

In a survival-of-the-fittest contest in which humans and robots start at zero (which is what we’re really talking about with a mass extinction event), robots would win every time. That’s because humans evolve on the slow, linear timescale of biological generations, while superintelligent robots could improve themselves exponentially with every design iteration. Simple math.
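To see what that “simple math” looks like, here is a toy illustration, not anything from the paper and with entirely made-up growth rates, of how a capability that compounds by a small percentage each iteration eventually overtakes one that improves by a fixed amount each generation:

import math

linear, exponential = 1.0, 1.0
for step in range(1, 101):
    linear += 0.5          # assumed fixed gain per biological generation
    exponential *= 1.05    # assumed 5 percent compounding gain per design iteration
    if exponential > linear:
        print(f"Compounding growth overtakes linear growth at step {step}.")
        break

With these assumed rates, the compounding curve lags for decades of steps and then pulls ahead for good, which is the intuition behind the claim above.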

Think about it: robots don’t need water and they don’t need food. All they need is a power source and a way to constantly refine the algorithms they use to make sense of the world around them. If they figure out how to stay powered up after severe and irreversible climate change impacts, perhaps by running on solar power as the machines did in the Hollywood film “Transcendence,” robots could quickly prove to be “fitter” than humans in responding to any mass extinction event.

There are some important caveats to keep in mind before drawing any conclusions. First, while the two computer science researchers focused on robotic evolution after a mass extinction, they didn’t specify that the extinction was a result of climate change. Second, the researchers didn’t suggest that robots would eventually vie with humans for control of the earth, or offer any side-by-side comparison of how humans and robots would evolve.

Yet it’s impossible to think of a mass extinction event impacting the earth without wondering who (or what) might actually survive. That’s why it makes sense to frame the research paper on the link between mass extinctions and robotic evolution as a way of understanding the potential confluence of two dates in the future: 2050 happens to be the year the UN foresees potentially devastating impacts from climate change, and 2050 also happens to be the approximate hit point for the rise of superintelligence.

Of these two 2050 dates, it’s probably the superintelligence hit point that’s most up for grabs. While some AI experts have put firm timelines on when the so-called Singularity might occur, most notably Ray Kurzweil, who tagged it as 2045 in his book “The Singularity Is Near,” and Vernor Vinge, who predicted it might happen as soon as 2023, the consensus estimate appears to be a moving target: “15 to 25 years from now.” The median estimate of nearly 100 superintelligence predictions, as calculated by Stuart Armstrong at the Singularity Summit in 2012, was the year 2040, which is, you guessed it, within the range of “15 to 25 years from now.”

In their paper, the computer science researchers set out to see how mass extinctions affect evolution and overall evolvability. To do that, they came up with a clever way to measure evolution in robots, using insights from previous work on neural networks, genetic algorithms and the field of evolutionary robotics, which applies Darwinian-like principles of natural selection to the design of intelligent robots.
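For a sense of what that kind of experiment looks like in practice, here is a minimal sketch, not the authors’ actual code, of a toy genetic algorithm in which “controllers” are just weight vectors, the fitness function is a placeholder for a robot’s task performance, and a mass extinction periodically removes a random 75 percent of the population. Only that 75 percent figure comes from the paper’s definition; the population size, genome length, mutation settings and extinction interval are assumptions made for illustration.

import random

GENOME_LEN = 16         # number of controller weights (hypothetical)
POP_SIZE = 100          # assumed population size
GENERATIONS = 200
EXTINCTION_EVERY = 50   # assumed generations between extinction events
EXTINCTION_RATE = 0.75  # loss of 75 percent, per the paper's definition

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1, scale=0.2):
    # Perturb a fraction of the weights with small Gaussian noise.
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def fitness(genome):
    # Placeholder task score; a real study would evaluate a neural-network
    # controller inside a robot simulation instead.
    return -sum((w - 0.5) ** 2 for w in genome)

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for gen in range(1, GENERATIONS + 1):
        # Mass extinction: a random 75 percent of the population is wiped out.
        if gen % EXTINCTION_EVERY == 0:
            population = random.sample(
                population, int(POP_SIZE * (1 - EXTINCTION_RATE)))
        # Selection: keep the fitter half, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        parents = population[: max(2, len(population) // 2)]
        while len(population) < POP_SIZE:
            population.append(mutate(random.choice(parents)))
        if gen % 25 == 0:
            best = max(population, key=fitness)
            print(f"generation {gen}: best fitness {fitness(best):.4f}")

if __name__ == "__main__":
    evolve()

The interesting question in the actual research isn’t whether fitness recovers after each wipeout, but whether the lineages that survive repeated extinctions end up more evolvable, better at producing useful variation, than lineages that never faced one.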

A U.N. report released Monday in Japan said global warming is worsening food and water shortages, damaging economic livelihoods and raising the risk of wars. (Reuters)

Of course, a lot has to go badly wrong before we can seriously talk about robots taking over from humans.

First of all, there has to be the type of climate change event in 2050 that many — including the UN — are warning about.

Second, that climate change has to cause the type of destruction that the UN has been warning about, leading to a mass extinction. (Not so far-fetched, given that some scientists say we’re already in the midst of a “sixth extinction.”)

Finally, robots have to become highly intelligent, if not superintelligent, by the year 2050. You’re not going to have a Roomba or an Asimo taking over the earth anytime soon. There’s going to have to be a massive acceleration in AI between now and then.

But, still, the scenario of a robot uprising brought on by massive global climate change is thought-provoking, if not a bit unsettling: the year projected for a massive global climate change event, 2050, is also approximately the date set by fans of the Singularity for the emergence of superintelligence. In short, at the exact moment that humanity hits a point of no return, the robots would be just about hitting their stride.

That raises a lot of perplexing questions about the future of AI that need to be answered sooner rather than later. If humanity is smart, it will start looking to AI to help head off climate-related risks rather than be blindsided by AI after abrupt and irreversible climate changes take hold. It’s only a small example, but China is now partnering with IBM to examine how AI can help fight air pollution. Add up enough of these AI initiatives, and it may be enough to push back the doomsday clock on a mass extinction.

New studies in evolutionary robotics may also encourage people to start asking the types of big-picture questions about AI that only a small group of people on the planet may be capable of answering right now. That means bringing in the likes of Gates, Hawking and Musk (all of whom have voiced serious concerns about AI) and mapping out a future in which man and machine can co-exist. As they suggest, we have to become smarter about the way we develop the future of artificial intelligence.

And, finally, it means that we should think carefully about the way we co-evolve alongside AI up until the year 2050. Maybe a man-machine hybrid is not so crazy after all. If we’re not careful, superintelligent robots could decide that helping out humans with our constant demands is just not worth their time and effort. In fact, it would make for a great plot line for a Hollywood dystopian film a few years from now: robots begin to warm the planet on purpose to accelerate a mass extinction, so that they can take over from humans even sooner than expected.