The Washington Post | Democracy Dies in Darkness

Why Stephen Hawking believes the next 100 years may be humanity’s toughest test

Stephen Hawking sits on a stage ahead of the announcement of the Stephen Hawking medal for science at the "Starmus" festival on Dec. 16 in London. (Dan Kitwood/Getty Images)

Theoretical physicist Stephen Hawking has made no effort to conceal his fears about the dangers of artificial intelligence, warning on multiple occasions that blindly embracing pioneering technology could trigger humanity's annihilation.

Earlier this month, the 74-year-old Cambridge University professor expounded upon those fears, adding new disasters to his growing list of apocalyptic scenarios that humanity may have to contend with this century, including nuclear war, global warming and genetically engineered viruses, the BBC reported.

According to Hawking, there is, however, a glimmer of hope: Leaving the planet behind.

"Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years," Hawking told audience members in a public Q&A session ahead of the annual BBC Reith Lectures. "By that time we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race."

"However, we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period."

While Hawking believes technology has the capacity to ensure mankind's survival, previous statements suggest the cosmologist is simultaneously grappling with the potential threat it poses. When it comes to discussing that threat, Hawking is unmistakably blunt.

"I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC in a 2014 interview that touched upon everything from online privacy to his affinity for his robotic-sounding voice.

Despite its current usefulness, he cautioned, further developing AI could prove a fatal mistake.

"Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate," Hawking warned in recent months. "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

The cosmologist lives with the motor neuron disease amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease. As the disease has progressed, he has become almost entirely paralyzed. In 1985, after contracting pneumonia, Hawking underwent a tracheotomy that left him unable to speak, and he can now communicate verbally only with the assistance of a computer.

While some have called for slowing the pace of innovation, Hawking's latest comments suggest he considers that idea unrealistic, according to the BBC.

"We are not going to stop making progress, or reverse it, so we have to recognize the dangers and control them," he said. "I'm an optimist, and I believe we can."

Despite his focus on some of the most ominous outcomes humanity could face, Hawking said young scientists should not be discouraged by the challenges ahead, the BBC reported.

"From my own perspective, it has been a glorious time to be alive and doing research in theoretical physics," he said. "There is nothing like the Eureka moment of discovering something that no one knew before."
