(Diego Patiño for The Washington Post)

Asteroids! Solar Storms! Nukes! Climate Calamity! Killer Robots!

A guide to contemporary doomsday scenarios — from the threats you know about to the ones you never think of

A few days before NASA tried to crash a spacecraft into an asteroid as part of what it called the Double Asteroid Redirection Test, I talked to Lindley Johnson, the agency’s planetary defense officer. I think we can all agree that this sounds like an important job.

The planetary defense officer focuses on the detection of dangerous asteroids and comets that might threaten the Earth (as in the movies “Don’t Look Up” and “Armageddon” and “Deep Impact”), and explores technologies for preventing such a thing from happening. This job is not to be confused with the NASA planetary protection officer, who is supposed to keep Earth’s microbes from contaminating other worlds or hypothetical alien microbes from coming to Earth, as in “The Andromeda Strain.”

The Double Asteroid Redirection Test (DART) was conceived as NASA’s first planetary defense mission. A golf-cart-size spacecraft was launched in November 2021 from California. If all went as planned, its 10-month journey would end at precisely 7:14 p.m. Eastern on Monday, Sept. 26, when it would collide with an asteroid named Dimorphos.

The asteroid posed no threat. This mission was just a test of a possible technique of asteroid deflection: a “kinetic impactor.” No hydrogen bombs needed. The collision, if successful, would help refine existing models for what it would take to keep an asteroid from striking Earth.

My question to Johnson: How worried should we be, really, about killer rocks from space? He said a major asteroid impact is rare but potentially catastrophic. He cited the Tunguska event of 1908, when either an asteroid or comet exploded over a remote region of Siberia and flattened 800 square miles of forest. It was, he said, “probably a once-every-200-years or so event, on average. But it’s entirely random. These can impact any time.”

Johnson explained that there are many asteroids lurking out there, still unidentified, that are bigger than the Tunguska object, and they “would devastate a multistate area — a natural disaster of a scale we’ve never had to deal with. That includes all the earthquakes and hurricanes that have ever happened in the past. It could be an existential threat to national well-being — an economic disaster as well as an environmental disaster.” He paused a beat and said, calmly, “So it’s not something you want to happen.”

And here we are at the crux of our existential predicament as a species: There are just so many things we don’t want to happen. There are so many potential doomsdays.

This is not the cheeriest topic, to be sure, but it’s endlessly fascinating if you can stomach it. What are our biggest existential risks? Should we feel more threatened by low-probability but high-consequence risks, such as asteroid impacts and runaway artificial intelligence (robot overlords and whatnot), or should we focus on less exotic, here-and-now threats such as climate change, viral pandemics and weapons of mass destruction? And should we even worry about low-probability risks when hundreds of millions of people right now lack adequate food, water, and shelter and are living on less than $2 a day?

We are not being paranoid when we recognize that human civilization has become increasingly complex and simultaneously armed with techniques for self-destruction. There are bad omens everywhere, and not just the melting glaciers and dying polar bears. We’re all still unnerved by the pandemic. Meanwhile, there’s this ancient threat called war. Vladimir Putin and his advisers keep rattling the nuclear saber. A nuclear holocaust is the classic apocalyptic scenario that never went away.

Not every doomsday scenario is a full-blown extinction event. There are extremely suboptimal futures in which our species straggles onward in a brutish, Hobbesian nightmare — back to the Stone Age. People who think about “existential risk” are focused on the collapse of civilization as we know it. One of their recurring themes is that there has never been a moment as pivotal as this one. “We see a species precariously close to self-destruction, with a future of immense promise hanging in the balance,” declares Oxford University philosopher Toby Ord in his book “The Precipice: Existential Risk and the Future of Humanity.” He gives us a 1 in 6 chance of “existential catastrophe” in the next 100 years.

Ord is part of a new intellectual movement called “longtermism.” Proponents of the long view say we have moral obligations to the welfare of the trillions of people who might potentially follow us here on Earth, and on worlds across the universe. Highest among those obligations, of course, is to avoid destroying ourselves and our planet before those future people are born.

To be transparent here: I skew cautiously optimistic. In theory, I would argue, we should be able to leverage our science and technology — and the evolutionary miracle of our capacity for empathy, kindness and thoughtfulness — to survive and even thrive into the future. But one’s view of human destiny seems to split along generational lines, at least in my circles, where young people have grown up under the cloud of the climate crisis and the failure of leaders to respond adequately to it. They may not find it persuasive when some privileged boomer like me tells them that, sure, we’ve made a total mess of the world and civilization is imperiled, but don’t worry — we’ve got our best people working on it.

This anti-doomsday sales job becomes even harder when we acknowledge that the climate crisis, pandemic viruses and the threat of nuclear war are only a few items on the long list of things that informed people should be fretting about. Optimism may prove delusional — a fatal flaw, in fact. But how you come down on existential risks may pivot on whether you think human ingenuity will outpace human folly. Do you believe, fundamentally, in the human race?

The Johns Hopkins University Applied Physics Laboratory has roots dating to World War II but remains a remarkably low-profile operation. It’s a 24-mile drive southwest of the main Hopkins campus in Baltimore. The mailing address is “Laurel, MD,” but visitors will notice that it’s not anywhere near Laurel, almost as if the address is trying to confuse anyone hoping to find the place. “Below the radar” is how one of the media relations people described APL. It has some 7,000 full-time employees and a campus the size of a small university. It does a lot of classified research with military applications.

The lab handled the DART mission under a NASA contract, and the assigned task was very much in its wheelhouse. It once landed a spacecraft on the asteroid Eros. It put a spacecraft in orbit around Mercury. And in 2015 it managed to fly the New Horizons spacecraft by Pluto, snapping the first close-up images of the dwarf planet. Among its future missions: sending an “octocopter” to explore the surface of Titan, Saturn’s giant moon. Titan’s atmosphere is four times as dense as our own. “If you put on wings,” Ralph Semmel, director of the laboratory, told me, “you could literally flap your wings and fly.”

Semmel spoke with me in his office a few days before the scheduled collision, and the conversation bounced from one existential risk to another. Naturally we talked about the pandemic. “Think about the impact that covid had on the world and the nation. Consider that a body blow. How many body blows like that can the nation or the world sustain before the social fabric of societies begins to crumble?” he asked.

The laboratory has studied four existential risks: asteroids, solar storms, climate change and what he called “biothreats” — which could be anything from a natural pathogen to an engineered bioweapon. The DART mission focused on the first, and although this is exactly the kind of thing at which his engineers are brilliant, there was a real chance of failure. No one had ever knocked a celestial object off course.

Semmel wanted a success not just for the reputation of his laboratory. The world needs it, he said. “We’re just emerging” from the pandemic, he noted. “We’re in a pretty sad place right now from a global and national standpoint. I think some really positive results and news could really bolster folks.” He paused. “We can save life on Earth from extinction. Wouldn’t it be cool to know that?”

I’m actually not all that worried about an asteroid impact. Asteroids are way down at No. 8 on my list of Top 10 Existential Worries (a list I just typed up at the urging of an editor, and which I present simply as a discussion tool):

10. Solar storm or gamma-ray burst.

9. Supervolcano eruption.

8. Asteroid impact.

7. Naturally emergent, or maliciously engineered, pandemic plant pathogen affecting staple crops.

6. Naturally emergent, or maliciously engineered, pandemic human pathogen.

5. Orwellian dystopia. Totalitarianism. Endless war paraded as peace. The human spirit crushed. Not a world you’d want to live in.

4. Cascading technological failures due to cyberattack, reckless development of artificial intelligence and/or some other example of complex systems failing in complex ways.

3. Nuclear war (may jump soon to No. 1).

2. Environmental catastrophe from climate change and other desecrations of the natural world.

1. Threat X. The unknown unknown. Something dreadful but not even imagined. The creature that lives under the bed.

Your apocalypse may vary. Toby Ord, for example, ranks “unaligned artificial intelligence” as the top risk, while another Oxford scholar, Anders Sandberg, puts nuclear war first, followed by an engineered pandemic.

A risk can be described by its probability times its consequence. The probability of significant climate change and other environmental damage is 100 percent, as we can see with our own eyes. The ultimate severity of the consequences at the global level depends on what we do about it. It’s certainly the top existential crisis for those species that are on the verge of going extinct, and for those cities that may run out of sandbags as the seas rise. We’re witnessing a mass extinction event, and we’re the cause. If we can’t solve the climate crisis and protect the environment of our beautiful blue marble, we probably can’t solve any of the other existential threats either. (Please don’t count on escaping to some other world. That is science fiction. Realistically there is no Planet B.)

An asteroid impact, in contrast with the climate crisis, is an example of a low-probability hazard with an unusually wide range of potential consequences. Asteroids are detritus from the formation of the solar system 4.6 billion years ago. Most are far away, orbiting the sun in the asteroid belt between Mars and Jupiter. But some asteroids have orbits that cross the orbital path of Earth.

The impact of a mountain-size rock like the one that struck the Earth 66 million years ago and ended the reign of the dinosaurs would potentially put the final period on the human story. But the probability of such an impact happening is very low; an event like this occurs about once every 100 million years, according to Cathy Plesko, who works on planetary defense at the Los Alamos National Laboratory. Far more likely is an impact from a smaller but still dangerous rock, and exactly what would happen remains a subject of intensive research, Plesko told me. “We’re trying to understand how hard of a punch can we take,” she said.

Scientists and engineers talk about “risk matrices.” One risk matrix used by NASA has green squares and yellow squares and red squares. The dark green square in the lower left part of the matrix is good: low probability combined with low consequence. Dark red in the upper right part of the matrix is bad: high probability, high consequence. Yellow squares are in between.
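The probability-times-consequence logic of such a matrix can be sketched in a few lines of code. This is a toy illustration only — the scales and color thresholds below are invented for the example, not NASA’s actual values:

```python
# Toy sketch of a probability-vs.-consequence risk matrix.
# Both inputs are on an assumed 1 (low) to 5 (high) scale;
# the bucket thresholds are illustrative, not NASA's.

def risk_color(probability, consequence):
    score = probability * consequence
    if score <= 5:
        return "green"   # lower left of the matrix: low probability, low consequence
    if score <= 14:
        return "yellow"  # the in-between squares
    return "red"         # upper right: high probability, high consequence

print(risk_color(1, 2))  # a rare, modest hazard -> green
print(risk_color(5, 5))  # certain and severe -> red
```

The point of the exercise is the one the article goes on to make: the formula is trivial, but for many of the risks under discussion we can’t confidently fill in either input.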

The problem is, we don’t actually know the hue of many of the risks under discussion. For example, there’s much talk these days about “superintelligence” — some kind of artificial intelligence program that achieves consciousness, escapes human control and runs amok. Humans are enslaved. Or turned into batteries.

A Hollywood fantasy? Artificial intelligence is already filtering through our daily lives, but no program has yet developed the common sense of a human toddler, and self-driving cars still struggle to understand that a snowman by the side of the road isn’t going to try to cross the street.

Humans are remarkably adaptable. Coping with changing circumstances, modeling the future, coming up with strategies and workarounds, mending our ways: This is kind of what we do — our evolutionary niche. But we do not function like a beehive (and by the way, bees are in trouble). We tend to be competitive, selfish, greedy, favoring individual happiness over that of the collective.

Sailing merrily against the prevailing winds of pessimism is Harvard University psychology professor Steven Pinker, probably the world’s most prominent champion of the idea of progress in human affairs. He insists that he’s neither an optimist nor a pessimist and is just laying out the facts — including the positive things that don’t make the front page. “Headlines give you a misleading view of the state of the world because they’re a nonrandom sample of what’s happening. They’re the most sudden, the most lurid, the newsworthy events, so they’re probably going to be bad,” he told me recently. “Human psychology is attuned to the negative. It’s called the negativity bias. We dread bad things more than we savor good things.”

Fear is a protective evolutionary adaptation. We need to be aware of worst-case scenarios. Fear, anger and outrage fuel action, and action lowers risk. But fear can also be exploited by charlatans and demagogues as a sales technique. Let the record reflect that many of the most unnerving doomsday scenarios of the past century have not come true. The “population bomb” that incited apocalyptic predictions in the 1960s did not lead to rising global death rates from famine. Since 1945, nuclear weapons have somehow stayed in their bomber bays, silos and submarines (at least, as this story goes to press). The Y2K computer bug didn’t shatter the economy or cause planes to plunge to Earth.

I recall that one of my elementary school teachers in the late 1960s declared that the world was going to be blown up and destroyed by atomic bombs within five years. She attributed this startling fact to “the experts.” Arguably this was too heavy a thing to lay on kids who were just trying to learn the multiplication table, but that was the spirit of the time.

The world, of course, hasn’t blown up. Still, it’s “100 seconds to midnight,” according to the Doomsday Clock, the metaphor of our vulnerability as determined by the Bulletin of the Atomic Scientists. In this year’s update, the atomic scientists, who have branched out beyond nuclear weapons to incorporate existential risks like artificial intelligence and bioweapons, described humanity’s current position as “doom’s doorstep” and added, “the Clock remains the closest it has ever been to civilization-ending apocalypse because the world remains stuck in an extremely dangerous moment.”

Also worth remembering: There’s no reason to think that multiple existential risks can’t happen simultaneously. Like a supervolcano erupting just when the killer robots announce they’re taking over (good, let them handle it).

The point is, the Menu of Doom is even longer than the one they give you at the Cheesecake Factory. Historian Niall Ferguson, author of “Doom: The Politics of Catastrophe,” told me, “Future historians will find it ironic that we had so many debates about climate change when something else was about to smash us.”

He is not particularly concerned about human extinction — “The human species is incredibly hard to kill off” — but he worries about the danger of “totalitarianism 2.0.” “If history has anything to tell us, it is that totalitarianism is the most dangerous thing that we’ve ever come up with,” Ferguson said. “The most destructive things of the 20th century were the result of totalitarian regimes: Hitler’s and Stalin’s and Mao’s.”

Any such list of potential doomsdays should be written in pencil, since the future is reliably unpredictable. If we were able to identify today what our most pressing problem will definitely be in, say, 50 years, much less a century from now, we’d be the first generation to possess such awesome clairvoyance.

No one in the year 1900 was worried about nuclear war. The idea of splitting the atom for military purposes had not yet entered the minds of even the most visionary scientists because physicists had only the sketchiest understanding of what an atom is, and no inkling that vast energies were bound up in the (still undiscovered) nucleus. Forty-five years later, Hiroshima and Nagasaki were destroyed with massive loss of life.

So you want to avoid the blind-side hit. Don’t assume you know more than you do. Stay nimble. And do look up.


After the past 2½ years, pandemics — once a relative afterthought for most of us — have resumed their historic position as a scourge of humankind. Yet again we find ourselves living through plague years. The good news: Vaccines, antibiotics, antivirals, monoclonal antibodies and genome sequencing have given us tools to fight pathogens. The bad news: The microbes adapt. Antibiotic resistance is on the rise. Evolution is true.

Pandemics may become more frequent as we invade new habitats and intensify interaction with wildlife carrying viruses potentially capable of spilling into the human population. Recently I emailed Ian Lipkin, an epidemiologist at Columbia University, and asked how many animal viruses are lurking out there, yet undiscovered. He answered that he and his colleagues estimate there are at least “320,000 viruses awaiting discovery.”

What about a potentially catastrophic misuse of genetic engineering, including the revolutionary CRISPR gene-editing technique? I posed that question to Jennifer Doudna, the Nobel laureate who co-invented CRISPR and who has been outspoken in warning against misuse of the technology. By email, she pointed out that researchers are using the technology to help humanity on multiple fronts, including health, agriculture and climate strategies. As for existential risks, “currently there are significant technical as well as knowledge barriers to using genetic engineering in ways that could threaten our society at scale.”

What about volcanoes? They somehow get overlooked in the existential-risk conversation. When Mount Tambora in present-day Indonesia erupted in 1815, it led to “the year without a summer.” And what about Yellowstone — a “supervolcano”? The national park sits atop a hot spot in the Earth’s mantle and had massive eruptions 2 million, 1.3 million and 630,000 years ago.

The impeccable source on that is Robert Smith, a University of Utah professor emeritus who has studied the Yellowstone volcanic and hydrothermal system for 66 years and is known as Mr. Yellowstone. He assured me Yellowstone is not about to have a catastrophic eruption. Smaller eruptions happen more often than the big, caldera-forming eruptions. He has calculated the probability of a full-blown eruption at Yellowstone at 0.00014 percent per year. “The chances of having a super-eruption in the lifetime of a person is exceedingly low. There are much higher risks,” he said.
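Smith’s annual figure can be converted into a lifetime risk with a quick back-of-envelope calculation (the 80-year lifespan here is an assumption for the sake of the arithmetic):

```python
# Compounding Smith's 0.00014 percent-per-year figure over an
# assumed 80-year lifetime.
annual_pct = 0.00014                     # percent per year, per Smith
p = annual_pct / 100                     # 1.4e-6 as a probability
lifetime_years = 80                      # assumed lifespan
p_lifetime = 1 - (1 - p) ** lifetime_years
print(f"{p_lifetime:.4%}")               # roughly 0.011% -- about 1 in 9,000
```

Which is to say: exceedingly low, just as he said, and far below many of the everyday risks we shrug off.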

Then there are gamma-ray bursts. These are powerful jets of radiation from deep space, produced by exploding or colliding stars, and theoretically a threat to life on Earth. To get a handle on this, I emailed Sara Seager, a professor of physics and planetary science at MIT and a recipient of a MacArthur Foundation “genius” grant. Her response was brief and to the point: “I’m not worried about getting zapped by gamma rays from deep space. The objects that explode are nowhere near us.” I’m with her.

Far more likely, and therefore worrisome, is a dangerous solar storm, particularly of the type called a “coronal mass ejection.” The sun can hurl a massive cloud of magnetized plasma at Earth, throwing our own protective magnetic field for a loop and potentially disabling the electrical grid for weeks, months or longer.

There’s an expert on this at the Applied Physics Laboratory: James P. Mason, a research scientist and engineer who is the principal investigator for a planned NASA mission that will use a small satellite to scrutinize the solar corona in regions not currently observed. We’ve had some near-misses from ejected solar material, he says. In 2012 a plume missed Earth but zapped a sun-observing scientific spacecraft.

He is dismayed that we haven’t done more as a society to get ready for when the sun gives us trouble. The grid needs backup hardware. We need transformers on standby. “Eventually they will hit. It’s only a matter of time, basically,” Mason says. “It’s a known known.”

Anxiety over existential risks is heightened when we decide our societal leaders can’t be trusted. We do not trust “the experts.” We do not trust the government, the mainstream news media, the corporations, the pharmaceutical industry, Wall Street, the capitalists (or communists), the United Nations, the Gates Foundation or the owners of National Football League franchises.

The problem with all this distrust is that, in a crisis, societies need collective action. This has become harder because we no longer have a “common media culture,” as Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, puts it. “We now have a media culture where creating distrust of any of those [expert] voices is profitable.”

There is, however, one federal agency that seems to have retained widespread respect for its competence: NASA. The asteroid whackers!

A few days before the DART collision, Nancy Chabot, the DART coordination lead, and engineer Elena Adams walked me through the ingenious concept of the mission. This was called the Double Asteroid Redirection Test because there were two asteroids, a binary pair, involved. Dimorphos, the target, orbited a larger rock, Didymos, every 11 hours and 55 minutes.

By focusing on twin asteroids, the mission simplified a lot of the technical challenges. It would be relatively easy to detect with telescopes any DART-inflicted change in the orbit of Dimorphos around its larger companion. That kept the budget for the mission at a relatively modest $330 million. And Dimorphos was a great target in part because it was the right size. Chabot explained that the major goal of asteroid hunters is to identify the ones between 140 meters (460 feet) and 1 kilometer (0.6 miles) in diameter. Dimorphos was estimated to be 160 meters (525 feet) across.

There were uncertainties about this rock, though. Chabot and Adams had been working on the mission for years but still didn’t know what Dimorphos looked like. They also couldn’t be entirely sure that DART would hit the target. If DART missed, it would keep going around the sun; in theory it could get another shot at ramming into Dimorphos in about two years. But the DART team wouldn’t even discuss that. Success was the only acceptable result. Close wouldn’t count.

By late Monday afternoon, the 26th, the Applied Physics Laboratory was abuzz. Reporters on the space beat were stationed in a building nowhere near the mission operations center, but NASA and laboratory officials circulated through to brief us on the progress of the spacecraft as it neared the asteroid. “The level of certainty is not 100 percent on these missions,” Thomas Zurbuchen, the associate administrator for science at NASA, told us. “We cannot talk our way into it.”

Robert Braun, head of space exploration missions for the laboratory, floated one theoretically possible but unlikely scenario: “If we were right on course and it was shaped like a doughnut, we’d fly right through it.”

As day turned to dusk, the show was on. About an hour before impact, Dimorphos appeared as a tiny, barely perceptible dot near the much brighter asteroid. The dot got bigger very slowly. Only in the last few minutes did we all get a good look at the target: a gray, harsh, lifeless rock pile. By this point everyone in the mission operations center was standing. Dimorphos grew larger in the frame. Closer, closer … and one final image. Then nothing. A blank screen. Loss of signal. Spacecraft destroyed. Success!

Forty-five minutes later the scientists and engineers held their news conference, giddy with excitement. A TV reporter asked: Should earthlings sleep better now? Adams, the engineer, offered the desired sound bite: “I think that earthlings should sleep better. Definitely I will.”

NASA held a sparsely attended but live-streamed news conference Oct. 11 at agency headquarters in Washington and revealed what scientists had learned about the collision. DART, Chabot said, delivered a powerful punch, at the high end of what had been expected. NASA had defined mission “success” as altering Dimorphos’s orbit by at least 73 seconds. But the rock’s orbit around its larger companion was shortened by a full 32 minutes.
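The figures from the briefing are easy to put in proportion — a 32-minute change against an orbit of 11 hours and 55 minutes, and against NASA’s 73-second bar for success:

```python
# Putting DART's result in context, using numbers from the NASA briefing.
before_min = 11 * 60 + 55      # 715 minutes: Dimorphos's pre-impact orbit
change_min = 32                # minutes the orbit was shortened
threshold_s = 73               # NASA's minimum change for "success," in seconds

print(f"{change_min / before_min:.1%} shorter orbit")          # ~4.5%
print(f"{change_min * 60 / threshold_s:.0f}x the threshold")   # ~26x
```

In other words, the nudge exceeded the mission’s own definition of success by a factor of roughly 26.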

The scientists at the briefing did not claim to have saved the world. They understand the work that remains to be done. They need to find more asteroids and chart their orbits. They need to study other potential asteroid-deflection technologies. And at some point a system has to be put in place.

But even though scientists are not prone to bluster, the head of NASA, Bill Nelson, the 80-year-old former U.S. senator with a stentorian voice, did not hold back. “We conducted humanity’s first planetary defense test and we showed the world NASA is serious as a defender of this planet,” he said. A few moments later, he added, “This mission shows that NASA is trying to be ready for whatever the universe throws at us.”

Conceivably I could have peppered Nelson with a series of cynical and annoying questions. Is this alleged technology really practical? What if you had a big rock coming in fast — how many golf carts would you have to fling at it? How can you keep the rock-deflection system operational for centuries even if the global economy has collapsed? And, by the way, what can NASA do about solar storms, gamma-ray bursts, rogue black holes or other things the universe might throw at us?

No one asked anything like that. Sometimes you just celebrate the win — and get ready to fight another doomsday.

Joel Achenbach writes about science and health for The Washington Post.