I spend an inordinate amount of time thinking about the end of the world. I’m on the catastrophe beat around here. If we all faced certain doom, I know exactly what I’d be doing in my final hours of existence: Trying to file on deadline. (And then tweeting the link.)

Earthquakes, oil spills, hurricanes – that’s just for starters. I’m also tasked with thinking about low-probability, high-consequence events, such as an asteroid impact, or an eruption of the Yellowstone caldera (you know Yellowstone’s a big, bad-ass volcano, right?).

I get paid to worry about the future, and talk to other worriers. Recently, for example, I talked to scientists about “synthetic biology” and whether something cooked up in a lab might escape and run amok. The consensus is: not likely. Nature has already “invented” (if you will) an unbelievable array of organisms that invade every niche and exploit every available resource. There’s a name for a system that invents novel organisms, and it’s called “evolution.” (Here’s my Achenblog post that goes into the topic in greater detail.)

Then there are the very, very exotic hazards. The other day I was working on a story about space aliens, and whether we should beam signals to them, shouting into the void, trying to make contact. A major argument against such a move is that the aliens might be unfriendly, and potentially visit us, lugging along their “To Serve Man” cookbooks (obligatory Twilight Zone reference).

I’m going to mark that down as not-gonna-happen, on account of it being way too much hassle to come all the way here from anywhere else. We live in that kind of universe – you can’t get anywhere because everything’s too spread out. But here’s another crazy thought, mentioned to me by an astronomer: What if we made contact with the space aliens and they informed us, gently, that everything in the universe is just a big computer simulation, and that our lives here on Earth are not actually “real”? But one has to think, immediately, of Samuel Johnson kicking the stone and saying of Bishop Berkeley’s nothing-is-real thesis, “I refute it THUS.” Besides: If a simulated universe is exactly like a real universe, then I don’t think anyone has anything to complain about. (Or is that giving up too easily?)

My colleague Matt McFarland recently did a post on the 12 biggest existential threats facing humanity, and you’ll be surprised by what’s number one: Computers. Artificial intelligence. “Skynet” from “The Terminator.”

At some point, computers may begin to program themselves, achieving what is known in the worrying trade as Superintelligence. Is this a rational fear? I will say this: My computer is already a lot smarter than I am. Even my phone is. I didn’t expect to live to see the day when I wasn’t as smart as a phone. But I’m with Walter Isaacson, who in “The Innovators” says humans-plus-computers will always be smarter than computers alone.

There are other threats. Nuclear war. That’s not going away anytime soon. You also have the nanotechnology concerns that Matt wrote about: Again, hard to do risk analysis on something we know very little about.

It would be easy to make a list of possible threats, hazards, natural disasters and cosmic catastrophes (have I mentioned solar flares that fry the grid??), and then decide to curl up in a fetal ball and start whimpering. Except go back to Matt’s post: Add up all those existential threats and it’s still a long shot. Chances are, the future will be different from the present, just as the present is different from the past — but we’ll probably still be here in some form, and quite possibly a recognizable one.
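
(For the back-of-the-envelope minded: here’s a tiny sketch of what “adding up” those threats means, using completely made-up per-century probabilities, since nobody actually knows the real numbers. Even combined, a handful of independent long shots stays a long shot.)

```python
# Hypothetical numbers only: rough per-century chances for a few threats
# from a list like Matt's. None of these figures come from the post;
# they're placeholders to show the arithmetic.
threat_probabilities = {
    "asteroid impact": 0.0001,
    "supervolcano eruption": 0.001,
    "engineered pandemic": 0.01,
    "unfriendly AI": 0.01,
}

# Treating the threats as independent, the chance that none of them
# happens is the product of each one *not* happening.
p_none = 1.0
for p in threat_probabilities.values():
    p_none *= 1.0 - p

# The chance that at least one happens is everything left over.
p_any = 1.0 - p_none
print(f"Chance of at least one catastrophe this century: {p_any:.1%}")
# -> roughly 2.1% with these made-up inputs; still a long shot.
```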

The biggest issues aren’t existential so much as qualitative. What kind of world will it be? Beautiful? Clean? Marked by freedom and justice? Will children grow up in societies that give them a chance to fulfill their dreams? Will girls be given access to education? Will peace and harmony be the norm, or conflict and war? Utopia, dystopia, or some complicated situation like the one we’re in right now?

I am cautiously optimistic. I believe human beings are highly adaptive creatures. I think we can create a sustainable and beautiful civilization that doesn’t destroy the planet or impose totalitarianism. Optimism, I should note, is not a fashionable attitude in some circles. Indeed, there are those who would argue that optimism is counterproductive, an actual impediment to making changes and adapting. But optimism isn’t complacency. It has to be paired with a certain level of wariness and a general alertness. We have to find ways to look beyond our immediate horizons. The future isn’t simply something that happens; it’s something we’ll actively create, through hard choices, clever engineering and social progress. I believe that.

Though we probably ought to fund some new telescopes to look for incoming asteroids, just in case.