Ever since Elon Musk started making his comments about the risks posed by artificial intelligence, we’ve been deluged with stories about the “existential threat” posed to humanity by AI run amok. It’s gotten to the point where artificial intelligence is viewed as the Doomsday Machine that will result in the downfall of humanity.

And it’s not just AI – just about any new innovation is ripe for the tech dystopia treatment. Particle physicists gave us a scare for a while when they searched for the so-called “God Particle” – people thought we were going to blow up the earth. Robots and drones also make for compelling tech dystopia story lines – it’s far easier to imagine what might go wrong than what might go right.

And now there’s a new tech scare: CRISPR, the hot new gene-editing technology. Last week, in answering a question on Quora, legendary Silicon Valley venture capitalist Vinod Khosla said that CRISPR has the potential to be a “scarier” technology than AI: “I can think of more scary technologies [than AI] that we’re using today – CRISPR being one in biology – which has equally scary potential.”

Cue the scary background music.

Of course, the tech panic pieces about CRISPR are not entirely misplaced — there are obviously concerns. Hacking the human gene code could be truly and deeply hazardous. The fact that Chinese researchers are seriously thinking about cloning humans or editing the genes of new babies should give everyone pause for thought.

Since most of us don’t have PhDs in machine learning or molecular biology, though, it’s easy to get lost in the science and assume that things are further along than they are. Robots can barely get out of their own way, let alone mount a robot uprising. The smartest machines can barely figure out how to play a decent game of Tetris.

Even Khosla admits that AI is “probably more manageable than [Elon Musk] might intimate.” “We don’t know if we could get there in 20, 30, or 50 years, but it’s unlikely to be in 5,” he said on Quora.

So what’s behind all these stories about existential threats to humanity, then?

The not-so-obvious answer is that it has less to do with technology, and more to do with what makes us human. You can call it the real innovator’s dilemma – the desire to go farther vs. the fear of going too far. That’s essentially the story line of any Hollywood dystopian plot – scientists go too far, the technology gets out of control, and the end of the world is suddenly nigh.

And this tech dystopia vs. tech utopia debate didn’t start in Silicon Valley – it’s been going on since the days of the ancient Greeks. Consider the classic Greek myth involving Icarus — the young lad who dared to fly too close to the sun on wings made of feathers and wax.

The ancient Greeks understood something that people today don’t — that we as a society need to have these sorts of myths to keep us from going too far, from attempting too much, too soon. These myths are not meant to stop technological progress – they are meant as a way to inspire debate about the perils of human hubris, as well as the philosophical, moral and ethical concerns surrounding human progress.

The modern myth makers are the moviemakers of Hollywood, who are only too ready to develop story lines about the “scariest” technologies of Silicon Valley. You can think of today’s tech dystopian films featuring AI or biotechnology run amok as the modern equivalent of the Greek myths.

Joseph Campbell famously analyzed the elements of classic mythology and concluded that every culture, every society, comes up with the same basic narratives for its myths. That’s why so many of Hollywood’s blockbuster films appear to be so similar – they are just modern iterations of timeless tales. The only thing that changes is the technology. Not convinced? Watch Joseph Campbell break down “Star Wars” from the perspective of mythology:

Think of all the common elements of a tech dystopia story in the media — there’s an evil genius (the younger the better); scientists doing secretive stuff in labs that sounds incomprehensible to the layperson; references back to awful periods in human history (think Nazi Germany); and, of course, the possibility for destroying the earth. More importantly, the time horizon for things going wrong is usually just far enough away that it’s plausible and just close enough that it seems scary.

The New Yorker, for example, recently ran back-to-back stories on “The Gene Hackers” (about CRISPR) and then on “The Doomsday Invention” (about AI). The article on CRISPR even included the requisite reference back to Nazi Germany and the dangers of eugenics.

However, there are plenty of technologies “scarier” than AI or CRISPR.

Nuclear weapons would surely rank right up there as a potential way for humanity to destroy itself, especially if nukes get into the wrong hands. There is far more room for disaster there than if AI or CRISPR falls into the wrong hands.

Looking for another “scary” technology? How about carbon fuel technology? We’re literally warming the surface of the planet to the point where global climate change may lead to the extinction of life forms on Earth, including, yes, humans.

Yes, a dystopian future is possible, but so is a utopian future. Most likely, the answer is somewhere in the middle, the way it’s been for millennia. Ever since mankind started to innovate, there have been both good and bad possible outcomes anytime we try to fly too close to the sun.