
Why Google’s nightmare AI is putting demon puppies everywhere

A few weeks ago, Google researchers announced that they had peered inside the mind of an artificial intelligence program.

What they discovered was a demonic hellscape. You’ve seen the pictures.

[Bonus: We ran the entire GOP presidential field through Google’s DeepDream program.]

These are hallucinations produced by a cluster of simulated neurons trained to identify objects in a picture. The researchers wanted to better understand how the neural network operates, so they asked it to use its imagination. To daydream a little.

At first, they gave the computer abstract images to interpret — like a field of clouds. It was a Rorschach test. The artificial neurons saw what they wanted to see, which in this case were mutant animals dredged from the depths of damnation.

But the experiment really got out of control when the researchers asked the computer to apply its dream vision to perfectly normal pictures.

Google released the code for this dark art last week, encouraging everyone on the Internet to create their own nightmare images, or even animations. The following video — based on clips from Fear and Loathing in Las Vegas — will make you forever terrierfied of puppies:

How could anyone have let this happen?

The artificial intelligence at work here is the same kind of technology that Google uses to automatically tag photos that you’ve uploaded, sometimes with eerie success. Last fall, its researchers demonstrated a neural network that could caption entire scenes: “A group of young people playing a game of frisbee,” or “A closeup of a cat laying on a couch.”

These systems take their inspiration from how the neurons in a brain work. An individual neuron is something like a simple switch — nothing too complicated. But, in a process that’s not fully understood, when a bunch of these switches are hooked up to each other, intelligence emerges.
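That “simple switch” is easy to write down. Here is a minimal sketch in Python, using made-up numbers of our own rather than anything from Google’s systems: each neuron multiplies its inputs by learned weights, adds them up, and squashes the total into a firing strength.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'switch': weigh the inputs, add a bias,
    then squash the total into a firing strength between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Wiring two of them together makes the tiniest possible network:
# the first neuron's output becomes the second neuron's input.
hidden = neuron([0.8, 0.2], weights=[1.5, -2.0], bias=0.1)
output = neuron([hidden], weights=[2.2], bias=-1.0)
print(hidden, output)
```

The squashing function here is just one common choice; the specific math matters less than the wiring, which is the point the next paragraphs make.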

The secret is in how they form the network.

The human brain, for instance, has some 80 billion neurons, each of which is connected to perhaps 10,000 other neurons. Most scientists believe that thoughts and memories are created out of the patterns in how neurons link up with one another. (That’s still a theory, of course. In fact, UCLA researchers recently found evidence that parts of memories might also persist inside neurons.)

Like a human brain, an artificial neural network is only as good as what it has learned. A network for identifying images might be fed millions of labeled photos: This is a car. This is a cat. This is a rhino.

After looking at each photo, the artificial neurons adjust their connections to each other — strengthening some links, weakening others — so the system as a whole can better recognize the next car, cat, or rhino.
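For readers who want to see what “strengthening some links, weakening others” means mechanically, here is a deliberately tiny sketch of that learning loop. It is our own toy, not Google’s training code, and each “photo” is boiled down to a single number: after every labeled example, each connection gets nudged in whichever direction would have made the guess less wrong.

```python
import math
import random

def predict(x, w, b):
    """A one-neuron 'network': a weighted input squashed into a 0-to-1 guess."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Toy labeled data: inputs above 0.5 are labeled "cat" (1.0), the rest are not (0.0).
data = [(x, 1.0 if x > 0.5 else 0.0)
        for x in (random.random() for _ in range(1000))]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(20):              # a few passes through the "photo album"
    for x, label in data:
        guess = predict(x, w, b)
        error = guess - label    # how wrong was this guess?
        w -= lr * error * x      # strengthen or weaken the connection
        b -= lr * error          # and shift the neuron's threshold

print(predict(0.9, w, b))        # close to 1: "probably a cat"
print(predict(0.1, w, b))        # close to 0: "probably not"
```

Real systems do the same thing with millions of connections at once, but the basic move is identical: guess, measure the error, adjust, repeat.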

We need better ways of understanding how neural networks…work

A lot of research has gone into designing these networks and perfecting how they learn. But the process still has a whiff of magic about it. You don’t have to teach a neural network that rhinos have horns and cats have whiskers (it’s unclear if these are even concepts that it can understand). All you do is show the network millions of pictures and it will figure everything out on its own.

Sometimes the technology makes horrific mistakes. In June, it was discovered that Google’s algorithm had mistaken two black friends for gorillas.

Google quickly apologized. “There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future,” a company representative said in a statement at the time.

As a temporary fix, the company has stopped labeling anything “gorilla.” But it will take longer to teach Google’s artificial intelligence system not to make the same mistake in the future.

Everything that makes neural networks so magical also makes them extremely hard to diagnose. Look inside one of these artificial intelligence routines, and all you will see are connections. It’s nearly impossible for us to interpret what those connections mean — just as it is impossible, at the moment, to understand how the trillions or quadrillions of connections in a human brain create consciousness.

Why these nightmare scenes are scientifically interesting

Asking an artificial neural network to complete a Rorschach test is one way to check how it understands the world. The Google researchers call their code “DeepDream,” because in a way, that’s what the computer is doing: showing us all of the images that lurk in its subconscious.
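Google’s released code was built on the Caffe framework; the sketch below is not that code, just the same idea compressed into PyTorch using a pretrained GoogLeNet from torchvision (the clouds.jpg file name is a placeholder). Pick a layer, measure how strongly it fires on an image, then use gradient ascent to nudge the pixels so that whatever the layer already faintly sees gets stronger.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A pretrained GoogLeNet standing in for the BVLC model described below.
model = models.googlenet(weights="DEFAULT").eval()

# Capture one intermediate layer's activations with a forward hook.
acts = {}
model.inception4c.register_forward_hook(lambda mod, inp, out: acts.update(out=out))

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("clouds.jpg")).unsqueeze(0).requires_grad_(True)

for _ in range(20):                       # a handful of gradient-ascent steps
    model(img)
    loss = acts["out"].pow(2).mean()      # how strongly does this layer fire?
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
# De-normalize and save `img` to watch the "dream" emerge from the clouds.
```

Run enough steps and the clouds start sprouting eyes and snouts; the real DeepDream code does the same thing with extra care taken over image scale and smoothness.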

By now, the Internet has gone wild using the DeepDream code to generate Chernobyl ghoul creatures:

Why are there so many inappropriate puppies in these pictures? Because by default, DeepDream analyzes a neural network that is obsessed with puppies (as well as slugs, chalices, and fish).

The network is called “BVLC GoogLeNet.” It’s based on a model designed by Google researchers that was one of the top performers at an international machine vision competition last year. Researchers at the Berkeley Vision and Learning Center made their own version and trained it on the ImageNet benchmark’s 1.2 million labeled images.

There are pictures of butterflies, sea slugs, microphones, maracas, and hares — but none, it seems, of humans. There is also an unusually large number of dog photos, because the researchers wanted to teach the neural network how to distinguish between dog breeds.

Imagine if, from infancy, your parents only showed you pictures of dogs. Wouldn’t you go a little dog crazy? Wouldn’t you start seeing dogs everywhere, even behind closed eyes?

Something similar is going on when BVLC GoogLeNet sprinkles puppies on everything. It doesn’t understand what a human face looks like. But it knows canines really well.

Switch to a neural network exposed to a different set of images, and you’ll see a different set of obsessions. A neural network trained to identify landscapes puts pagodas and forests everywhere:

Neural networks have to be taught, and in this they resemble human children. Kids might make their own decisions, but they remain products of their upbringing.

Children aren’t born racist, for instance, but they can learn to be racist. So when Google’s AI accidentally labeled two black people as gorillas, was that the AI’s fault? Or was it Google’s fault, for failing to train the AI on enough pictures of black people?

The programmer who discovered the gorilla snafu raised exactly that suspicion on Twitter: “My fear is that the [algorithm is] fine, but the data used to train results is faulty,” he wrote.

Increasingly, it will be our personal data that these neural networks are trained on. The AI team at Facebook has been researching these techniques so the company might better understand our online behavior, our likes and dislikes. But as we’ve seen, this kind of AI operates opaquely. Even the architects of these neural networks don’t fully understand why these blobs of simulated neurons make the choices they make.

When a trained neural network starts to become racist, or sexist, there isn’t a switch that can be flipped to cure it. But scientists can study the network. They can try to understand why it is the way it is by probing it. They can show it flashcards to see how it responds, or ask it to draw a picture.
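The “flashcard” probe, at least, is easy to picture in code. Here is a hedged sketch along the lines of the earlier ones (again PyTorch, again a placeholder file name): show the network a single image and list which of its learned categories respond most strongly.

```python
import torch
from torchvision import models, transforms
from PIL import Image

weights = models.GoogLeNet_Weights.DEFAULT
model = models.googlenet(weights=weights).eval()
labels = weights.meta["categories"]            # the network's 1,000 known categories

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

with torch.no_grad():
    logits = model(preprocess(Image.open("flashcard.jpg")).unsqueeze(0))
    top = torch.softmax(logits, dim=1).topk(5)

for score, idx in zip(top.values[0], top.indices[0]):
    print(f"{labels[idx]}: {score:.1%}")       # the network's five strongest hunches
```

If the flashcard shows a person and the strongest hunches are all dog breeds, you have learned something important about what the network was, and wasn’t, raised on.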

Google’s DeepDream proposes an old solution to a new problem. It’s the equivalent of putting an AI on the psychiatrist’s couch and asking: What’s on your mind?
