As April makes way for summer movie season, the past few weekends have been dismal for moviegoers looking for something to stimulate the brain as well as the eye — with one exception: “Ex Machina,” a smart and sexy sci-fi thriller about a computer geek (Domhnall Gleeson) who is recruited by a reclusive tech entrepreneur (Oscar Isaac) to test the artificial intelligence of a rebellious female robot named Ava (Alicia Vikander). Questions about the nature of consciousness and free will percolate throughout the film, which just expanded to 1,200 theaters.
“Ex Machina” is the latest (and best) in a recent string of films — “Transcendence,” “Automata,” “Chappie” and “Eva” — that grapple with robots and artificial intelligence (or AI). And the streak is not over. Two likely summer blockbusters — “Avengers: Age of Ultron” and “Terminator: Genisys” — also have storylines about robots that become too smart for our own good.
We spoke with “Ex Machina” filmmaker Alex Garland, a British novelist and screenwriter whose credits include the scripts for “28 Days Later” and “Never Let Me Go,” picking his brain about the roots of this seeming cinematic obsession.
What does our enduring fascination with robots and artificial intelligence say about us?
The truth is, I don’t know. With that caveat, I have been thinking about this for a few years, and I can try to make an educated guess. It certainly looks like there’s something in the zeitgeist about it. If there had been a seismic breakthrough in artificial intelligence research, say, three years ago — because that’s roughly the cycle of filmmaking — then you could understand it. But there hasn’t been a breakthrough, so my instinct is to look somewhere else.
I personally look at the fact that there are these enormous tech companies that have power that seems to grow exponentially. There’s something disproportionate about the incredible rapidity of the way they stake a claim on the world. There’s also a sort of adjunct quality, which is that we access these tech companies via cellphones and computers and tablets, and yet we don’t really understand how they work. Yet conversely, these things seem to understand quite a lot about us. It’s actually the tech company, but it can seem to be the machine, because it will anticipate the thing that we’re trying to type into the search engine. It understands something about our shopping habits and things that make us feel slightly uneasy. On top of that, we’ve known, even predating Edward Snowden’s revelations, that largely what these companies were doing was storing massive amounts of information. It gets called “big data,” but it’s also quite small data. It’s very specific and tailored to an individual. On an unconscious level, and also on a reasonable level, it makes us uncomfortable. I actually feel that these narratives come more out of that than anything specific to do with artificial intelligence.
Isn’t our discomfort with technology contradicted, to some degree, by our insatiable appetite for it?
Without question, yeah.
“Transcendence” and “Chappie” each feature a dying character who seeks a kind of immortality by transferring his consciousness into a machine. Are these movies a form of artistic wish fulfillment?
I know for a fact that, for some of the people actively dropping enormous amounts of money into AI research, that is explicitly and openly their motivation: to upload themselves in order to live longer in another form.
Why does that fantasy hold so much appeal?
Because we’re mortal. Even religious people who believe in an afterlife will have a sense that something very fundamental about them is not going to continue. My approach to it was not to look at the individual extending his own lifespan, but more to see the creation of AI as a parental act. So the AI will have its own life that will extend beyond ours: the “child” goes off and does its own thing, and the parent, unfortunately, is left behind.
Isaac Asimov famously articulated three laws of robotics, the first of which states that a robot “may not injure a human being or, through inaction, allow a human being to come to harm.” Yet these laws are routinely violated in most contemporary robot movies, including your own.
Those Asimov laws have always felt to me like a real problem, because they preclude free will. You could debate whether humans have free will, but we certainly think we have it. We act as if we have it.
While maybe, in reality, we’re living in “The Matrix”?
Absolutely. I could always understand the logic, but they’re not actually laws. There is no science fiction court that’s going to prosecute me because I’ve failed to observe them. I think they’re problematic anyway. If you were able to go to a computer and you said, “I’m going to switch you off,” and the computer said, “I don’t want you to switch me off,” and if you had reason to believe that this wasn’t just an automatic statement — that the computer had some kind of emotional internal life — at that point you’ve got an ethical problem. I suspect that if you had a sentient machine, you’d have to start giving it pretty much what we currently call human rights.
“Ex Machina” wrestles with themes that many robot movies don’t even seem to be aware of.
I avoided all these other films because I didn’t want to get intimidated or frustrated by them. My intention was to tell a story that is effectively on the side of the machine. It was not a moralizing, cautionary tale about not messing with God’s work. The rules we make about each other really relate fundamentally to our minds. That’s why we can cut down a tree but not murder a human. As to the film, yeah, it attempted to run straight on at that stuff. It’s an ideas movie, I guess.
Some of its ideas come from Murray Shanahan’s 2010 book, “Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds.”
That was the book that crystallized it. I’d had the ideas in my head for maybe a decade, but while I was reading that book I had a nonreligious epiphany.
Where did the first idea come from?
I’ve got an older, much smarter friend whose key area of interest and work is neuroscience. His position is that machines are never going to be sentient, that there’s something specific about human consciousness that we don’t understand but that when we do understand how it works, it will preclude the possibility of a sentient machine. From my point of view, the problem with that is that it sounds more like metaphysics than science. I started reading about it in order to try and understand it better, and I have to say that in my years of reading and thinking about this, my suspicion has only gotten stronger that what my friend was saying is probably wrong.
The robot Ava in “Ex Machina” has a body, but couldn’t you have gone with a brain in a jar?
Yes, but you could also envisage a form of AI that is not a brain in a jar, but a brain in a spaceship, like HAL in “2001.” There’s a strong case to say that consciousness needs to be embodied in order to properly exist. That said, I was interested in imagining a human-like intelligence that shared concerns, distractions and fears like our own, a machine that could experience pleasure and might have a fear of death. When the first strong AI is eventually created, it probably won’t be very much like us. Ava is me taking a bit of a leap. You can see a dog is sentient. But it’s impossible for you or me to imagine what it’s like to be a dog. Whereas you could probably approximate what my thought processes are quite accurately, because they would be like yours. When the first strong AI gets here, I think it will be more like a dog than like us.
You’re quoted as saying that science “tells us where we are at intellectually.” Does science fiction tell us where we are at emotionally?
It can. Sci-fi is never completely unrelated to the time in which it was written. It tends to be a reflection of something current — Cold War fears or a critique of totalitarianism — even though it appears to anticipate something that’s coming in the future. Asimov and Arthur C. Clarke were examples of that, trying to figure out what consequences there would be, if any, from having robots: how we would treat them, and how they would treat us.
Your film suggests that the creation of AI entails an inevitable clash.
There’s a scene in “Ex Machina” where Ava turns to her creator and says, “What’s it like to have made something that hates you?” That, to me, is like an adolescent who is about to abandon the parent. It’s like a dad in the teenage daughter’s bedroom, and the daughter is basically telling him to push off.
If you had to predict a time frame for the development of true AI, what would be your educated guess?
I’d say it’s not imminent. I partly say that because of how it feels. But all throughout making this film, I got to meet a variety of people who are at the absolute cutting edge of these areas, either in understanding the mechanics of human consciousness or strong AI, and I got the same message from all of them. Which is that it’s pretty hard, and it’s not around the corner. We’re asking big questions, but there’s so much left to learn. As physicist Dick Taylor said, “The larger the searchlight, the larger the circumference of the unknown.”