“Killer robots” are firmly on the United Nations agenda. Earlier this year, experts and governments met in Geneva at the United Nations to discuss the ethics of deploying weapons capable of autonomously selecting and destroying human targets. Seventeen lawyers, roboticists and ethicists presented diverging views on the merits of such weapons. Delegates from 87 countries made remarks and asked pointed questions. From the sidelines, NGOs associated with the global coalition against these weapons – the Campaign to Stop Killer Robots – intervened with legal, moral and practical arguments against the idea of machines killing people, and emphasized the need for “meaningful human control” over all targeting decisions in war and police operations. Such advocates are calling for a pre-emptive global ban on fully autonomous weapons. Last week, the UN High Representative for Disarmament Angela Kane reaffirmed that sentiment, stating that autonomous weapons are a small step away and should be outlawed. A second experts’ meeting is planned for this fall.
Perhaps unsurprisingly, science fiction references infused media coverage of the original meeting in May, just as a stock photo of Terminators attacking dressed up news coverage of Angela Kane’s remarks last week. The Wall Street Journal’s headline about the Experts’ Meeting read “It’s Judgment Day for Killer Robots at the UN” and included a “Robocop” still. Reuters used a similar headline and went with an image from “The Terminator.” A few reports used pictures of Cylons. At Mashable, readers were told: “The UN [is battling] killer robots. Yes, the robopocalypse might be coming.”
The media might be forgiven for using such terms and images as click-bait. But some people have accused the Campaign to Stop Killer Robots of invoking “Hollywood paranoia” as well. NBCNews tech writer Keith Wagstaff asked whether “hysteria over the robopocalypse could hold back technology that could save human lives.” At the conference, autonomous weapons proponent Professor Ronald Arkin criticized the global coalition for holding a position based on “pathos” and “hype.” Another expert, Nils Melzer of the Geneva Center for Security Policy, began his slideshow with an image from “Terminator 2,” saying he would be taking an “objective” view rather than “demonizing” these weapons – a veiled jab at NGOs. Even earlier, Greg McNeal of Forbes Magazine had criticized the campaign for “scare-mongering” with Hollywood archetypes.
Is this fair? A closer look at the history and tactics of the global coalition tells a different story: a story of global civil society organizations maneuvering in a balanced way in a socio-cultural context in which they must persuade multiple stakeholders – governments, militaries, and the global public – to take a “far-out” issue dead seriously; and in which they face push-back by opponents who use claims of “hyperbole” in attempts to discredit them. In this version of the story, a number of common claims about the Campaign to Stop Killer Robots turn out to be myths.
Myth #1: Campaigners Are Reacting to Hype and Robopocalyptic Hyperbole.
Wrong: campaigners are responding to the ethical implications of developments in real-world military robotics, which they see as increasingly taking humans out of the loop on targeting decisions. Concerns about these developments were first raised in 2009 by the International Committee for Robot Arms Control, a network of concerned scientists and philosophers. Nowhere in their mission statement do they reference or give consideration to science fiction. Rather, Noel Sharkey, a professor of robotics who co-founded ICRAC and often speaks on the perils of autonomous weapons, emphasized that his concern was with whether real-world robots would be able to comply with the laws of war. Sharkey told reporters in 2008, “One of the fundamental laws of war is being able to discriminate real combatants and non-combatants. I can see no way that autonomous robots can deliver this for us.”
The first NGO to call for a ban was Article36.org, an organization specializing in weapons and humanitarian law. This group too emphasized concerns with the weapons and the law, drawing connections between these systems and existing sensor-fuzed weapons: “Weapons that are triggered automatically by the presence or proximity of their victim can rarely be used in a way that ensures distinction between military and civilian.” The global coalition against autonomous weapons was launched in April 2013, led by Human Rights Watch, whose earlier report “Losing Humanity” explored trends in military robotics and considered their implications in the context of international humanitarian law. The cause quickly gained adherents among humanitarian disarmament NGOs. Most of these coalition members emphasize concerns with discrimination, proportionality, the principle of human dignity and whether availability of such weapons might hasten the rush to war, making armed conflict more likely.
Myth #2: The Issue Got Media Attention Because the Campaign Used “Killer Robots” in Its Name.
Actually, it’s the other way around. The media was using “Terminator” and “Battlestar Galactica” references to report on developments in autonomous weaponry long before NGOs picked up the issue – as long ago as 2007. But as I detail in my new book, “‘Lost’ Causes,” NGOs didn’t launch their campaign until 2013, six years after scientists first began raising the cry. Prior to that, when the International Committee for Robot Arms Control approached humanitarian disarmament campaigners for help, NGOs actually turned them down precisely because of the hype around such weapons. At that time, campaigners worried people would view the issue as “science fiction” or that they would be criticized for whipping up Hollywood-style fears. This concern is reflected in the earliest NGO statement on autonomous weapons, which self-defensively included a line addressing this counter-argument: “Some may dismiss the development of autonomous military robots as ‘science fiction’, but it is coming ever closer on the 21st Century battlefield with a variety of systems already developed and deployed that require (and are given) less and less human decision making and direct control.”
Once the campaign launched, this concern turned out to be well-founded. At that point, campaigners had to navigate an already-sensationalistic cultural context to get stakeholders and the public to take their concerns seriously. They had to contend with the media sensationalizing their work, and with critics accusing them of scare-mongering. Adopting the term “killer robots” was a response to this problem. Veteran humanitarian disarmament campaigner Marc Garlasco said of the campaign: “Campaigners learned a lesson from Tyrion Lannister: if someone calls you a name you make it your armor. If you don’t mention sci-fi on this issue, it’s the elephant in the room. If you get it out of the way, and then just drop it and say ‘here’s real world,’ I think that’s very effective.” Indeed, in a recent Atlantic piece originally titled “Calling Autonomous Weapons Killer Robots is Genius,” Rose Eveleth writes: “the smartest thing the movement has done is pick its name. ‘Killer robots’ still isn’t a well-defined term, but it’s clearly a winning one.”
Myth #3: The Campaign Builds its Case on Robopocalyptic Metaphors.
Aside from adopting the label “killer robots,” the campaign has generally avoided sci-fi metaphors and built its case on real-world substance. Consider: the campaign’s mascot is not a Cylon or a Terminator but a friendly non-lethal robot called David Wreckham. The campaign’s logo is not a beady-eyed T-1000 but a simple gears-and-crosshairs design. These branding decisions were carefully considered, deliberated and chosen by campaigners in an effort to de-link their issue from the Hollywood hype surrounding it in people’s minds. At Geneva in May, no campaigner invoked “Robocop” or “Terminator” in their arguments on the floor. Rather, the speakers who referenced science fiction were autonomous weapons proponents aiming to dismiss the coalition by characterizing its arguments as “hype.”
Campaign press releases meant for the media and public do use the term “killer robots” because ordinary people will understand it, according to campaigners. But in their reports and remarks with governments and military lawyers, campaigners refer to “fully autonomous weapons,” principles of “proportionality and distinction,” “situational awareness,” and “meaningful human control.” Even campaign press releases meant for the public go quickly from an eye-catching title to a clear discussion of scientific, technical and legal realities. In this way, campaigners say, they are actually “de-science-fictionalizing” the issue. According to campaign coordinator Mary Wareham, “We said in our first press release that we’re not talking about the Terminator and we’re repeating that. The silly headlines are just a sign that the media haven’t grown up on this issue yet. Instead we use humor, we show that the campaign is not anti-robot: we love robots. Just don’t weaponize them.”
It is true that a few authors associated with the campaign have written op-eds featuring science fiction references in the headlines. But it is normally news editors, not authors, who select headlines, subheadings and images to go with opinion pieces: authors I have spoken to have often expressed surprise and dismay at the way Beltway publications and newspapers “dress up” their arguments in Hollywood images. In official campaign materials, the Campaign to Stop Killer Robots has stayed on message: precursors to fully autonomous weapons are disturbingly real, fully autonomous weapons aren’t here yet, NGOs want to keep it that way, and “robots are not for killing people.”
Myth #4: Sensationalistic “Terminator” References Are Unnecessarily Scaring the Public.
It’s easy to chalk public antipathy toward killer robots up to robopocalyptic fiction stoked by disarmament campaigners. But in fact, survey research has shown the average U.S. citizen is equally horrified by the idea of autonomous weapons whether they are referred to as “autonomous weapons” or “killer robots.” And they are equally horrified whether or not they report ever having seen the film “Terminator.”
A national survey carried out at the University of Massachusetts in May last year asked Americans how they felt about the idea of deploying autonomous weapons. Earlier in the survey (which included other questions) they were also asked about blockbuster movies they might have seen, including “Terminator.” But only half the respondents were asked about movies before they gave their answer on killer robots. The result? Neither movie viewership nor being “primed” with the “Terminator” references had a significant impact on whether respondents supported the idea of autonomous weapons.
Instead, when asked to explain why they answered the way they did, a majority of respondents opposing the weapons cited moral principles: the importance of human ethical judgment, the importance of human accountability for mistakes, the possibility that machines could be used tyrannically. In short, Americans are scared of machines killing humans because the idea is scary – not because of Hollywood hype.
Charli Carpenter is a Professor of Political Science at the University of Massachusetts and blogs at Duck of Minerva. She is the author of ‘Lost’ Causes: Agenda-Vetting in Global Issue Networks and the Shaping of Human Security and a contributor to Battlestar Galactica and International Relations. She can be reached at firstname.lastname@example.org.