The Washington Post | Democracy Dies in Darkness

Elon Musk and Stephen Hawking think we should ban killer robots

A T-800 Terminator in a scene from "Terminator Salvation," a Warner Bros. Pictures release.

You'd think some of the world's greatest minds would be more enthusiastic about artificial intelligence. Not, it seems, when it comes to arming it.

An open letter signed by Elon Musk, Stephen Hawking and Steve Wozniak, among others, is making the case (again) that weaponized robots could lead to "a global AI arms race" that turns self-directed drones into "the Kalashnikovs of tomorrow."

"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so," the open letter reads. "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."

This isn't the first time these technologists have warned of the dangers of artificial intelligence. Musk has warned before that there "needs to be a lot more work on AI safety," and a previous open letter from Musk, Hawking, Wozniak and others spoke of the "pitfalls" that lay in wait if the research wasn't done carefully.


The prospect of weaponized autonomous drones is no doubt a tempting one for some militaries: They could compensate for a lack of manpower, surprise their enemies and turn war into a virtually bloodless (and therefore relatively cheap) affair for the side deploying them. And it would be no surprise if, upon seeing their rivals get hold of the technology, other countries wanted killer robots, too.

The solution, according to Musk and others, is a ban on offensive autonomous weapons, similar to the one that governs chemical weapons.

History suggests that such a ban could be hard to approve, let alone enforce: Despite many major powers signing the 1925 Geneva Protocol banning the use of chemical and biological weapons, other countries such as Japan and the United States did not become signatories until as late as the 1970s, according to the Arms Control Association. And even then, claims were still made about the use of such weapons in violation of the ban.

P.W. Singer is the author of "Wired for War: The Robotics Revolution and Conflict in the 21st Century" and a researcher at the New America Foundation who studies the future of warfare. When I asked him last month about the chances of the ban on weaponizing outer space surviving through the next few decades, he had this to say:

Would a treaty hold? I hope it would, because space is the one domain we've not fought in. Yet. History shows it's likely going to be more like the various treaties of the 1920s and 1930s that everyone signed up to. One, they really didn't respect them during the period, like we're seeing with these space weapons tests. But also when push came to shove in an actual war, they junked them.

Singer added that even though many countries have agreed not to militarize outer space, they still maintain programs "designed to fight in space and deny it to the other side [and] in the last year have ramped up those programs."


Bans on specific weapons or types of weapons can be extremely complicated. A somewhat less formal, though potentially no less effective, approach could emerge instead: a general unspoken agreement that deploying killer robots is simply beyond the pale.

This would resemble much more the norm against using nuclear weapons in anger. While there are treaties to prevent the spread of nuclear weapons, there is no comparable formal document governing their use. Today we mostly rely on the fear of mutual destruction, along with voluntary "no-first-use" commitments by countries such as China and India, which permit firing nuclear weapons only in response to a nuclear attack.