Arnold Schwarzenegger in a scene from the futuristic action thriller "Terminator 3: Rise of the Machines." (Robert Zuckerman/Warner Bros.)

It's the Summer of Killer Robots! Or let's declare it to be so (trend-spotting promiscuously).

It's the KR Summer not just because Hollywood has put out another "Terminator" movie; there's been a lot of news about autonomous weaponry. See, for example (after you've finished this item, please), this excellent BBC report from South Korea on the Super aEgis II, an automated turret that serves as a sentry on the contested border with North Korea. It could potentially function fully autonomously, without human supervision, though that feature has been disabled.

The discussion about KRs heated up in late July when some 14,000 researchers signed an open letter, produced by the Future of Life Institute (basically, physicist Max Tegmark and friends, including AI researchers Stuart Russell and Toby Walsh), demanding that world leaders implement a ban on autonomous weapons. That incited debate on the Internets about whether "killer robots" (a term many researchers hate) would necessarily be a bad thing. This is all part of a much broader discussion about where we are in our relationship with machines, and whether we're still the masters of our tools.

Bear with me while I give you some of the tick-tock of this tech talk.

The open letter warns of an A.I. arms race if we don't put the clamps on these autonomous weapons:

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

A writer named Evan Ackerman wrote a response in IEEE Spectrum arguing that a ban is not workable. You can already buy a quadcopter, he suggested, and it would be easy to weaponize:

The problem with this argument is that no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.

Ackerman continues:

Generally speaking, technology itself is not inherently good or bad: it’s what we choose to do with it that’s good or bad, and you can’t just cover your eyes and start screaming “STOP!!!” if you see something sinister on the horizon when there’s so much simultaneous potential for positive progress.

He thinks we should instead program robots to make ethical choices. Someday, he argues, they will actually be better at that than human soldiers trying to make decisions in the heat of battle.

Russell, Tegmark and Walsh then responded to Ackerman:

"...the world community has rather successfully banned biological weapons, space-based nuclear weapons, and blinding laser weapons; and even for arms such as chemical weapons, land mines, and cluster munitions where bans have been breached or not universally ratified, severe stigmatization has limited their use."

"...the key issue is the likely consequences of an arms race — for example, the availability on the black market of mass quantities of low-cost, anti-personnel micro-robots that can be deployed by one person to anonymously kill thousands or millions of people who meet the user’s targeting criteria. Autonomous weapons are potentially weapons of mass destruction."

I asked Stuart Russell by e-mail to explain a bit more about how autonomous weapons might be deployed by bad actors. He wrote back:

I'm not really a weapons designer, but it's only a small extrapolation from the DARPA FLA program (small high-speed quadcopters zooming in and out of buildings) and the CODE program ("hunting in packs like wolves") to imagine dumping truckloads of flying microrobots the size of large insects, each carrying a 1g shaped charge to blow holes in people's heads or a microrifle to shoot their eyes out. They might need some larger ones to blow holes in doors and walls and stop vehicles. They are totally expendable and very cheap. Planners also seem to be thinking about naval and air-to-air combat which would involve much more expensive assets, but the principle is the same — overwhelming numbers, cooperative behaviors, etc.

So we're in a new era here. The obvious analogy is to the development of nuclear weapons. Oppenheimer and Szilard warned of an arms race and lost the argument to Teller, von Neumann and others who wanted to go full speed ahead. The U.S. and the Soviets built massive arsenals and placed each other under the threat of nuclear doomsday for decades. Arms control treaties have made the world safer, though. And scientists and engineers have often recognized that there are no-go zones. When gene-splicing became possible, everyone called a time-out and held a big conference at Asilomar. More recently, scientists called for a moratorium on editing the human germline with the CRISPR technique.

You can't unlearn knowledge — but sometimes you can keep it in a box. A well-sealed one, ideally.


Further Reading:

Autonomous weapons expert Sam Wallace, on kurzweilai.net, disagrees with the idea of a ban.

Russell, Tegmark and Walsh responded to Wallace.