Named after a character in Greek mythology, the Perdix is a cheap, lightweight, 3-D-printed drone that is capable of low-altitude reconnaissance and “other missions,” according to a Pentagon fact sheet. The drone was originally designed by engineering students at MIT.
Rather than being given individual orders, the drones are told to conduct broad tasks, such as observing an enemy airfield. The drone swarm then executes the mission, flying and adapting by communicating with one another with little human input.
“Due to the complex nature of combat, Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” said William Roper, the director of the Strategic Capabilities Office.
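The Perdix software is not public, but the kind of leaderless, distributed decision-making Roper describes can be loosely illustrated with a toy consensus model. In this sketch (all names, numbers, and the ring topology are invented for illustration), each “drone” repeatedly averages its estimate of a target’s position with its neighbors’, so the swarm converges on a shared answer without any central controller:

```python
# Illustrative sketch only — a toy model of leaderless swarm consensus,
# not the actual Perdix software. Each "drone" repeatedly averages its
# estimate of a target coordinate with its neighbors', so the swarm
# agrees on a shared value with no single drone in charge.

def consensus_step(estimates, neighbors):
    """One round of gossip: each drone averages with its neighbors."""
    updated = []
    for i, est in enumerate(estimates):
        group = [est] + [estimates[j] for j in neighbors[i]]
        updated.append(sum(group) / len(group))
    return updated

# Four drones start with different guesses of a target coordinate.
estimates = [10.0, 20.0, 30.0, 40.0]
# Ring topology: each drone communicates only with its two neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

for _ in range(50):
    estimates = consensus_step(estimates, neighbors)

print(estimates)  # every drone's estimate converges near the mean, 25.0
```

Because no drone has a privileged role, losing any one of them does not break the process — the key property Roper attributes to the swarm’s “distributed brain.”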
Perdix drones were first dropped in 2014 from an F-16 fighter over Edwards Air Force Base, Calif., according to the fact sheet. After that, they were used during military exercises in Alaska in 2015.
Although the Perdix is billed as a surveillance tool, it takes little imagination to see the half-foot-long devices as small bombs launched to overwhelm a target by sheer volume, or to decimate an enemy airfield or ship without endangering pilots.
In the statement, Roper emphasized that human operators will always “be in the loop” when it comes to the Pentagon’s burgeoning autonomous systems such as the Perdix swarms, meaning that an actual person will have the final say on how the drones and other similar equipment are employed.
Although the Perdix is part of the Pentagon’s “future battle network,” according to Roper, it is unclear exactly how the small devices will be used. Because it still requires some human input, the Perdix is technically semiautonomous. Drones or weapons that can select and engage targets by themselves — something the Perdix might be able to do in the future — are considered fully autonomous.
In a February report, Paul Scharre, a senior fellow at the Center for a New American Security, highlighted the risk of autonomous weapons, stating that they “pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces.”
“This could be because of hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors,” Scharre wrote. “Moreover, as the complexity of the system increases, it becomes increasingly difficult to verify the system’s behavior under all possible conditions; the number of potential interactions within the system and with its environment is simply too large.”
Human Rights Watch has called for “an international treaty preemptively banning the development, production, and use” of autonomous weapons through its Campaign to Stop Killer Robots.