Tesla chief executive Elon Musk has said that artificial intelligence poses a greater risk to the world than North Korea, offering humanity a stark warning about the perilous rise of autonomous machines.
Now the tech billionaire has joined more than 100 robotics and artificial intelligence experts calling on the United Nations to ban one of the deadliest forms of such machines: autonomous weapons.
“Lethal autonomous weapons threaten to become the third revolution in warfare,” Musk and 115 other experts, including Mustafa Suleyman, co-founder of Alphabet’s artificial intelligence lab DeepMind, warned in an open letter released Monday. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend.”
According to the letter, “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The letter — which included signatories from dozens of organizations in nearly 30 countries, including China, Israel, Russia, Britain, South Korea and France — is addressed to the U.N. Convention on Certain Conventional Weapons, whose purpose is restricting weapons “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately,” according to the U.N. Office for Disarmament Affairs. It was released at an artificial intelligence conference in Melbourne, Australia, ahead of formal U.N. discussions on autonomous weapons. Signatories implored U.N. leaders to work hard to prevent an autonomous weapons “arms race” and “avoid the destabilizing effects” of the emerging technology.
In a report released this summer, Izumi Nakamitsu, the head of the disarmament affairs office, said that technology is advancing rapidly but that regulation has not kept pace. She pointed out that some of the world’s military hot spots already have intelligent machines in place, such as “guard robots” in the demilitarized zone between South and North Korea.
For example, the South Korean military is using a surveillance tool called the SGR-A1, which can detect, track and fire upon intruders. The robot was deployed to reduce the strain on the thousands of human guards who man the heavily fortified, 160-mile border. While it does not yet operate autonomously, it has the capability to do so, according to Nakamitsu.
“The system can be installed not only on national borders, but also in critical locations, such as airports, power plants, oil storage bases and military bases,” says a description in a video released by Samsung, which makes the SGR-A1.
Samsung didn’t immediately respond to a request for comment.
“There are currently no multilateral standards or regulations covering military AI applications,” Nakamitsu wrote. “Without wanting to sound alarmist, there is a very real danger that without prompt action, technological innovation will outpace civilian oversight in this space.”
According to Human Rights Watch, autonomous weapons systems are being developed in many of the nations represented in the letter — “particularly the United States, China, Israel, South Korea, Russia and the United Kingdom.” The concern, the organization says, is that people will become less involved in the process of selecting and firing on targets as machines lacking human judgment begin to play a critical role in warfare. Autonomous weapons “cross a moral threshold,” HRW says.
“The humanitarian and security risks would outweigh any possible military benefit,” HRW argues. “Critics dismissing these concerns depend on speculative arguments about the future of technology and the false presumption that technical advances can address the many dangers posed by these future weapons.”
In recent years, Musk’s warnings about the risks posed by AI have grown increasingly strident — drawing pushback in July from Facebook chief executive Mark Zuckerberg, who called Musk’s dark predictions “pretty irresponsible.” Responding to Zuckerberg, Musk said his fellow billionaire’s understanding of the threat posed by artificial intelligence “is limited.”
Last month, Musk told a group of governors that they need to start regulating artificial intelligence, which he called a “fundamental risk to the existence of human civilization.” When pressed for concrete guidance, Musk said the government must get a better understanding of AI before it’s too late.
“Once there is awareness, people will be extremely afraid, as they should be,” Musk said. “AI is a fundamental risk to the future of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to society as a whole.”