
The military wants AI to replace human decision-making in battle

The development of a medical triage program raises a question: When lives are at stake, should artificial intelligence be involved?


When a suicide bomber attacked Kabul International Airport in August last year, the death and destruction were overwhelming: The violence left 183 people dead, including 13 U.S. service members.

This kind of mass casualty event can be particularly daunting for field workers. Hundreds of people need care, the hospitals nearby have limited room, and decisions on who gets care first and who can wait need to be made quickly. Often, the answer isn’t clear, and people disagree.

The Defense Advanced Research Projects Agency (DARPA) — the innovation arm of the U.S. military — is aiming to answer these thorny questions by outsourcing the decision-making process to artificial intelligence. Through a new program called In the Moment, it wants to develop technology that would make quick decisions in stressful situations using algorithms and data, arguing that removing human biases may save lives, according to details from the program’s launch this month.

Though the program is in its infancy, it comes as other countries try to update a centuries-old system of medical triage, and as the U.S. military increasingly leans on technology to limit human error in war. But the solution raises red flags among some experts and ethicists who wonder if AI should be involved when lives are at stake.

“AI is great at counting things,” Sally A. Applin, a research fellow and consultant who studies the intersection between people, algorithms and ethics, said in reference to the DARPA program. “But I think it could set a [bad] precedent by which the decision for someone’s life is put in the hands of a machine.”


Founded in 1958 by President Dwight D. Eisenhower, DARPA is among the most influential organizations in technology research, spawning projects that have played a role in numerous innovations, including the Internet, GPS, weather satellites and, more recently, Moderna’s coronavirus vaccine.

But its history with AI has mirrored the field’s ups and downs. In the 1960s, the agency made advances in natural language processing and in getting computers to play games such as chess. During the 1970s and 1980s, progress stalled, largely because of the limits of computing power.

Since the 2000s, as graphics cards have improved, computing power has become cheaper and cloud computing has boomed, the agency has seen a resurgence in using artificial intelligence for military applications. In 2018, it dedicated $2 billion, through a program called AI Next, to incorporate AI in over 60 defense projects, signifying how central the science could be for future fighters.

“DARPA envisions a future in which machines are more than just tools,” the agency said in announcing the AI Next program. “The machines DARPA envisions will function more as colleagues than as tools.”


To that end, DARPA’s In the Moment program will create and evaluate algorithms that aid military decision-makers in two situations: small unit injuries, such as those faced by Special Operations units under fire, and mass casualty events, like the Kabul airport bombing. The agency may later develop algorithms to aid in disaster relief situations such as earthquakes, officials said.

The program, which will take roughly 3.5 years to complete, is soliciting private companies to help meet its goals, a standard part of early-stage DARPA research. Agency officials would not say which companies are interested or how much money will be slated for the program.

Matt Turek, a program manager at DARPA in charge of shepherding the program, said the algorithms’ suggestions would model “highly trusted humans” who have expertise in triage. But they will be able to access information to make shrewd decisions in situations where even seasoned experts would be stumped.

For example, he said, AI could help identify all the resources a nearby hospital has — such as drug availability, blood supply and the availability of medical staff — to aid in decision-making.

“That wouldn’t fit within the brain of a single human decision-maker,” Turek added. “Computer algorithms may find solutions that humans can’t.”
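Turek’s example is, at bottom, an aggregation problem: pull together data feeds no single person could hold in their head and check them against the needs of incoming casualties. As a rough illustration only, with every facility, field and threshold invented rather than drawn from DARPA’s program, a toy version in Python might look like this:

```python
from dataclasses import dataclass

# Hypothetical sketch of aggregating hospital resources for routing
# decisions. All names, fields and thresholds are invented for
# illustration; none of them come from DARPA's In the Moment program.

@dataclass
class Hospital:
    name: str
    open_beds: int
    blood_units: int
    surgeons_on_duty: int

def can_accept(h: Hospital, casualties: int) -> bool:
    # Crude feasibility check: enough beds and blood, at least one surgeon.
    return (h.open_beds >= casualties
            and h.blood_units >= 2 * casualties
            and h.surgeons_on_duty >= 1)

hospitals = [
    Hospital("Facility A", open_beds=4, blood_units=12, surgeons_on_duty=2),
    Hospital("Facility B", open_beds=15, blood_units=11, surgeons_on_duty=1),
]

# Which nearby facilities could absorb five incoming casualties?
print([h.name for h in hospitals if can_accept(h, casualties=5)])  # ['Facility B']
```

A real system would have to run this kind of check continuously, across dozens of facilities and shifting data feeds, which is where the scale argument for software comes in.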

Sohrab Dalal, a colonel and head of the medical branch for NATO’s Supreme Allied Command Transformation, said the triage process, whereby clinicians go to each soldier and assess how urgent their care needs are, is nearly 200 years old and could use refreshing.

In an effort similar to DARPA’s, his team is working with Johns Hopkins University to create a digital triage assistant that can be used by NATO-member countries.

The triage assistant NATO is developing will use NATO injury data sets, casualty scoring systems, predictive modeling, and inputs of a patient’s condition to create a model to decide who should get care first in a situation where resources are limited.
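NATO has not published the model’s internals, but the general shape of a casualty scoring system is easy to illustrate. The sketch below uses hand-coded, invented vital-sign thresholds purely as a stand-in for the data-driven model NATO describes:

```python
# Illustrative only: hand-coded rules standing in for the predictive model
# NATO is building from its injury data. Thresholds and weights are invented.

def triage_score(heart_rate: int, systolic_bp: int, resp_rate: int) -> float:
    score = 0.0
    if heart_rate > 120 or heart_rate < 50:
        score += 2.0  # abnormal pulse
    if systolic_bp < 90:
        score += 3.0  # low blood pressure can signal severe bleeding
    if resp_rate > 30 or resp_rate < 10:
        score += 2.0  # respiratory distress
    return score

# vitals: (heart rate, systolic blood pressure, breaths per minute)
patients = {"casualty 1": (135, 82, 34), "casualty 2": (98, 118, 16)}
by_urgency = sorted(patients, key=lambda p: triage_score(*patients[p]), reverse=True)
print(by_urgency)  # most urgent first: ['casualty 1', 'casualty 2']
```

The difference in the real assistant is that the scoring would be learned from NATO’s injury data sets rather than written by hand.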

“It’s a really good use of artificial intelligence,” Dalal, a trained physician, said. “The bottom line is that it will treat patients better [and] save lives.”


Despite the promise, some ethicists had questions about how DARPA’s program could play out: Would the data sets it relies on cause some soldiers to be prioritized for care over others? In the heat of the moment, would soldiers simply do whatever the algorithm told them to, even if common sense suggested otherwise? And if the algorithm plays a role in someone dying, who is to blame?

Peter Asaro, an AI philosopher at the New School, said military officials will need to decide how much responsibility the algorithm is given in triage decision-making. Leaders, he added, will also need to figure out how ethical situations will be dealt with. For example, he said, if there was a large explosion and civilians were among the people harmed, would they get less priority, even if they are badly hurt?

“That’s a values call,” he said. “That’s something you can tell the machine to prioritize in certain ways, but the machine isn’t gonna figure that out.”

Meanwhile, Applin, an anthropologist focused on AI ethics, said that as the program takes shape, it will be important to watch for whether DARPA’s algorithms perpetuate biased decision-making, as has happened in many other cases, such as when health-care algorithms prioritized White patients over Black patients for care.

“We know there’s bias in AI; we know that programmers can’t foresee every situation; we know that AI is not social; we know AI is not cultural,” she said. “It can’t think about this stuff.”
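In principle, the kind of scan Applin describes is measurable: compare how often a model flags patients from different groups for urgent care when their injuries are similar. A purely hypothetical audit, with fabricated records standing in for real triage data, might look like this:

```python
from collections import defaultdict

# Hypothetical bias audit: among casualties with the same injury severity,
# does the model flag one group as high priority more often than another?
# The records below are fabricated; this is not real triage or DARPA data.

records = [
    # (group, severity, flagged_high_priority)
    ("group A", 3, True), ("group A", 3, True),
    ("group B", 3, True), ("group B", 3, False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, severity, flagged in records:
    if severity == 3:  # compare like-for-like cases only
        counts[group][0] += int(flagged)
        counts[group][1] += 1

for group, (flagged, total) in counts.items():
    print(group, flagged / total)  # large gaps between groups warrant review
```

Checks like this catch disparities only after the fact, which is part of Applin’s point: the harder problems are the situations no one thought to test.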

And in cases where the algorithm makes recommendations that lead to death, it poses a number of problems for the military and a soldier’s loved ones. “Some people want retribution. Some people prefer to know that the person has regret,” she said. “AI has none of that.”