On a sunny day last September, a ship with no humans aboard fired its missiles into the sky, inching the military toward a sought-after goal: building an armada of drone ships that use artificial intelligence instead of sailors to fight at sea.
But in recent weeks, a growing chorus of watchdogs, analysts and former naval officers has cautioned that any grand plans to use artificial intelligence to modernize naval fleets in short order are unrealistic, both because of budgetary constraints and because of the difficulty of building artificial intelligence that can adequately replace sailors.
Moreover, ethicists and experts are raising red flags, warning that deploying drone ships that can patrol the seas and shoot enemies carries geopolitical repercussions.
“It raises the stakes a lot,” said Peter Asaro, an artificial intelligence expert at the New School in New York. “Small countries like North Korea or the Philippines could just crank out a bunch of little [naval] robots and suddenly have a very strong defense mechanism against a big military or Navy like the U.S.’s.”
“See the game-changing, cross-domain, cross-service concepts the Strategic Capabilities Office and @USNavy are rapidly developing: an SM-6 launched from a modular launcher off of USV Ranger. Such innovation drives the future of joint capabilities. #DoDInnovates pic.twitter.com/yCG57lFcNW” — Department of Defense (@DeptofDefense), Sept. 3, 2021
After conducting an 18-month-long study on the program, Shelby S. Oakley, a director at the Government Accountability Office, said there are a number of issues.
According to her report, the ships will cost “billions” more to build, deploy and outfit with software than estimated — and, at present, the military has only vague plans for how they’ll be used. But most notably, the Navy doesn’t have a good grasp of the technological challenges it is going to face in trying to make these vessels autonomous, the report found.
For example, Oakley said, if the Navy envisions autonomous vessels being out at sea for 30 to 60 days on end, artificial intelligence software will have to do many things sailors normally would do to maintain the ship: things like keeping the ship’s hull intact and performing oil changes.
“It’s going to require a huge investment in this digital infrastructure, in the artificial intelligence … to achieve that,” she said. “And that’s where we’ve seen underfunding on the part of the Navy.”
This is not the first time the Navy’s program has gotten scrutiny for being unclear about its autonomous vessel plans. In March 2021, shortly after the Navy released its road map for commissioning autonomous vessels, legislators in Congress raised concerns.
Rep. Elaine Luria (D-Va.), a former naval officer, said she “was really disappointed with the lack of substance” in the Navy’s report, called the Unmanned Campaign Plan. “I thought it was full of buzzwords and platitudes but really short on details,” she said then in a House Armed Services subcommittee hearing.
Despite the criticism, the end goal is ambitious. In December 2020, the Navy’s 30-year shipbuilding plan called for 143 autonomous ships to be in service by 2045, with the first 21 to be ready by 2025. The Navy estimates those initial vessels’ cost at $4.3 billion. For the rest, the Congressional Budget Office estimated an average of $1.2 billion per year, reports show.
At the moment, the Pentagon has six vessel prototypes it plans on testing. They range from medium and large ships that operate above water to large and extra-large submarine-like vessels that can operate deep undersea.
According to Defense Department regulations, humans are required to be involved in deploying weapons from vessels. But experts said it is possible that rule might change if China develops the capability to jam signals that autonomous vessels would use to talk with command centers and other ships in the area.
Tom Shugart, an adjunct senior fellow at the Center for a New American Security, said the Navy is building these ships with one enemy primarily in mind: China.
The Chinese military, he said, has made significant advances in reconnaissance and missile technology: most notably, a long-range ballistic missile known by military analysts as the “Guam Killer,” with a reported range of 1,800 to 2,500 miles. With those advances, Beijing could better target and strike U.S. sailors stationed near Taiwan and the Philippines.
Autonomous ships could help shift the battle calculus in America’s favor, Shugart said. With unmanned vessels, fewer lives would be at risk of being lost to Chinese missiles. Naval attack formations could also be larger, more spread out and monitored by command centers far outside an enemy’s missile range.
“This missile threat has changed things,” Shugart said.
Despite the benefits these unmanned vessels might provide, experts have questions about what the Navy wants them to do: Does it want vessels that can operate autonomously deep at sea for weeks on end? Vessels that can simply fire missiles without risking lives? Or a large number of ships that can be built, maintained and operated at a lower price?
Gregory V. Cox, an analyst at the Institute for Defense Analyses, said the Navy is going to have to choose, and it can’t have it all. “Pick two,” he said. “Fast. Cheap. Good. But you don’t get all three. Which one do you want to give away?”
Meanwhile, experts and ethicists said there are implications for outfitting a naval fleet with ships that could, one day, shoot missiles autonomously.
Asaro, of the New School, weighed the risks against the potential benefits an autonomous vessel could have. On the one hand, he said, it can mitigate the risk of lost life, and most surface vessels the Navy wants to make autonomous can have human intervention when firing missiles.
But some of the prototypes the Navy wants to test, such as the submarine-like vessels, would be required to fire autonomously, since they would operate too deep undersea for humans to be able to interact with them, he said.
“It’s tricky,” Asaro said. “But in the end, it’s an autonomous weapon system, unsupervised making targeting decisions, and firing lethal weapons. So that’s a big concern.”
An earlier version of this story misidentified the academic affiliation of the Institute for Defense Analyses. It is an independent organization. This version has been corrected.