The basic idea is to do AI “the American way,” as people used to say, by framing a set of clear ethical rules through public debate. This AI Principles Project was launched in October by the Pentagon’s Defense Innovation Board. The first major public meeting took place on Tuesday at Harvard, where Pentagon officials met with about a dozen AI experts, some of them strong critics of U.S. military actions.
The Harvard roundtable was lively and occasionally sharp, according to one participant. The group debated privacy concerns, the trade-off between an algorithm’s power and its ability to explain its results, methods for establishing human accountability for AI actions, and other legal and moral issues. Similar expert gatherings are planned at Carnegie Mellon University in March and Stanford University in April, after which the board will release draft principles for public comment.
The Pentagon outreach was deliberately aimed at engineers who don’t like the idea of working with the U.S. military. As the innovation board’s statement announcing the ethics dialogue put it, “We are taking care to include not only experts who often work with the [Department of Defense], but also AI skeptics, department critics and leading AI engineers who have never worked with DoD before.”
“If we’re going to be the arsenal of democracy in the 21st century, we have to show that we have ideals and are ready to stand up for them,” said a Pentagon official involved in the program. “It wasn’t going to be enough to say, ‘Hey, we’re the good guys, we’re Americans.’ We needed to be more introspective.”
This bridge-building to the tech community follows a potentially disastrous rupture last year, when Google employees rebelled at a Pentagon AI effort called Project Maven. It was a relatively small, $9 million contract to write algorithms for nonlethal monitoring of surveillance videos to detect threatening movement. Neither the company nor the Pentagon foresaw the controversy that erupted when thousands of Google employees signed a protest petition; the company had to retreat and declined to renew the contract.
Behind the Maven flap lie some fascinating crosscurrents. Senior Google executives had wanted a larger piece of the government’s national security business, which has been dominated by Amazon and Microsoft. But they were secretive with employees about the project. A tight-lipped Pentagon worsened the public-relations disaster. Google employees felt misled, and Pentagon officials were enraged that the tech engineers had scuttled a project aimed at detecting terrorist threats.
Google Chief Executive Sundar Pichai’s withdrawal from Maven was driven, above all, by opposition from some of the top engineers on whom Google’s future rests. A Pentagon official recalls trying to explain to one of these AI gurus that the U.S. Constitution and Bill of Rights would prevent excesses by the United States. The engineer replied that these safeguards meant little to him because he wasn’t a U.S. citizen.
The Google revolt hasn’t yet spread across Silicon Valley, despite efforts by some engineers to organize such a boycott. Top executives at Microsoft and Amazon have resisted employee protest and reaffirmed their willingness to work on classified contracts for the military and the intelligence community, such as the huge JEDI cloud-computing project. (Amazon’s founder and chief executive, Jeffrey P. Bezos, owns The Post.)
The engineers aren’t wrong in demanding that the Pentagon set rules for this new domain of warfare. “There has been a lack of clarity from DoD about how it will use AI,” said Paul Scharre of the Center for a New American Security.
The Pentagon-Silicon Valley dialogue is wary and awkward, and it could collapse — with dire consequences for the United States’ future military strength. The Pentagon official offered one reason it might work: “We’ve created ways for people who hate us to express their views. That’s what makes us different from a closed society.”