The Defense Advanced Research Projects Agency, part of the Defense Department, said it will fund dozens of new research efforts as part of a “Third Wave” campaign aimed at developing machines that can learn and adapt to changing environments.
DARPA director Steven Walker announced the effort Friday to an audience from American academia, private industry and the military at a symposium outside Washington, saying the agency wants to explore “how machines can acquire human-like communication and reasoning capabilities.”
“This is a massive deal. It’s the first indication that the United States is addressing advanced AI technology with the scale and funding and seriousness that the issue demands,” said Gregory C. Allen, an adjunct fellow specializing in AI and robotics for the Center for a New American Security, a Washington think tank. “We’ve seen China willing to devote billions to this issue, and this is the first time the U.S. has done the same.”
But DARPA’s expansion comes at a time of tension between government agencies and the tech giants that employ some of the world’s most in-demand AI talent. In June, Google announced it would not renew its Defense Department contract to help develop AI that could analyze drone footage, known as Project Maven, following a worker uprising against what some inside the company called the “business of war.”
The new DARPA money, some AI researchers said, appeared to convey a message: If Google doesn’t want to help the military develop AI, someone else will.
“There are hundreds if not thousands of schools and companies that bid aggressively on the programs,” Peter Highnam, DARPA’s deputy director, said in a June interview. “They see incredibly interesting, challenging problems. They see data sets they would never have access to. And they see ways to … build the labs to do what they're there for, which is hard science and engineering.”
The agency said the new money would fund projects on top of the more than 20 active programs exploring cutting-edge applications of AI, including cybersecurity; the detection of AI-created fake audio or video; and “human-computer symbiosis” programs targeting the interaction between people and machines.
Some proposals would use AI to tackle logistical challenges, such as vetting people for security clearances or reducing the data or power needs of military machines. But others are more conceptual, including programs devoted to “explainable AI,” a growing movement in the field devoted to designing software that can spell out how it came to a conclusion and justify its response.
AI is a broad term for the sophisticated software that forms the backbone of technologies such as facial-recognition systems, virtual assistants and self-driving cars.
One of Silicon Valley’s most competitive arenas, AI is also increasingly prominent in Washington policymaking: The White House said in July that American leadership in AI was the federal government’s second-highest budget priority for research and development, above American manufacturing, space exploration and medical innovation.
But military officials say AI could also revolutionize espionage, national security and the battlefield. In a June letter announcing the launch of the Joint Artificial Intelligence Center, a Pentagon hub for overseeing AI research across the military, Defense Department leaders said the technology “will change society and, ultimately, the character of war.”
The new projects would mark a significant jump in federal AI spending: The government spent more than $2 billion on AI research and development during the 2017 fiscal year, according to White House estimates of unclassified programs, not including key Pentagon and intelligence budgets. The DARPA allocation would come on top of that ongoing spending.
The funding comes at a time of increasing U.S. caution over China’s national AI strategy. The Chinese government has invested heavily in multi-year technological campaigns aimed at supercharging a domestic AI market the country has projected could be worth $150 billion by 2020. Chinese tech giants such as Alibaba and Baidu work closely with the government on AI that could have applications for autonomous vehicles, health care and national security.
China has unveiled detailed guidelines and regulations on AI — some of which, U.S. researchers contend, echo a technological roadmap rolled out during the Obama administration — and used AI such as facial recognition to empower national surveillance and expand government control. Quoting China’s minister of science and technology, Chinese state media reported in May that the country wanted to lead the world in AI by 2030.
Founded to counter Soviet research during the Cold War, DARPA is credited with helping develop the modern Internet and the building blocks of commercial AI, including some of the first self-driving cars and early versions of Siri, the virtual assistant in Apple iPhones.
The agency sponsors research challenges and solicits proposals from researchers whose work could produce transformational technological leaps. It has helped fund the work of a vast range of scientists and engineers across academia and private industry — including Google’s founders, Sergey Brin and Larry Page, who both benefited from DARPA funding during their graduate work.
But workers at Google and other tech giants have in recent months pushed back forcefully against collaborating with the government, military or law enforcement. Employees at Microsoft and Amazon, both of which are developing facial-recognition software, have campaigned against the companies’ work with federal immigration officials and local police forces.
DARPA’s funding solicitations have traditionally promised AI researchers a chance to tackle futuristic problems away from the commercial timelines and financial imperatives of private companies. And while researchers say the proposals have long appealed to a sense of public service and patriotism, they also increasingly offer AI experts the ability to work on issues of ethics, safety and privacy — the kinds of topics in vogue among Silicon Valley’s top AI minds.
Still, “this isn't a blue-sky place,” Highnam said. “Everything we do, even the fundamental work, we know why we're doing it. We work for defense.”