A commission that advises Congress on competition in artificial intelligence is considering recommending the U.S. government join other democratic nations to pool data to counter China’s advantage, according to Robert O. Work, a former deputy defense secretary and a leader of the U.S. National Security Commission on AI.

“We’ve met so far with the United Kingdom, the European Union, Japan, Canada and the Australians,” said Work, ahead of the commission’s interim report to Congress Tuesday. “A lot of people say China has an advantage, because it has so much more data. But by aggregating all of the democratic nations and working together, we feel that we can offset any problem in that regard.”

He said the commission intends to talk with other countries as well.

Data is used to train powerful AI systems to recognize words, objects or sounds, much as an adult teaches a child to recognize things by naming them. Massive amounts of annotated data are required to train AI systems, and the Chinese government collects more data than any other country.

The United States and China are strategic competitors in many arenas, but none is more critical in shaping the future of the world than the competition for dominance in AI. AI will probably determine which country wins in the economic realm and, in turn, in the national security realm.

In 2017, China announced its goal to become the world leader in AI by 2030. The United States responded by creating the commission to review America’s competitive position and to advise Congress on what steps are needed to maintain U.S. leadership. Former Google chief executive Eric Schmidt and Work were chosen from among 15 appointed commissioners to lead the group. The focus of the commission is on AI for national security in both the public and private sectors. But AI-enhanced economic superiority is a national security concern, Work said.

Schmidt and Work spoke about the challenges that the United States faces in winning support from a skeptical private sector, and in maintaining engagement with China while ensuring it doesn’t work to the United States’ detriment.

“There’s somebody on the horizon who is different in values from us who is quite capable,” Schmidt said. “We should do whatever it takes to make sure that the U.S. wins in this space.”

Since the commission was formed earlier this year, it has been gathering facts and compiling an overview of U.S. government activity on AI. The second phase, which will conclude a little more than a year from now, will produce recommendations.

The interim report lays out seven consensus principles and five lines of effort that will guide the commission’s next phase. The report gives 27 initial judgments on areas the commission believes require more attention or action, such as pooling data from democratic nations.

On another hot-button topic, the report said lethal autonomous weapons are an important area for study but did not make a recommendation for action. The commission plans to meet with the Campaign to Stop Killer Robots, the International Committee of the Red Cross and other civil society groups to understand their objections to these types of weapons. China is developing such weapons.

While artificial intelligence advances ricochet around the world at the speed of the Internet, Work noted that the Defense Department worries about China integrating emerging technologies in a way that would give it a battlefield advantage.

The challenge, Schmidt and Work noted, is drawing on the expertise of Chinese nationals in U.S. research while protecting the national interest. They said they had heard clearly from the U.S. research community that Chinese nationals are important to research at American institutes and universities.

“We are not taking a position on decoupling versus entanglement,” Schmidt said. “At the end of the day, there’s a balance of engagement, disengagement, access to information, foreign students, so forth and so on, and we’ll come up with what we think is the best way through that path.”

The men addressed the wariness that some companies express toward working on AI-powered weapons programs, following Google’s 2018 pullout from Project Maven, a Pentagon effort to use computer vision to identify combatants in drone videos.

“The decision by Google to remove itself from Project Maven was viewed by some people as a canary in the coal mine, and people were worried that it would cause a broader stampede of private-sector innovation away from the government when it comes to AI,” Work said. “Thankfully that never happened.”

He said the Defense Department has since spent a lot of time explaining to the private sector how it intends to use AI and how the department will ensure its ethical use.

Craig S. Smith is a former correspondent for the New York Times. He is the host of the podcast Eye on AI.