A live demonstration using artificial intelligence and facial recognition is seen at the Horizon Robotics exhibit at the Las Vegas Convention Center in January. (David McNew/AFP/Getty Images)

TECHNOLOGY COMPANIES have been taking flak lately for collaborating with the U.S. military — but are they helping the Chinese government expand its surveillance state at the same time? The revelation this month that Microsoft researchers collaborated with academics at a Chinese military-run university sparked outrage from China hawks. Careful consideration would be a more prudent response.

The Financial Times reported April 10 that specialists at Microsoft Research Asia published three papers over the past year with co-authors affiliated with China’s National University of Defense Technology, controlled by the country’s Central Military Commission. Critics say the research could aid China in repressing its citizens, not to mention in throwing its Uighur minority into reeducation camps. Microsoft counters that the projects aim to solve artificial intelligence conundrums that academics around the world are working on together, and that the technologies have no closer relation to surveillance than Wi-Fi or a Windows operating system.

The truth may be somewhere in between. Artificial intelligence research has always been characterized by global collaboration, and the default for academics and companies alike is to make their findings and code available to the public so that others may build on what they have discovered. Because machine-learning technologies are usually dual-use, the same discoveries that could, say, help doctors detect skin cancer could also allow a repressive regime to track its civilians. Distinguishing a technology’s military applications from its civilian ones is always tricky in the AI space. It is trickier still in China, where the line between private and public is institutionally blurred.

The Trump administration’s plan to restrict technological exports to China will almost certainly mean keeping some sensitive products and services away from that country, or at least imposing licensing rules. But it could also mean barring certain types of research. The Microsoft case is interesting because it falls in exactly the gray area officials will have to confront: The United States benefits immensely from the open exchange of ideas in AI — including access to top talent found in the most prestigious Chinese universities. Stopping up that pipeline would be a mistake. But China is also an egregious human rights offender, and using AI research for military gains is at the core of its stated strategy.

One answer is to impose bright-line rules, such as a ban on U.S. companies collaborating with any Chinese entity directly affiliated with the military. NUDT, the university Microsoft is under fire for collaborating with, has a history of valuable international cooperation — but its subsidiary is also building two organizations devoted explicitly to helping the People’s Liberation Army harness tech. Another, more flexible response is to mandate that companies closely review any Chinese collaborations for human rights concerns, assessing the collaborator’s relationship to the government as well as how directly the research could be related to a nefarious end.

The current model of near-limitless cooperation with Chinese firms and researchers may need rethinking. Still, officials in the United States eager to fire at companies for foundational work on groundbreaking technologies should take care to avoid shooting themselves in the foot.