Michael Chertoff is executive chairman of the Chertoff Group, chair of Freedom House and a former secretary of homeland security. N. MacDonnell Ulsch is the founder of Ulsch Cyber Advisory, guest lecturer at the U.S. Military Academy and a Boston College research fellow.

The Chinese Communist Party’s persecution of the Uyghur people will go down in history as one of the worst human rights tragedies of our time — not just for the abject horror of targeting a population of 11 million for genocide, but also for the advanced technologies that enabled it.

Like most Chinese citizens, the Uyghurs have long been under constant high-tech surveillance that tracks, analyzes and records their every move and scours their personal communications for evidence of dissent. Compounding this culture of surveillance is the evolution of artificial intelligence from a novelty designed to win games of chess against humans into a science now capable of facial recognition and individual profiling. The Uyghurs have lived in China since the 9th century, yet their persecution has been driven by 21st-century technology.

Beijing has vowed to lead the world in AI, and its documented use in the identification and detention of Uyghurs shows that the regime is getting there quickly. The implications of this campaign are dire. A new study from the Newlines Institute for Strategy and Policy offers “clear and convincing” evidence that the repression of Uyghurs goes beyond detention and political indoctrination to ethnic cleansing, not only through death in “internment camps” but also by means of forced abortions and mass sterilization.

The most alarming known application of AI in the Uyghurs’ home region of Xinjiang is so-called predictive policing, a disturbing marriage of dogmatic ideology, advanced technology and utter disregard for due process and the rule of law. Predictive policing is not a purely Chinese phenomenon, but an increasingly global one. At its heart is a belief that AI has the potential to make our cities and communities safer by identifying social trends that enable early intervention by law enforcement. But that is not how predictive policing works in practice, and especially not in China.

The ministries of Public Security and State Security — the Chinese government’s main law enforcement and intelligence organs, respectively — work hand in hand with state-owned enterprises specializing in surveillance technology, such as the defense manufacturer China Electronics Technology Group Corporation (CETC). As early as 2016, there were reports that the Chinese Communist Party had directed CETC to develop software that could aggregate and analyze data on individuals’ jobs, hobbies, consumption habits and other social behaviors to predict terrorist acts before they occur, a concept best known from dystopian science fiction.

While CETC’s targeting and analysis systems have been invaluable to the government’s efforts to monitor and detain Uyghurs, Chinese president and Communist Party leader Xi Jinping is evidently also determined to tap into the cutting-edge AI research and development done in other countries such as the United States, Britain, Norway, France and India. In 2020, a Chinese investment firm acquired an equity stake in Jina AI, a German start-up that uses deep learning to conduct extensive, highly scalable audio, text and video searches. In 2016, China-based Ant Group acquired a U.S. biometric security company that uses images of the eye to authenticate mobile devices, although recent reports indicate that Ant might soon divest that acquisition in light of increasing U.S.-Chinese tensions.

These companies, and others already working with the Chinese government, must be held accountable for contributing to the Xi regime’s ongoing human rights violations. The United States and its allies must respond to companies that enable this genocide by blocking exports of AI-enabling technology or by imposing sanctions.

A failure by the United States and its allies to act could allow the Chinese party-state to continue improving its repressive AI-based technology, persecuting religious and ethnic minorities, and exporting homegrown methods of repression even more aggressively than it does now. Such a scenario can and must be avoided.

One response is for the United States to organize a coordinated effort to restrain the Chinese government’s ability to further develop AI for its predictive policing program — for example, by bolstering protections against intellectual property theft in this area, enacting punitive sanctions to discourage private technology companies from collaborating with Beijing, and publicly and forcefully decrying the complicity of such companies in the human rights catastrophe in Xinjiang.

To be successful, such an effort would need to win bipartisan support in Washington, secure cooperation from democratic partners around the world, and persuade the private sector, through laws and regulations, to act in its own long-term interests. Action on this scale is necessary and urgent to curb the Xi regime’s worst authoritarian instincts and minimize the human cost of its oppressive rule.