Danielle Citron, a 2019 MacArthur Fellow, is a professor at Boston University School of Law. Geng Ngarmboonanant is a student at Yale Law School.

Early in the Trump presidency, senior officials pursued an “Extreme Vetting Initiative,” an automated system that would scour social media data to predict whether an immigrant would commit crimes. The project drew fire as soon as it became public: Computer scientists said such a predictive system was impossible, and lawyers said it would not only chill privacy and speech but also could serve as a “digital Muslim ban.” The idea was abandoned.

That cautionary tale shows us that public oversight of any expansion of surveillance is crucial, particularly during a national crisis. Last week, Politico reported that presidential adviser Jared Kushner is talking with health technology companies about creating a “national coronavirus surveillance system.” That system would provide a “near real-time view of where patients are seeking treatment and for what, and whether hospitals can accommodate them,” helping the government allocate resources and determine where to reopen the economy. The data collection would purportedly cover 80 percent of the United States. An aide to Kushner pushed back against the Politico article after it was published, calling it “completely false.”

Large-scale health monitoring can be necessary, and it shows tremendous promise when accompanied by vigorous oversight. For example, the Centers for Disease Control and Prevention runs the National Syndromic Surveillance Program, a partnership among federal, state and local health departments that securely tracks patient symptoms in emergency departments, providing early warning of public health threats such as flu outbreaks, vaping-related lung disease and opioid abuse.

But a large-scale system hastily built from the ground up in the throes of a crisis, particularly one run directly out of the White House, warrants serious caution. Health data is among the most sensitive information about individuals. It can carry heavy social stigma (think of HIV/AIDS or mental health diagnoses) and reveal intimate preferences, habits and decisions, including those involving pregnancy. Health status should be used as a method of social control, restricting physical movement, only if the public health payoff is substantial and the new surveillance system is subject to exacting oversight.

As a first step, the administration must be forthright about its plans. Before amassing enormous reservoirs of personal data, the White House should explain why it plans to build its own system rather than improve the existing CDC surveillance program or, alternatively, help states build up regional monitoring systems. But so far, the White House has operated in secret. To the best of our knowledge, it has ignored a Freedom of Information Act (FOIA) request made by the Electronic Privacy Information Center and letters from several senators seeking details.

The public deserves to know what personal data will be collected, used and shared; how government will ensure that it will not be misused for non-pandemic purposes; how long personal data will be kept; and, crucially, whether the privacy invasion is necessary and proportionate to the benefits.

The White House should not be permitted to proceed unless it can provide evidence that the system will in fact work as advertised. The technology should not become a smokescreen for political decisions, such as potentially prioritizing aid for red states at the expense of blue states. Nor should its algorithms undermine public health measures by relying on flawed data that can compound inequities. The White House must show how it will protect Americans’ privacy, because computer scientists have shown that supposedly anonymized data can often be re-identified, revealing personally identifiable information.

More than a decade ago, one of us (Citron) wrote an article warning of “reservoirs of danger”: the vast troves of ultra-sensitive data collected in the Information Age that, in the wrong hands, can lead to serious harm to ordinary people. The disclosure of private information, including health status, preexisting conditions and intimate preferences, can lead to economic harms such as higher life insurance premiums or even lost employment. Shared with immigration officials, the data could enable targeted deportations. Leaked information could result in stigma or abuse; a movement to publicly identify a “Patient Zero” has already resulted in destructive cyber-harassment.

Current law is probably not up to the task of reining in this project. The Privacy Act of 1974, the main law governing the government’s handling of personal data, covers federal agencies but not the Office of the President, and even where it applies, personal data can be used and shared for “routine uses,” an exception broad enough to justify a breathtaking amount of data sharing. The Health Insurance Portability and Accountability Act of 1996, the core health information law, is also riddled with loopholes. It generally does not apply to newer health technology companies. And experience has shown that the government can obtain data held by private companies through coercion or promised benefits.

Public input is crucial right now. Once this system is created, it will be too late. Litigation or FOIA requests after the fact will take months to resolve, and by then, societal norms will have become entrenched.

This is not the time for technology optimism or pessimism. It’s the time for technology realism, with the full understanding that technology’s promise is only as good as those who control it, and that once the pendulum swings in one direction during a crisis, it is difficult to swing it back.
