The Trump administration’s interference may be egregious, but the CDC’s vulnerability to such intrusion is rooted in something more fundamental. Since the 1960s, the CDC has guided public health measures by estimating risk, balancing one risk against another and making recommendations to the public on how to minimize risk.
This mode of operation is by now woven into the fabric of public health expertise in the United States and worldwide. Yet in its very nature, risk assessment reflects political and value choices and involves trade-offs. Risk assessments demonstrate what costs — in terms of illness, economics or other societal harms — are acceptable for what perceived benefits. No matter how objective they attempt to be, agencies working in risk assessment are vulnerable to political manipulation because they must make choices about costs and benefits, and such choices are inherently political.
As far back as the 1970s, federal agencies adopted the term “regulatory science” to describe their work. This signaled that these agencies were going to offer apolitical, objective assessments of myriad risks to the public, rooted in the best science possible.
These assessments would determine levels of exposure to danger or “cutoffs” that would trigger regulatory action to protect the public. The hybrid term — “regulatory science” — expressed confidence that the tools and models of risk assessment could be as objective and trustworthy as the findings of basic scientific research. This promise emerged from a decade of revelations. These ranged from the thalidomide tragedy — in which a drug widely used in pregnant women turned out to cause profound birth defects, a calamity narrowly avoided in the United States — to revelations about environmental risks from pesticides, and new movements pushing to protect both consumers and the environment from industry-related harms.
But determining what level of risk was tolerable required inherently political, value-laden decisions that frequently pitted different risk-assessment models against one another.
Consider, for example, when the CDC faced the possibility that a deadly pandemic was afoot in 1976. An Army recruit at Fort Dix died of the flu, and some 200 other soldiers became ill. Laboratory studies revealed that the strain was related to the virus that killed tens of millions of people in 1918.
Yet different groups of experts had dissimilar assessments of the risk at play. The U.S. Public Health Service advised vaccinating only those at high risk, taking the position that the risk of a larger epidemic could be managed and mitigated. The Advisory Committee on Immunization Practices (ACIP) at the CDC, however, advised President Gerald R. Ford to vaccinate the whole population. This group of experts took the position that the uncertainty was too great and that immediate precautionary action should be taken.
The ACIP prevailed, and 45 million people were immunized in 10 weeks. In 1968, the Hong Kong pandemic flu had killed over 100,000 Americans, and Ford, facing reelection, did not want to take the chance that a similar scenario would unfold on his watch. Although political considerations played a role — as they do in all assessments of risk — the Ford administration’s assessment was not an unreasonable interpretation of the evidence at hand.
But vaccinations also come with risks, and soon reports emerged of severe side effects from the 1976 flu vaccination, including a disorder known as Guillain-Barré syndrome that can cause paralysis. Moreover, actual rates of flu transmission were low. The threat of a pandemic petered out.
The New York Times deemed the situation a “fiasco,” blasting the White House and Congress for lacking “sufficient sophistication in medical problems to be able to put biological reality before political expediency.” Public trust in the CDC also declined as a result. This case underscored that erring on the side of caution in responding to a public health risk is not always the right choice and may undermine trust in public health expertise. It also reminds us that even in the absence of the egregious political interference characteristic of the Trump administration, the calculus of assessing and mitigating risk is inescapably political.
What level of precaution to take was also the question in the tug-of-war around the CDC’s school reopening guidance issued over the summer, which downplayed the risk of coronavirus spread and emphasized the harms of keeping schools closed. But the debate was never about whether reopening schools was important. Rather, it was about how this could be accomplished safely. Would it be possible to properly estimate and therefore mitigate the risks involved, or was the uncertainty too great?
According to reporting by the New York Times, the pressure on the CDC was exacerbated not only by political appointees at the Department of Health and Human Services (HHS), but also by Deborah L. Birx, an esteemed infectious-disease specialist and the White House’s coronavirus response coordinator. Birx adopted a different model of risk assessment, supported by the opinion of another expert agency — the Substance Abuse and Mental Health Services Administration (SAMHSA). By the very nature of their expertise, the professionals at SAMHSA were especially sensitive to the downstream mental health harms caused by school closures, such as isolation, neglect and parental unemployment.
So while there was certainly something extraordinary about this incident — the political pressure campaign orchestrated by HHS — there was also something very ordinary about it. Namely, it was a dispute about risk assessment. Such disputes, and their occasional weaponization for political expediency, are inescapable when it comes to regulatory science.
Regulatory science was supposed to depoliticize risk assessment, to present the public and policymakers with an objective, apolitical resolution to such disputes. Yet it became another tool in political struggle. This vulnerability predated the Trump administration, and it will outlast it.
To reduce this vulnerability to politicization, the relationship between policymakers and expert agencies — essentially producers and users of risk-assessment models — could be reorganized. “Protecting” the CDC from political interference is a tempting goal given the current debacle, but while it could isolate the agency from criticism, it also threatens to isolate it from alternative points of view and inputs from other sources of expertise.
Learning from history, a wiser reorganization would create a robust and professional mediator between the producers and users of models. This mediating agency would be charged with integrating input from a broader network of academic, private and nonprofit experts, and with incorporating these views into a set of alternative scenarios to be presented to policymakers. These scenarios would take into account not only the estimates of alternative models, but also the complications posed by uncertainty, ignorance, indeterminacy and the need to secure the public’s trust.