Last Tuesday, a headline from the Associated Press sparked outrage in the ordinarily quiet world of science policy. The Environmental Protection Agency, the story suggested, was considering relaxing guidelines for low-dose ionizing radiation, on the theory that “a bit of radiation may be good for you.” Within hours, the AP had issued a correction. As it turned out, the EPA was not, after all, endorsing hormesis, the theory that small doses of toxic chemicals might help the body, much like sunlight triggers the production of vitamin D.
Instead, the EPA was doing something much scarier: It was holding hearings on the “Transparency Rule,” which would restrict the agency to using studies that make a complete set of their underlying data and models publicly available. The rule is similar to an “Open Science” order issued by the Interior Department last month, and incorporates language from the HONEST Act, a bill that passed in the House in 2017 but later stalled in the Senate. The HONEST Act originally required that scientific studies provide enough data that an independent party could replicate the experiment — which is simply not realistic for large-scale longitudinal studies.
Although these rules cite the need to base regulatory policy on the “best available science,” make no mistake: They aim to strangle access to reputable studies.
The Transparency Rule continues the Trump administration’s pattern of anti-science policies. The White House’s Office of Science and Technology Policy is a ghost town, with most of the major positions, including the director’s post, vacant since January 2017. Agencies and departments across the board, including the State Department and the Agriculture Department, are dropping their science advisers and bleeding scientific staff. It’s getting harder and harder for federal rulemakers to access expertise.
Understanding what’s wrong with “transparency,” at least as defined by these policies, requires a closer look at how scientists work. Let’s say you’re trying to understand the health effects of a one-time, accidental release of a toxic chemical. This incident might be epidemiologists’ only chance to investigate how this particular chemical interacts with both the air and the humans who breathe it, at varying doses, over a period of time. No matter how careful your approach, your study would fall short of the replicability standard. You wouldn’t have baseline health information for the specific people who happened to be in the area. You might not have information on which residents had air filtration systems installed in their homes, or which residents were working outside when the incident took place. Your early results would, by definition, reflect only short-term health outcomes, rather than long-term effects. And you couldn’t replicate the study (with better controls) without endangering the health of thousands of people. In such cases, scientists have to extrapolate from existing, sometimes imperfect, data to protect the public.
Epidemiologists have community standards, including peer review, to evaluate these kinds of studies. A careful, peer-reviewed study of this hypothetical incident might well represent the “best available science” on this particular chemical. Regulators might rely on this study to establish the permissible levels of this chemical in the air we breathe. But now, let’s also say that this study took place 30 years ago. The leading scientists involved are dead, and no one kept their files. The raw data are, effectively, lost. Should scientists at the EPA be blocked from using the study?
Despite what made last week’s headlines, the EPA’s Oct. 3 hearing went beyond radiation. In fact, its lead witness, University of Massachusetts toxicologist Edward Calabrese, barely mentioned his theory of radiation hormesis. Instead, his testimony argued that the EPA should no longer rely on linear no-threshold (LNT) models for any number of hazards, including toxic chemicals and soil pollutants. In toxicology, LNT models assume that the biological effects of a given substance are directly proportional to the dose, with no threshold below which exposure is harmless. Radiation protection standards are based on LNT models; so are basic regulations involving ozone, particulate pollution, and chemical exposure.
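The difference between these assumptions can be made concrete with a toy calculation. The sketch below is illustrative only; the function names, slope, and threshold values are invented for this example and do not come from any regulatory model.

```python
def lnt_excess_risk(dose, slope=0.01):
    """Linear no-threshold: excess risk is proportional to dose,
    with no dose small enough to be considered harmless."""
    return slope * dose

def threshold_excess_risk(dose, threshold=5.0, slope=0.01):
    """Threshold model: exposure below the threshold carries
    no excess risk; above it, risk grows linearly."""
    if dose <= threshold:
        return 0.0
    return slope * (dose - threshold)

# Under LNT, even a dose of 1 unit carries some excess risk;
# under the threshold model, that same dose is treated as safe.
for dose in (0, 1, 5, 10, 50):
    print(dose, lnt_excess_risk(dose), threshold_excess_risk(dose))
```

The regulatory stakes follow directly from the math: an LNT model implies that lowering exposure always lowers risk, so limits are justified at any dose, while a threshold model implies there is a dose below which regulation buys nothing.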
The original studies asserting an LNT model for low-dose ionizing radiation were conducted in the 1950s. Like our hypothetical epidemiologist investigating a toxic chemical release, the geneticists who tried to understand the biological effects of atomic radiation were working with imperfect data, much of which is no longer available. The concept of a “comprehensive data management policy” simply did not exist in 1955. These particular studies were primarily based on survivors of the atomic bombings of Hiroshima and Nagasaki. The scientists also extrapolated from high-dose exposure data in fruit flies and mice and from unethical high-dose experiments conducted on humans.
These studies are imperfect, but focusing on their limitations misses the broader scandal. These studies took place during the heyday of atmospheric nuclear weapons testing, an era when both the United States and the Soviet Union were pumping the atmosphere full of radionuclides. Some of the areas near the testing zones received so much radiation that they are still uninhabitable today. The tests coated the entire planet with a scrim of radiation. The Atomic Energy Commission, the agency in charge of the United States’ nuclear weapons program, didn’t even attempt to investigate the potential health effects of this constant, low-dose exposure to ionizing radiation on the world’s population. Studies of low-dose radiation were expensive, inconvenient, and politically risky, potentially jeopardizing the weapons testing program and therefore the United States’ ability to fight the Soviet Union. From the government’s perspective, it was better not to know.
This week, a sensational headline distracted us from a broader crisis. Without government support for research on environmental hazards, the public’s health is left to either the whims of industry researchers, who have a strong incentive to play down the dangers of their products, or to public advocacy groups, which are too easily smeared with charges of anti-industry bias. The “transparency” movement supposedly resolves this crisis of authority by giving the public access to the underlying data on which science is based, but it ignores the power dynamics that determine which research questions get asked, and why and how they’re answered.
In the past, Americans looked to their federal science agencies and science advisers to resolve these sorts of disputes. But a few weeks ago, the EPA announced that it, too, would be eliminating its Office of the Science Adviser. With the science offices empty, who will decide?
There is one bright spot in all of this: On Sept. 28, bipartisan legislation authorized the Energy Department to restart its low-dose radiation research program. But what about the other pollutants that the EPA supposedly regulates? Who will produce the kinds of science deemed acceptable under the “transparency” rule?
“Transparency” has become another way to cultivate institutional ignorance. Americans deserve better from the agencies that are supposed to protect them. In the case of environmental hazards, what you don’t know can hurt you.