washingtonpost.com
It's Not the Answers That Are Biased, It's the Questions
If Two Similar Studies Completely Disagree, Look at How the Funders Framed the Issue

By David Michaels
Special to The Washington Post
Tuesday, July 15, 2008

Wal-Mart and Toys R Us announced this spring that they will stop selling plastic baby bottles, food containers and other products that contain a chemical that can leach into foods and beverages. Even low doses of the chemical (bisphenol A, or BPA) are linked to prostate and mammary-gland changes in laboratory animals that were exposed as fetuses and infants. The big retailers are responding to the fears of parents, and Congress is considering measures to ban the chemical.

But is there enough evidence of harmful health effects on humans? One of the eyebrow-raising statistics about the BPA studies is the stark divergence in results, depending on who funded them. More than 90 percent of the 100-plus government-funded studies performed by independent scientists found health effects from low doses of BPA, while none of the fewer than two dozen chemical-industry-funded studies did.

This striking difference in studies isn't unique to BPA. When a scientist is hired by a firm with a financial interest in the outcome, the likelihood that the resulting study will be favorable to that firm increases dramatically. This close correlation between the results desired by a study's funders and those reported by the researchers is known in the scientific literature as the "funding effect."

Having a financial stake in the outcome changes the way even the most respected scientists approach their research. Scientists make many decisions about the doses, exposure methods and disease definitions they use in their experiments, and each decision affects the result.

For instance, when assessing the risk of exposure to perchlorate, a rocket-fuel ingredient that can affect the thyroid and contaminates many water supplies, scientists on a National Academy of Sciences panel chose perchlorate's effect on thyroid iodine uptake as the most important indicator of its effect on health. On the other hand, scientists working for companies that might have to bear the costs of perchlorate cleanup selected the chemical's effect on one thyroid hormone as the basis of their risk estimation. These scientists estimated a safe level for perchlorate exposure nearly three times higher than that of the NAS scientists.

Within the scientific community, there is little debate about the existence of the funding effect, but the mechanism through which it plays out has been a surprise.

At first, it was widely assumed that the misleading results in manufacturer-sponsored studies of the efficacy and safety of pharmaceutical products came from shoddy research by scientists who manipulated methods and data. Such scientific malpractice does happen, but close examination of the manufacturers' studies showed that their quality was usually at least as good as, and often better than, studies that were not funded by drug companies.

This discovery puzzled the editors of the medical journals, who generally have strong scientific backgrounds.

Richard Smith, the recently retired editor of BMJ (formerly the British Medical Journal), has written that he required "almost a quarter of a century editing . . . to wake up to what was happening." Noting that it would be far too crude, and possibly detectable, for companies to fiddle directly with results, he suggested that it was far more important to ask the "right" question.

What Smith and other researchers, such as Lisa Bero of the University of California at San Francisco, have found is that industry researchers design studies in ways that make the products of their sponsor appear to be superior to those of their competitors. Smith, Bero and others have catalogued these "tricks of the trade," which include testing your drug against a treatment that either does not work or does not work very well; testing your drug against too low or too high a dose of the comparison drug because this will make your drug appear more effective or less toxic; publishing the results of a single trial many times in different forms to make it appear that multiple studies reached the same conclusions; and publishing only those studies, or even parts of studies, that are favorable to your drug, and burying the rest.

The problem is equally apparent in review articles and meta-analyses, in which an author selects a group of papers and synthesizes an overall message or pattern. Decisions about which articles to include in a meta-analysis and how heavily to weight them have an enormous impact on the conclusions. This was apparent in two different conclusions that came out of National Toxicology Program-sponsored reviews of BPA literature. Two independent expert groups made different decisions about including and weighting studies with particular exposure routes, and the groups expressed different levels of concern about the effects on prostate and mammary glands of fetuses and children exposed to low doses of BPA.

Who is surprised to learn that the funding effect is particularly strong in studies that look at the health effects of secondhand smoke and are sponsored by the tobacco industry? The cigarette manufacturers had initiated a special program to fund, publish and promote studies that found secondhand smoke harmless. When researchers at the University of California examined 106 review articles on this topic in the scientific literature, they found more than a third concluded that secondhand smoke was not harmful. Three-quarters of these dissenting reviews had authors who were affiliated with the tobacco industry.

It has become clear to medical editors that the problem is in the funding itself. As long as sponsors of a study have a stake in the conclusions, these conclusions are inevitably suspect, no matter how distinguished the scientist.

The answer is de-linking sponsorship and research. One model is the Health Effects Institute, a research group set up by the Environmental Protection Agency and manufacturers. HEI has an independent governing structure; its first director was Archibald Cox, the Watergate special prosecutor whom President Richard Nixon had fired in the "Saturday Night Massacre" for refusing to curtail the Watergate investigation. HEI conducts studies paid for by corporations, but its researchers are sufficiently insulated from the sponsors that their results are credible.

David Michaels is an epidemiologist who teaches environmental health policy at the George Washington University School of Public Health and is author of "Doubt Is Their Product: How Industry's Assault on Science Threatens Your Health" (Oxford University Press).


© 2008 The Washington Post Company