Why were they so late?
The New York City hospital said it delayed its reporting so that it could complete related medical journal articles.
Oof! This connects to another problem we’ve discussed on this blog and elsewhere: the pressure to publish journal articles and to promote the results.
Here’s another example from Piller’s investigation:
In 2009, the nonprofit Hoosier Cancer Research Network terminated a study of Avastin in 18 patients with metastatic breast cancer. The drug didn’t help and caused trial volunteers serious harm — including hypertension, gastrointestinal toxicity, sensory problems, and pain. But the Indianapolis-based network, which runs trials under contract for drug companies, did not report the results as required the following year . . . In 2011, the FDA revoked its approval of Avastin for breast cancer after determining that it was ineffective for that use and posed life-threatening risks. The Hoosier Network researchers finally published the data in a medical journal in 2013. They have yet to post results on ClinicalTrials.gov — nearly six years after the legal deadline.
I have to say that I don’t always get reports done on time myself. Once a report is late and I’m not hassled about it, I might not get around to filing it either. The real issue to me is not paperwork, but rather the system of publication and publicity that revolves around hype. We’ve seen lots of examples recently, such as the Excel error of Reinhart and Rogoff, the himmicanes and hurricanes study, the paper by Tol with almost as many errors as data points, and so on and so forth.
Consider the recent confusion regarding the supposed rise in death rates among middle-aged white Americans. In this case, it was possible to sort things out because the data were all publicly available. After adjusting the data by age, it turned out that there was an increase in death rates among middle-aged white women in the South, but not much happening elsewhere. See here for background.
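The age adjustment mentioned above is standard demography: a crude death rate can rise simply because a cohort gets older, so you reweight the age-specific rates by a fixed standard population before comparing years. Here is a minimal sketch of direct age standardization, with made-up numbers rather than the actual CDC data:

```python
def age_adjusted_rate(deaths, population, std_weights):
    """Direct age standardization: weighted average of age-specific
    death rates, using fixed standard-population weights so that
    shifts in the age mix don't masquerade as mortality trends."""
    assert len(deaths) == len(population) == len(std_weights)
    total_w = sum(std_weights)
    return sum((w / total_w) * (d / p)
               for d, p, w in zip(deaths, population, std_weights))

# Hypothetical counts for two age groups (45-49 and 50-54):
deaths = [400, 600]
population = [100_000, 100_000]
weights = [1, 1]  # equal standard weights for illustration

print(age_adjusted_rate(deaths, population, weights))  # 0.005
```

With equal weights this is just the mean of the two age-specific rates (0.004 and 0.006); the point of the method is that the weights stay fixed across years, so the adjusted rates for different years are directly comparable.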
My point in bringing up this story is that it worked the way science is supposed to. A research paper (by economists Anne Case and Angus Deaton) was published, others looked at the data and made some corrections, and we all moved forward.
And this was all facilitated by the Centers for Disease Control and Prevention, which had the relevant data in easily downloadable form. For a story that has not worked so well, consider the recent battle in which the authors of a controversial paper on chronic fatigue syndrome continue to refuse to release their data.
Journals often reject papers about small studies or trials stopped early for a range of reasons, such as a drug causing worrisome side effects. They publish relatively few negative results, although failed tests can be as important as positive findings for guiding treatment. ClinicalTrials.gov tries to fill these critical information gaps and serve as a timely and comprehensive registry.
We have to move beyond the attitude that scientific truth comes in journal articles. And this relates to our recent discussions of research transparency.
I’m upset that all these research organizations aren’t sharing their data, and I’m glad that Piller has reported this; I hope it will spur the government to enforce its open-data requirements.
If we want to do evidence-based medicine and evidence-based policy — and I think we should — then we all need access to the evidence, for three reasons.
First and most directly, when data are available, outside investigators can follow up and formulate and test their own hypotheses.
Second, open data make it easier for people inside and outside the academic research establishment to find flaws and oversights in published work.
Third, the requirement of openness gives researchers an incentive to do cleaner research, knowing ahead of time that others will be looking over their shoulders.