George Johnson writes in Tuesday’s New York Times about the difficulties of replicating and reproducing scientific findings, including key insights that the medical community continues to rely upon. I don’t think there is anything new in the article for those who have followed this debate (see, for example, this Economist article). Yet the article once again raises the issue of how science corrects its errors.
Johnson refers to an archive created by the journal Nature titled “Challenges in Irreproducible Research,” which contains very good articles on the need for reproducibility and on developing best practices for it. Similarly, the most recent issue of Political Science and Politics has a terrific symposium on the various challenges of replication in political science (unfortunately access is gated, though it is available to all members of the American Political Science Association). The symposium is especially strong in that it develops best practices and standards of transparency for both qualitative and quantitative work.
While I am excited to see so much attention to this issue, serious problems with incentives remain. One of these is the difficulty of publishing corrections, critiques, failures to replicate, findings of irreproducibility, or null findings (all of which involve somewhat distinct issues). Here is what Andrew Gelman wrote a few days ago:
[..] for example, I recently submitted a methodological criticism of a paper to the American Sociological Review, but then when they rejected it (not because they said I was wrong but because they said they only have the space to publish a small fraction of their submissions, and I don’t think corrections of previously published papers in their journal get any sort of priority), I just wrote up the story in Chance, which is fine, Chance is great, but nobody reads Chance. And of course I did not attempt to publish a letter in Psychological Science for each of their flawed papers (that would be a lot of letters!), nor did I bother writing a letter to PNAS regarding that horrible, horrible cubic polynomial fit leading to the implausible claim that a particular sort of air pollution is causing 500 million Chinese people to lose an average of five years of life.
I strongly suspect this is true across the (social) sciences. Journals prefer to publish new findings that are likely to be cited often, which maximizes the “impact factors” generally used to create rankings. Replications, null findings, and reports of failures to reproduce published results are unlikely to draw many citations, although there are notable exceptions. So we see some such publications, but far fewer than you would expect given the likely size of the problem. This gives authors weak incentives to put much effort into reproducing results or writing corrections to published work.
At this point I want to highlight that a new open-access, peer-reviewed journal for which I am one of the editors, Research and Politics (published by Sage), explicitly invites such submissions. Yet although we have received many excellent manuscripts (the first of which will be published in the next few months), we are not yet receiving replication studies, null findings, corrections, and so on. So let me renew this call, and let me stress that we welcome replications of and corrections to both qualitative and quantitative research. Given the increased attention to this issue, it is not unreasonable to hope that the professional rewards for allocating effort to such work will increase, too.