But a new paper out of Stanford University suggests that this narrative might be overblown. The study — published last week in the Proceedings of the National Academy of Sciences — is perhaps the most thorough review of the research on this topic to date. Over three years and with an enormous research team, the authors pored over more than 3,000 papers that examine questions of publication bias, covering 22 scientific disciplines.
Their conclusion? The evidence on widespread publication bias is murky. About 27 percent of the bias effects measured in the reviewed papers can be explained by the fact that meta-analyses — that is, research about research — often include findings from studies with small sample sizes, which tend to overstate effect sizes when they do turn up statistically significant results.
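That small-sample mechanism is easy to see in a quick simulation. The sketch below is not from the Stanford paper — the effect size, sample sizes and study counts are illustrative assumptions — but it shows the general pattern: when only statistically significant results survive to publication, small studies exaggerate a modest true effect far more than large ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.2  # a modest true standardized effect (illustrative)

def significant_effects(n_per_group, n_studies=2000):
    """Simulate two-group studies; return the effect estimates (Cohen's d)
    from the studies that reached p < 0.05 — i.e., the ones most likely
    to be published."""
    kept = []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(TRUE_D, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < 0.05:
            pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
            kept.append((treated.mean() - control.mean()) / pooled_sd)
    return np.array(kept)

small = significant_effects(20)   # small studies: 20 subjects per arm
large = significant_effects(200)  # large studies: 200 subjects per arm

print(f"true effect:                          {TRUE_D}")
print(f"mean 'published' effect, small n:     {small.mean():.2f}")
print(f"mean 'published' effect, large n:     {large.mean():.2f}")
```

With 20 subjects per arm, only estimates several times larger than the true effect clear the significance bar, so the surviving results overstate it badly; with 200 per arm the surviving estimates sit much closer to the truth. A meta-analysis that pools many small significant studies inherits that inflation.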
What’s more, much of the evidence on publication bias is itself biased. Peer-reviewed journals are more likely to publish papers reporting large bias problems than more moderate effects, the Stanford team found. Researchers are more likely to cite these papers as well.
The point is not that publication bias doesn’t exist; it’s that we’re not really sure how much of a problem it is. In fact, we’re probably greatly overestimating the amount of bias in the academic world, thanks to the very research flaws that cause bias in the first place.
Are there specific fields in which bias is particularly egregious? The Stanford researchers don’t go into detail on this question in their paper, but Daniele Fanelli — senior research scientist at Stanford and lead author of the study — said the most well-established bias appears exactly where you might expect it.
“Many of these biases seem to increase moving from physical sciences to social sciences,” Fanelli said.
In other words, there’s probably much more bias in psychology, economics and sociology than there is in chemistry or molecular biology. For people who follow the controversies of academic research regularly, this should be no surprise. The social end of the science spectrum is notorious for publishing questionable research, even in the most well-respected journals.
But there’s reason to be optimistic. A number of innovative proposals have been floated and tested in the research world to try to overcome bias, such as data sharing and “results-free” peer review. And while some ideas might not work in every field, we might at least consider whether more disciplined research practices borrowed from the physical sciences could help the social sciences and economics. The American Enterprise Institute, for example, recently announced that it is pre-registering how it will analyze data on the minimum wage — an approach other research institutions might consider adopting for controversial topics more generally.
“We are on the verge of a revolution,” Fanelli said, adding that new computational technology is making it easier for researchers to see their flaws and that the push for greater transparency in research could be an impetus for solutions. “This will bring about new possibilities for science. … We want to be able to share scientific information in a way that was not possible a few years ago.”
The world of academia gets a lot of flak from the general public — often rightly so. But no one can say that there isn’t a concerted effort underway to thoroughly analyze and address issues of bias.