Scientific researchers often fret about “publication bias” — the fact that positive results are more likely to get published than negative ones. It’s exciting to report that a dietary supplement might sharpen your memory. It’s boring to hear that a lifestyle change will do... absolutely nothing.
Last month, I learned about a publication that has been quickly gaining popularity: the Journal of Negative Results in BioMedicine (JNRBM). Published, presumably, by a gang of dour curmudgeons who hate everything, JNRBM openly welcomes the data that other journals won’t touch because it doesn’t fit the unspoken rule that all articles must end on a cheery note of promise. (“This could lead to new therapies!” boast most journal articles, relying on the word “could” to keep their platitudes accurate and the exclamation point to boost excitement, stand for “factorial,” or make a clicking sound, depending on your field.)
It’s a neat concept, though, admittedly, the JNRBM doesn’t always make for thrilling reading. (Here’s a sample article title: “The female menstrual cycle does not influence testosterone concentrations in male partners.”) Plus, there’s always the worry that too many journals of negative results could bias researchers in the other direction, toward inconclusive experiments. As Ruben puts it: “You can get published even when the experiment fails — it’s the easiest way to pad your CV since the invention of 1.25-inch margins.”
Still, those are minor concerns. Correcting publication bias is a difficult task. And a curmudgeonly journal devoted to highlighting hours of laboratory toil that ends up concluding... nothing... is a perfectly good place to start.
(Link via John Sides.)