
A year ago last week, researchers from Drexel University released a study about the benefits of “sexting” in relationships, which included a figure that suggested that the vast majority of adults — around 82 percent — had sexted at some point in the past year. As expected, the surprising statistic was widely covered online, including by CNN, the Los Angeles Times, the Chicago Tribune, Slate, the Huffington Post and here at The Post.

What was rarely mentioned in these media articles, though, was that the research had not been published in any academic journal. Instead, the data was compiled through an Internet survey for a presentation at the American Psychological Association’s annual convention. Sure, the results were interesting, but they simply aren’t generalizable to the public at large.

Unfortunately, examples like this are legion in the world of science journalism. As a result, the scientific community has lately been making an effort to stem the stream of misleading articles — going so far as to redesign the way academic journals review and publish studies.

Part of the problem is that there are a lot of perverse incentives for people to distort scientific studies. Science and health writers are constantly in need of new, sexy studies (preferably ones that somehow mention “sex” in the headline). Meanwhile, scholars and academic journals face pressure to produce work that gets attention from media outlets — doing so can elevate the stature of their research, which in turn helps secure funding. At the same time, researchers have become very good at playing with data — such as shifting the length of their experiments or picking and choosing which variables to control for — in order to come out with the results they want. (FiveThirtyEight has a great interactive tool that lets you play with different economic variables to show that the economy has statistically done better under either Republicans or Democrats, depending on which choices you make.)
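To see how easy that kind of data-dredging is, here is a minimal, hypothetical sketch in Python (it is not the FiveThirtyEight tool, and every number and variable name in it is made up): it generates pure noise, runs twenty tests, and more often than not at least one of them looks “significant” if you report it on its own.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_tries = 200, 20

# An outcome with no real relationship to anything we measure.
outcome = rng.normal(size=n_people)
# Twenty candidate "explanations" that are also pure noise.
candidates = rng.normal(size=(n_tries, n_people))

# Test every candidate, then keep only the most "significant" one.
p_values = [stats.pearsonr(candidates[i], outcome)[1] for i in range(n_tries)]
best = int(np.argmin(p_values))

print(f"Best p-value out of {n_tries} tries: {p_values[best]:.3f}")
# Report just that one test and it can look like a finding;
# report all twenty and the "effect" disappears.
```

The point is not the particular numbers but the selection step: the cherry-picking, not any individual test, is what manufactures the result.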

In between, media agents for research institutions have become adept at turning complicated scientific jargon into compelling press releases — usually at the expense of accuracy. Reporters then crop those releases down even further, stretching, exaggerating and torturing academic papers until the original meaning of the study has been completely lost.

Without a doubt, the rising demand for more studies has taken a toll on science’s credibility. Over the past decade, researchers have been debating ways to free their work of so-called “publication bias,” including “preregistration,” or “results-free” peer review. Under this model, scientists submit their work to academic journals and peer reviewers without the results included, so reviewers can judge only the methodology and the questions posed at the outset of each study. Theoretically, journals would free themselves of the tendency to publish only papers with exciting findings.

Problem solved, right? Well, it’s a good idea, but it’s not perfect. The academic journal Comparative Political Studies recently pulled together a special issue made up entirely of these “results-free” submissions. In an essay reviewing the resulting papers, CPS editors highlighted some major pitfalls. First of all, studies don’t always go as planned. When the methodology is peer-reviewed up front, scientists risk locking themselves into a rigid experimental design, making it difficult to carry out the study as promised if they have to adapt to unexpected variables.

Second, the results-free model seems to favor some study designs over others — quantitative over qualitative ones, for instance. There’s also the legitimate concern about what happens when a study turns up a null result — that is, when it finds no evidence for the effect it set out to test. Such a paper could end up being extraordinarily boring or failing to answer the essential question at issue. Imagine a headline like: “We don’t know which gene puts you at a greater risk of depression, but we’re pretty sure it’s not the gene we thought it was.”

That’s not to say the “results-free” model is worthless — it does have potential. In fact, the editors lauded the model for incentivizing researchers to focus on theory and research design. The problem is that it simply doesn’t solve all the problems facing scientific publication.

What’s more, it shouldn’t be up to scientists to fix them.

As the CPS essay shows, scientists can reform themselves only so far; a lot of the blame must be put on reporters and the general public. The main problem with scientific studies is not how they are conducted; it’s how they’re consumed. The general public and members of the media alike tend to treat studies as if they were infallible. When a newspaper or a politician cites a newly published scholarly work, we rarely hear anyone challenge it.

In all honesty, the best way to challenge scientific findings is simply to find the time and read the original study. Evaluate the methodology for yourself. Are there legitimate limitations to the research? Does the sample size seem large enough? If at any point the answers to these questions seem way over your head and the long gobbledygook of equations looks like another language, try Googling it. Check out other articles on the topic, or simply start with the basics.

The unfortunate reality is that some scholarly research cannot be simplified without giving up essential nuance. The general public can’t blame science for being too hard — it can only blame reporters for not having the intellectual rigor or, more likely, the time to work through the difficult questions.