We at Speaking of Science do our best to deliver you solid, sound science reporting. And that means that we spend a lot of time telling you not to believe what you read about "science." But just in case you haven't been paying attention, comedian John Oliver — host of "Last Week Tonight" — is here to school you.
And if you don't feel like laughing your butt off for the next 20 minutes, here's a rundown:
1. A single study means basically nothing.
You may notice that we use a lot of phrases like "more research is needed to confirm" and "it's hard to know for sure whether the researchers are right about" (and so on and so on). That's because science isn't some ironclad body of facts; it's a method for testing hypotheses and coming to conclusions about them.
When scientists design an experiment, carry it out, and get it peer-reviewed and published in a reputable journal, all we know is that they probably didn't pull their results out of thin air. A lot of factors can influence the outcome of a study. The bigger the claim, the more skeptical you should be until other scientists — ones uninvolved in the first study — repeat the experiment and come back with the same results. Once we hit a critical mass of reproduced results, we might say that science has reached a consensus on something — the existence of human-driven climate change, for example.
This becomes especially dangerous when individual (read: meaningless) studies contradict the scientific consensus and get gobbled up by folks trying to confirm what they already believe to be true — that climate change isn't real or that vaccines cause autism, for instance.
Look for phrases like "adds to a growing body of research" if you want to know that sweet, sweet science is for real.
2. Statistics can lie.
There's this thing called a p-value, which measures the strength of the evidence against the null hypothesis — roughly, how likely you'd be to see results at least as extreme as yours if there were really no effect. In other words, it gauges the statistical significance of your data.
Let's say you have a study testing a connection between eating chocolate and sleeping more than eight hours a night. When you take your group of test subjects and compare their chocolate-eating habits to their sleep habits, you have to crunch some numbers to make sure that champion sleepers are actually more common in the chocolate-scarfing group than they'd be in any random population sample. You also have to "control" for other variables to make sure they aren't the real explanation (maybe kids eat more chocolate, and kids also tend to sleep more), and all that statistical crunching gives you a measure of the significance of your results.
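To make the idea concrete, here's a minimal sketch of how a p-value can be computed for that chocolate-and-sleep comparison, using a simple permutation test. The numbers are invented purely for illustration; this is not from any real study.

```python
import random
import statistics

random.seed(42)

# Hypothetical data (made up for illustration): hours slept per night.
chocolate_eaters = [8.2, 7.9, 8.5, 8.1, 7.6, 8.4, 8.0, 7.8]
non_eaters = [7.4, 7.8, 7.1, 7.6, 7.9, 7.3, 7.5, 7.2]

observed = statistics.mean(chocolate_eaters) - statistics.mean(non_eaters)

# Permutation test: if chocolate made no difference (the null hypothesis),
# the group labels are arbitrary. So shuffle the labels many times and see
# how often a gap at least as large as ours shows up by pure chance.
pooled = chocolate_eaters + non_eaters
n = len(chocolate_eaters)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f} hours, p ~ {p_value:.4f}")
```

A small p-value here just means the gap between the groups would rarely appear by chance — it says nothing about *why* the gap exists, which is where confounders like age come in.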
But scientists can manipulate their sample sizes and analysis to get good p-values when they shouldn't be getting them. That's another reason why it's so important to reproduce research ad nauseam — to catch statistical trickery, intentional or otherwise.
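You can see why that trickery works with a quick simulation. The sketch below (again, invented numbers, and a rough large-sample z-test rather than anything fancy) runs 100 "experiments" where there is no real effect at all — and a handful still come out "significant" at the usual p < 0.05 cutoff, just by chance. Run enough comparisons and something will always look impressive.

```python
import math
import random
import statistics

random.seed(0)

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means
    (large-sample z-test; fine as a sketch, not a rigorous test)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # Tail probability of the standard normal, doubled for a two-sided test.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Run 100 experiments where BOTH groups come from the same distribution,
# i.e., there is genuinely nothing to find.
hits = 0
for _ in range(100):
    group_a = [random.gauss(7.5, 0.6) for _ in range(30)]
    group_b = [random.gauss(7.5, 0.6) for _ in range(30)]
    if two_sample_p(group_a, group_b) < 0.05:
        hits += 1

# At the p < 0.05 threshold, we expect roughly 5 false alarms per 100 tries.
print(f"{hits} of 100 null experiments looked 'significant'")
```

This is the statistical core of "p-hacking": test enough variables, subgroups, or cutoffs and report only the comparisons that cleared the bar.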
3. The system isn't set up to support good science.
All apologies to good scientists doing good science, but the fact of the matter is that scientists have to secure funding for their research and keep themselves employed — and a lot of the time, doing good science isn't the best way to do that. Reproducing someone else's work, while incredibly important, isn't splashy or exciting. And these days, scientists know that making a splash in the media is almost as important as getting studies published in the first place. That means that reproduction falls by the wayside in favor of novel ideas. We love novel ideas, but they're not particularly useful until other scientists replicate them.
4. The media is bad and we should all feel bad.
Once upon a time when I was a wee science-writing babe, I actually wrote an academic thesis on how the media handles science. My big takeaway was that it's like some hellish game of "telephone," where results get more and more garbled as they trickle through the media. I can't explain this any better than PhD Comics can, to be perfectly honest.
Now that I've been covering science for a few years, I have to amend my senior thesis a little bit: I used to think that bad, incorrect science reporting started with an outlet getting it a little wrong and everyone else following suit, building on that initial inaccuracy. But now I can confirm that even when you cross all your t's, dot all your i's and suck all of the wonder out of a scientific result by being really really clear on how little a study actually "proves," someone somewhere will still publish a story saying drinking wine is as good as going to the gym and link to your article as a source.
And the truth is that anyone, myself included, is capable of getting too excited about a particular study, or not really understanding it, or using a headline that sets off a string of bad coverage and misunderstanding from those who didn't read the whole article.
5. So what do we do?
Maybe stop getting your science news from outlets that keep saying "x causes z" — that should be a major red flag, because individual studies don't prove causal claims like that; in fact, a single study doesn't definitively prove much of anything. But a lot of this comes down to common sense: Does something sound kind of crazy? If it does, you probably want to find out what experts outside of the study have to say about it. And if the coverage you're reading or watching doesn't provide that, take your business elsewhere.