PostEverything | Perspective

Science educators need to talk about the identity of scientists

Like many others, I fell for Brian Wansink’s bad science. Here’s how we can do better in the future.

By Alan Levinovitz

September 24, 2018 at 6:00 AM

Brian Wansink speaking at a TEDx conference. (Eli Rosenberg)

It’s been painful to watch the fall of Brian Wansink, a Cornell University marketing professor whose work on the psychology of food consumption has had an outsize impact on academics, policymakers, the general public — and me. His widely cited research helped inform the Obama administration’s Smarter Lunchrooms program, and his bestseller “Mindless Eating” laid out simple fixes for “mindlessly losing weight,” like sitting farther from the buffet line, which were backed up with rigorous scientific studies.

But in 2016, outside researchers began to scrutinize those studies and found what appeared to be serious problems. A BuzzFeed investigation uncovered emails between Wansink and junior researchers encouraging them to torture data until it yielded conclusions that would “go virally big time.” Before long, a cadre of critics had discovered inconsistencies that led to the retraction of 13 papers.

Like so many others, I once loved Wansink’s approach: the clever use of bottomless bowls to test subjects’ soup consumption patterns, the admirable focus on improving children’s eating habits through simple branding tricks. In my first book, which he blurbed, I discussed his influential work on “health halos,” which showed how low-fat labels license people to eat more and how organic labels make the same food taste better. These studies aren’t among the 13 that have been retracted by major journals for apparent data tampering, but I no longer trust them. How could I, when it now seems as though the person who produced them cared more about fame than facts?

More important, I no longer trust myself. I take pride in being a steely-eyed skeptic, wary of too-good-to-be-truths. Yet my critical apparatus was hijacked by Wansink’s apparent altruism and his alignment with my own beliefs about the power of branding, especially when it comes to labels such as “all-natural” or “organic.”

Other than his work, there was little evidence to back up the concept of health halos, and I should have presented it as speculative. Instead, I accepted it as established fact and presented it that way to my readers, misleading them just as I was misled. It was a stark reminder of how science really works. In theory, the scientific method is objective. But in reality, science is produced, interpreted and reported by humans — humans who are fallible, biased and self-interested.


In the wake of the Wansink scandal, there have been renewed calls for reforming the methods and culture of scientific inquiry: open data to allow for outside verification of results, preregistration of studies so researchers can’t sift through results to come up with post hoc conclusions. The intense pressure of academia’s “publish or perish” mantra is no longer seen as an engine of discovery, but rather a possible enemy of honest inquiry. These are important changes, on par with requiring researchers to provide financial conflict-of-interest statements.

But there is an equally important change that needs to happen far earlier, when students are learning about scientific inquiry in middle and high school. When I was a child, scientific knowledge was presented to me as though it came from a big book of Important Truths. These truths were either disembodied (how many of us learned who Hans Adolf Krebs was?) or the product of singular genius. Darwin came up with evolution! Einstein came up with relativity! Experiments, properly designed and executed, always reproduced these truths. If I did it right, my experiment would reconfirm something — the boiling point of water, say. An experiment that didn’t discover something or yielded a result inconsistent with what we already knew? That was a bad experiment.

At no point in my science education did I learn about fraudulent, biased or mistaken scientific conclusions. At no point did my science textbook mention how racism led to scientific certainty about “inferior” races. There was nothing about the funding of scientific inquiry, no speculation about the effects of that funding. In class, we discussed the scientific method, yes, but the scientists tasked with executing it and the communities tasked with underwriting their work were rendered invisible.

The great historian of science Steven Shapin underscores the ridiculousness of these omissions in the sardonic subtitle of his book “Never Pure: Historical Studies of Science as if It Was Produced by People with Bodies, Situated in Time, Space, Culture, and Society, and Struggling for Credibility and Authority.” Shapin’s point is that science does not exist in isolation from the people who produce it and that, therefore, any understanding of science that doesn’t also include those people is fundamentally deficient.

It’s as if we were to study aircraft safety exclusively as a function of engineering, never giving a thought to the pilots or to the business executives who might be cutting corners on the production line.


Reforms to the culture of science need to be accompanied by reforms in science education. Textbooks should include case studies of how industry funding can skew results. The standard suite of experiments should include at least a few meant to illustrate confirmation bias. Statistical tricks such as post hoc generation of conclusions from a large data set are not difficult to understand, and they should be laid out clearly as cautionary tales.
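To make that concrete, here is a minimal simulation of the trick described above. It is my own illustration, not anything drawn from Wansink’s studies or the article’s sources, and the group sizes, the number of outcomes and the 0.05 threshold are arbitrary assumptions. The point is simply that if you measure enough unrelated outcomes and report only the ones that cross the significance line, pure noise will reliably hand you “findings.”

```python
# Sketch of post hoc data sifting (p-hacking): run many tests on pure noise
# and keep whatever happens to look "significant."

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_participants = 40   # hypothetical participants per group
n_outcomes = 100      # unrelated outcomes "mined" after the fact

# Both groups are drawn from the same distribution: there is no real effect anywhere.
group_a = rng.normal(size=(n_outcomes, n_participants))
group_b = rng.normal(size=(n_outcomes, n_participants))

# Test every outcome and count how many clear the conventional p < 0.05 bar.
p_values = np.array([stats.ttest_ind(group_a[i], group_b[i]).pvalue
                     for i in range(n_outcomes)])
false_positives = int(np.sum(p_values < 0.05))

print(f"{false_positives} of {n_outcomes} outcomes look 'significant' by chance alone")
# Roughly 5 will, on average: each one a publishable-sounding conclusion built on noise.
```

Nothing in that sketch requires advanced statistics to grasp, which is exactly why it belongs in a high school classroom as a cautionary tale.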

Perhaps many others and I wouldn’t have trusted Wansink’s studies had we received that kind of education — and perhaps Wansink himself would have been less tempted to fudge data in the first place had he known he was addressing a savvier audience.

We have been taught that the attributes of a good scientist are morally neutral: intellect, determination, creativity. But science is a human enterprise, and good science requires moral virtues. A good scientist is not merely smart or hard-working — she is also an honest scientist, a generous scientist, a scientist who encourages colleagues and invites criticism, who does not fear being proved wrong.

These qualities are not tangential to the production of scientific truth; they are essential to it, no less than equations or beakers. STEM education needs to emphasize moral virtues for what they really are: key features of the scientific method.

In the quest for objective science, we have bracketed the more subjective aspects of scientists’ humanity. Hours spent in the lab, mathematical ability, publication record — these are objectively verifiable quantities, unlike “goodwill,” “generosity” and “openness to criticism.”

It would be nice if subjective personal virtues weren’t a part of what it means to be a good scientist. It would make things less messy and keep science neatly isolated from disciplines such as history and philosophy. But reflecting on Wansink’s fall, we should remember that what we want to believe — what’s easiest to believe — isn’t necessarily true.

Insisting on believing it anyway? That’s the opposite of good science, and good scientists and science educators should lead the fight against it.


Alan Levinovitz is associate professor of religious studies at James Madison University.
