
The tricky incentives of social science impact

What happens when the mania for having impact warps the incentives of social scientists?

A man walks past a mural in favor of same-sex marriage in Dublin on Thursday. Ireland goes to the polls this week to vote on whether same-sex marriage should be legal, in a referendum in which the Yes side has based its strategy on a now-discredited political science paper. (Paul Faith/AFP/Getty Images)

I opine at The Washington Post, which means that, as a general rule, I’m trying to affect the public sphere. In an academic world where it’s all the rage now to talk about “impact,” this is thought to be a good thing. Scholars, grant-givers and pundits are constantly bemoaning the gap between the academy and everyone else, be it policymakers or the public. And they have urged social scientists, repeatedly, to bridge that gap and make their voices heard.

But I’m beginning to wonder if scholars need to start having another conversation: What happens if we have too much impact?

If academics try too hard to demonstrate impact in their research, the incentives can get skewed. The social world is a ridiculously messy and complex place, but generating results that say “it’s complex” or “it’s complicated” or “it really depends” puts most audiences to sleep. The way to make policymakers, the public and even fellow academics sit up and take notice of research is to produce findings that are counterintuitive and significant. Social scientists dream of getting this kind of result. The problem comes when the dream causes them to fudge the findings.

Which brings us to the current Michael LaCour scandal over an article he co-authored with Donald Green in the pages of Science, which showed a massive effect on public attitudes about marriage equality when individuals interacted with gay canvassers (see the Monkey Cage’s Andrew Gelman and Will Moore for more). Long (and still developing) story short, it seems extremely likely that LaCour faked the data, hoodwinking his co-author, his dissertation adviser and pretty much everyone else in the process — except the grad students who tried to replicate his findings.

[Read: Co-author disavows highly publicized study on public opinion and same-sex marriage]

The original Science article generated a lot of press and had genuine impact — as Kieran Healy notes, it was the template for the ongoing Yes campaign in Ireland’s referendum on legalizing same-sex marriage.

This scandal highlights how the incentive structure within the academy can lead to problems like this one. First, as Healy notes:

As a social scientist I worry most about the quality of the frauds we don’t spot. Science is often bitterly competitive but it depends on honesty. It is not set up to weed out liars. Imagine what research, or talks, or conferences would be like if you had to routinely question not simply the quality or competence but the actual honesty of speakers. The same goes for supervision. Consider having to check not just the quality of your grad students’ work, but whether they were lying to you about their data. Much of what we do would become simply impossible.

Healy is correct here: There has to be some degree of trust for peer review and mentoring to function with any kind of efficiency. There are ways of promoting transparency — making data available and so forth — but there are also limits to this. In some ways, if LaCour did what he stands accused of doing, then that kind of fraud is hard to detect prepublication precisely because it is so self-defeating in the long run that no one expects it. As Don Green told New York magazine’s Jesse Singal:

I obviously have gotten along very nicely with Michael, and we have been friendly. But my puzzlement now is, if he fabricated the data, surely he must have known that when people tried to replicate his study, they would fail to do so and the truth would come out. And so why not reason backward and say, let’s do the study properly?

Combine this difficulty with the increasing incentive to have impact. As Bloomberg’s Megan McArdle notes in her take:

We reward people not for digging into something interesting and emerging with great questions and fresh uncertainty, but for coming away from their investigation with an outlier — something really extraordinary and unusual. When we do that, we’re selecting for stories that are too frequently, well, incredible.
As anyone who’s actually reported [or studied] a long, complicated issue can tell you, the world rarely offers you the kind of story your colleagues are waiting to hear and cheer: a straightforward, simple narrative with obvious conclusions. Yet we continue to pay the most attention to those who provide those narratives — and so, we shouldn’t be shocked when some of those people turn out to have delivered by being credulous, or fraudulent….
And if we want fewer false stories from the media and academia, it’s no mystery how to do that: We need to reward people for rigorous investigations of interesting questions, not for finding the incredible. Unless we do that, we shouldn’t be too astonished when we occasionally learn that we’ve been stumbling around in the dark.

As I’ve argued in the past, I don’t think social science has that big a problem with making its voice heard in the public sphere. Indeed, this week has been shot through with ongoing academic debates that have significant real-world consequences. So maybe it’s time that we start debating how to think about the effects of that impact on the larger social science enterprise. Social scientists should strive for impact — but only after they strive to do quality social science.
