Last year, there was a big scandal in political science when a much-publicized paper was retracted on suspicion of fraud. The paper, “When contact changes minds: An experiment on transmission of support for gay equality,” by Michael LaCour and Donald Green, reported on an experiment that purported to show that a brief doorstep conversation with a political canvasser could cause big changes in attitudes toward same-sex marriage. That paper received major and unskeptical media coverage, including a segment on the radio show “This American Life.” It was a big deal, because social science’s understanding had been that political persuasion is difficult.

The scandal broke when it emerged that the study was faked: the data showed suspicious patterns that first author LaCour could not explain, and co-author Green retracted the paper. LaCour had also falsified claims about the study's funding and organization.

In the meantime, though, David Broockman and Joshua Kalla, two political scientists who were involved in uncovering the original fraud, conducted their own follow-up study. It was just published in Science, the same journal that published, then retracted, the LaCour and Green paper.

Here’s what Broockman and Kalla found in their field experiment:

A single approximately 10-minute conversation encouraging actively taking the perspective of others can markedly reduce prejudice for at least 3 months. We illustrate this potential with a door-to-door canvassing intervention in South Florida targeting antitransgender prejudice. … 56 canvassers went door to door encouraging active perspective-taking with 501 voters at voters’ doorsteps. A randomized trial found that these conversations substantially reduced transphobia. … These effects persisted for 3 months, and both transgender and nontransgender canvassers were effective. The intervention also increased support for a nondiscrimination law, even after exposing voters to counterarguments.

This new paper looks reasonable to me, and it’s also helpful that they follow good research practices (for example, Broockman provides replication data for most of his research projects). So, yes, it looks like an open-hearted conversation really can change minds. At least in some circumstances.
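As an aside, the basic logic of a randomized field experiment like this one can be sketched in a few lines of code. The sketch below uses entirely made-up numbers (a hypothetical 0-100 attitude scale and an assumed 5-point treatment effect), not Broockman and Kalla's actual data or analysis; the point is just that random assignment makes the simple difference in means an unbiased estimate of the average treatment effect.

```python
# Minimal sketch of the difference-in-means logic behind a randomized
# canvassing experiment. All numbers here are simulated for illustration.
import random
import statistics

random.seed(0)

# Hypothetical setup: 500 voters, half randomly assigned a canvassing
# conversation. Outcomes are scores on a made-up 0-100 attitude scale.
n = 500
treated = [True] * (n // 2) + [False] * (n // 2)
random.shuffle(treated)

def outcome(is_treated):
    base = random.gauss(50, 10)             # baseline attitude
    return base + (5 if is_treated else 0)  # assumed +5 point treatment effect

y = [outcome(t) for t in treated]
y_t = [yi for yi, t in zip(y, treated) if t]
y_c = [yi for yi, t in zip(y, treated) if not t]

# Because assignment was randomized, the difference in group means is an
# unbiased estimate of the average treatment effect.
ate_hat = statistics.mean(y_t) - statistics.mean(y_c)
se = (statistics.variance(y_t) / len(y_t)
      + statistics.variance(y_c) / len(y_c)) ** 0.5
print(f"estimated effect: {ate_hat:.1f} (SE {se:.1f})")
```

The real study's design is richer (placebo conditions, three-month follow-up surveys), but the randomization step is what licenses the causal claim.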

Also helpful is Betsy Levy Paluck’s thoughtful overview article that appears in the same issue of Science, where she writes:

What do social scientists know about reducing prejudice in the world? In short, very little. Of the hundreds of studies on prejudice reduction conducted in recent decades, only ~11% test the causal effect of interventions conducted in the real world. Far fewer address prejudice among adults or measure the long-term effects of those interventions. … As the authors acknowledge, these strong results in the wake of a brief intervention might seem surprising. But readers may find it even more surprising that so few previous field studies have tested the causal effect of any type of intervention, aimed at any type of prejudice. … Broockman and Kalla’s results thus do not represent a new challenge to an established field: They stand alone as a rigorous test of this type of prejudice reduction intervention.

In particular, Paluck’s article serves as a bit of a rebuttal to that “This American Life” segment, which was called “The Incredible Rarity of Changing Your Mind,” where host Ira Glass said:

There’s this thing called the backfire effect. It’s been documented in all kinds of studies. It shows that when we’re confronted with evidence disproving what we believe, generally we just dig in and we believe it more. And the rare times that people do change, it’s slow.

But maybe there is no “backfire effect”; maybe that’s just one more bit of incorrect folk wisdom from the psychology literature. Or, at least, the backfire effect did not seem to apply to Broockman and Kalla’s canvassers, who were so effective at persuading people to support transgender rights. At the very least, this suggests that persuasion and backfire effects are domain-specific.

Glass also said this: “[Don] Green says he and his colleagues have read 900 papers. And they haven’t seen anything like this result — anyone who’s changed people’s views and it lasted like this.” But maybe, as Paluck writes, it’s not that there were 900 papers showing that persuasion couldn’t work; rather, there were 900 irrelevant papers.

My point here is not to criticize Glass, who was reporting on a celebrated paper published in a peer-reviewed journal, but rather to follow Paluck and challenge the idea that there was pre-existing literature finding that persuasion couldn’t be done.

When I discussed the LaCour and Green study a few months ago, I questioned why anyone should bother studying this at all. I wrote, “Ulp. There are lots and lots of studies people are interested in doing, and I’m sure this activist group in Los Angeles has a long to-do list. Do you really think they should spend their precious time, money, and human resources to study an idea that is contradicted by an entire 900-paper literature and whose only claim to plausibility was a made-up experiment??”

It looks like Broockman and Kalla have proved me wrong.

What issues are susceptible to this sort of persuasion? Here I’d like to resurrect an idea I brought up in the discussion of the original (fraudulent) LaCour and Green paper: the idea of “pushing at an open door.” My argument then was that, given the surprisingly large and persistent (claimed) effects of the persuasion, perhaps it was working in part because attitudes on these issues were already fluid.

Could a similar intervention change attitudes on an issue, such as abortion policy, in either direction? Maybe not, given that aggregate attitudes on abortion have been steady for decades. Transgender rights, though, that’s a new issue — even the word “transgender” is new — and attitudes are moving fast. So it’s possible that the people persuaded by this intervention are people who were going to shift, and this persuasion just happened to be what did it for them.

I also want to point out a change of focus from LaCour and Green to Broockman and Kalla. It is my impression that LaCour and Green presented their result as a big surprise, as a big step beyond what was expected from the political psychology literature, and they seemed to be attributing much of their (claimed) success to particulars of their intervention, especially the idea that the canvasser was describing his or her own personal experience. In contrast, Broockman and Kalla don’t seem to be saying that their intervention has any special sauce; rather, it’s just a high-quality focused persuasion effort.

The other thing to think about is what these results imply for politics. Paluck writes,

Broockman and Kalla’s results thus do not represent a new challenge to an established field: They stand alone as a rigorous test of this type of prejudice reduction intervention. The authors combine a rigorous field experiment with long-term, high-quality measurement of its outcomes. Their exciting methodological template is now available to other investigators, allowing them to test how canvassing interventions affect prejudices and political attitudes.

This is all fine, but as a political scientist, I’m less interested in the impact of this research on other research than in its impact on politics. One question is scalability: it might be easier to run this intervention on a few hundred people than on many thousands of voters. A second question is what happens when many organizations are out there doing door-to-door persuasion. After all, if one organization can afford to do this, then others should be able to try it too, and then you’d expect to see diminishing returns.

So there are lots of interesting questions here. This new study by Broockman and Kalla is changing how we think about persuasion. At the very least, we should be skeptical of claims such as the backfire effect and careful about what the literature on persuasion really says. I like how Paluck’s article connects these results to larger questions in the psychology literature, and I’m interested in the political implications as well.