Around Thanksgiving, the same infographic kept appearing in my Twitter feed. The graphic, originally shared by Donald Trump, showed a series of statistics about race and gun deaths in 2015, alongside an image of a dark-skinned man with a handgun.
But here's the thing: The people I follow on Twitter weren't endorsing the bogus statistics — quite the opposite. News organizations shared the image along with links to their articles debunking it. Pundits shared the image to poke fun at Trump's credulity. Liberals shared the image along with their concerns that someone who would traffic in such fabrications could become president. (Trump, meanwhile, said the whole thing didn't matter: "All it was was a retweet.")
Even as they debunked and ridiculed the image, though, his critics continued to share it. Their intention in fact-checking Trump was to counteract the effect that these false statistics had on people's attitudes — but in sharing them, they may have done exactly the opposite. My research shows that even successfully corrected misinformation creates "belief echoes": effects on attitudes that persist even when you know that a piece of information is false.
A great deal of research has examined how and why people refuse to accept corrections. When it comes to politics, most people engage in a process called "motivated reasoning," meaning that their existing attitudes (for example, their partisanship) affect which facts they choose to believe and which arguments they find convincing. Motivated reasoning is part of why many people incorrectly think that President Obama is a Muslim, that vaccines cause autism or that former president George W. Bush knew about the 9/11 attacks before they happened.
In some ways, belief echoes are even more insidious than motivated reasoning: Their existence suggests that even if we accept intellectually that a piece of information is false, it still has the power to affect how we think.
In my research, I conducted three online experiments with a total of 905 participants. Each study lasted about eight minutes and followed a similar format. Participants were randomly assigned to read one of three different versions of a news article from the fictional Iowa Ledger describing a fictional congressional candidate named John McKenna. The first version (or the “control”) simply described the campaign. The second version included a paragraph in which McKenna’s opponents accused him of accepting donations from a convicted felon. The third version also included the accusation, but it was immediately followed by a correction: specifically, that an independent investigation by the Ledger had shown the accusation to be false.
After reading the article, participants answered several questions about McKenna. Those in the control group (who never saw the misinformation) evaluated him significantly more positively than those who read only the accusation. But in each of the three experiments, those in the third group (who read both the accusation and the correction) also evaluated him more negatively than the control group did.
Why did the people in the third group dislike the candidate, even though the article clearly stated that the accusation against him was false? One possible explanation is that some people simply did not believe the correction. After all, people often reject corrections of misinformation — especially when those corrections run counter to their existing beliefs. To test for this possibility, I asked all the participants a number of factual questions about the article, including whether McKenna had accepted donations from a felon. The results showed that the correction worked: People who read it knew that McKenna was innocent. But they still evaluated him more negatively than did participants who had never read the false accusation. Even when the correction worked, it wasn’t enough to “unring the bell” of exposure to misinformation. This remained true when the misinformation was corrected right away — even in the sentence immediately after the falsehood.
Belief echoes can arise through several processes. First, if the misinformation is vivid and emotionally affecting, it has a strong initial effect on attitudes, while the correction carries a much smaller emotional impact. Participants' opinions about McKenna behaved like a thermometer: the accusation of misconduct caused it to drop precipitously, but the correction did not produce a symmetrical rise.
The second process through which misinformation creates belief echoes is driven by our brains' instinct to create plausible causal narratives. In the few seconds after participants read about the accusation, their minds automatically went to work recalling facts that matched that narrative — even in the case of an imaginary politician. They may have remembered other sleazy politicians they'd heard of, or thought about their general dislike of Congress. After they learned that the misinformation was false, those memories remained, and they could continue to affect attitudes. For example, one participant wrote that even though they believed that the accusation was untrue, "it made me more suspicious of him — he might be covering something up."
The experiments also varied another factor: the party of the candidate. Half the participants were told that McKenna shared their partisanship, and half were told that he was from the opposing party. The results showed that belief echoes can cross partisan lines: Misinformation continues to affect how a person feels about a candidate even when they are both of the same party.
Belief echoes are not limited to the political world. Research in psychology, including several studies by Ullrich Ecker at the University of Western Australia and Stephan Lewandowsky at the University of Bristol, shows that effectively corrected misinformation can continue to affect attitudes in other domains as well. The existence of belief echoes means that if we want to minimize the impact of misinformation on attitudes, it is critical not to repeat it. Sometimes this might be unavoidable — for instance, fact-checking sites need to repeat the original statement in order to correct it. But when we spread a correction, whether it's through tweeting or conversation, we should do our best to avoid repeating the false information.
Belief echoes are more likely when the misinformation is vivid (for example, videos or images) and less likely when it is not (dry statistics and policy details). Unfortunately, this means that the times when we are most tempted to repeat misinformation — a horrifyingly inaccurate graph, an offensive comment in a debate — are also the times when it is most likely to create belief echoes.
So, as frustrated as we might be when Donald Trump makes things up on the campaign trail, the best advice may be to deal with him the same way we're told to deal with bees, small children throwing tantrums and Internet trolls: Just ignore him.
Twitter: @emilythorson