If you listen to Facebook defend itself, its argument boils down to the following: We’re doing our best. We know about the problems on our platform, and we’re doing all we can to address them. “I don’t think anyone can claim we haven’t taken a lot of exceptional measures to meet those very exceptional circumstances,” Facebook Vice President Nick Clegg asserted last month on “Meet the Press.”
As a point of fact, however, when it comes to misinformation on its platform, Facebook is not doing all it can. There are several measures the company could take to confront the problem that, so far, it has refused to take. These measures aren’t exceptional. Some of them — such as proactively putting clarifying fact checks directly in front of users who have previously been exposed to misinformation — are, in fact, quite simple. And our research makes clear that, when it comes to tackling misinformation, these measures can work.
Here’s how Facebook typically treats misinformation and fact checks at present. Imagine that a friend of yours shares a post with a false claim on Facebook — for example, a post greatly exaggerating the risks of the coronavirus vaccines. If the post reaches a certain level of popularity, Facebook will share the post with fact checkers. With Facebook’s financial support, those fact checkers will evaluate whether the post is accurate or not. If the fact checker determines that the post is inaccurate, Facebook will then append the verdict to the original post and link to the fact-checking website on which the verdict appears.
The academic research is clear: Fact checks reduce belief in false claims. People who see fact checks hold more accurate views as a result. This holds true across party lines and ideologies. It’s true when Democrats see fact checks of false claims by Democratic politicians, and it’s even true when Trump supporters see fact checks of false claims by Donald Trump.
So far, so good. But here’s where the company chooses not to act. Even though you saw the initial misinformation, Facebook won’t necessarily put the subsequent fact check in front of you. The fact-checking verdict is guaranteed to appear only alongside the original post. Unless you actively seek out your friend’s post again, you may never see the fact check that declared the post to be false. And so you may very well go about your life, believing that the vaccines are more dangerous than they actually are.
If it chose to do so, Facebook could make sure that, if you saw a post later fact-checked as false, you would see the fact check. But right now — even though it is well aware that you were exposed to a false post — Facebook doesn’t do that.
Recent evidence we’ve accumulated underscores the effectiveness of proactive fact checking. In a paper published in September in the Proceedings of the National Academy of Sciences, we describe experiments we conducted with fact-checking organizations around the world. In four countries — Argentina, Nigeria, South Africa and Britain — we randomly showed survey respondents fact checks of popular pieces of misinformation in their country. By randomizing, we were able to isolate the effects of fact checks on factual beliefs.
In each country, fact checks made people more accurate, effectively leading them to reject the misinformation. In three of the four countries, we were able to recontact our respondents two weeks later to see if the effects of the fact checks remained detectable. Indeed, they did. Around the world, fact checks reduce belief in misinformation — and durably so.
Fact checks can work on Facebook, too. In another paper, we describe results from experiments we conducted on a striking replica of the Facebook News Feed. In these experiments, carried out with the support of Avaaz, a civil society group, we brought survey respondents onto our Facebook replica and randomly exposed them to misinformation and/or fact checks. Across a wide array of issues, touching on newsmakers ranging from former president Donald Trump to Rep. Ilhan Omar (D-Minn.), the fact checks worked. They made people more accurate, reducing belief in misinformation to which they had previously been exposed. Fact checks were about as effective on our replica of Facebook as they were elsewhere around the world.
Given the clear evidence that fact checks reduce false beliefs — and the fact that Facebook already helps underwrite the production of fact checks — why doesn’t the company increase the visibility of fact checks on the platform? To be sure, scaling up fact checks to meet the scope of the misinformation challenge wouldn’t come cheap.
But CEO Mark Zuckerberg claims to care about the social consequences of the company he oversees. If that company truly cares about the cause of a well-informed public, it should bear the cost of bringing fact checks in front of the people who would benefit most from seeing them — the people who have been exposed to misinformation.