The phrase “fake news” has lost much of its meaning in the months since the election, as partisans have increasingly employed it to describe any news coverage they find unfavorable. But for the purposes of the Stanford/NYU study, “fake news” means “news stories that have no factual basis but are presented as facts.”
Researchers Hunt Allcott and Matthew Gentzkow wanted to know how widely stories like these were spread, whether they were believed, and whether they could plausibly have affected the results of a very close election. Their first task was to catalogue, to the extent possible, the universe of major fake news stories on the Web in the months before the election.
They settled on a list of fake news stories investigated by the fact-checking sites Snopes and PolitiFact, as well as a number of big fake stories collected by Craig Silverman of BuzzFeed. The list isn't comprehensive by any means — there's virtually no limit to the number of fake news stories circulating on obscure websites. But Allcott and Gentzkow said that they believe it's a solid tally of the biggest fake election stories — the ones most likely to be seen by readers and potentially have an effect on Nov. 8.
They found, first of all, that the 156 fake stories in their database had a decidedly “pro-Trump” slant (a category that included anti-Clinton stories).
“There were more than three times as many pro-Trump fake news stories” as there were pro-Clinton (or anti-Trump) stories, Gentzkow said in an interview. Not only that, but they found that the typical pro-Trump fake news story was shared more often on Facebook than the typical pro-Clinton story.
Those facts aren't terribly surprising. They essentially confirm the conclusions of BuzzFeed's investigation into fake news. They also jibe with what creators of fake news have said themselves: They found a more receptive audience among Trump supporters.
But then there's the question lurking behind all this: Did the fake news stories actually matter? Did they tilt the election one way or another? To answer this, Allcott and Gentzkow administered a national online survey in the weeks following the election. They asked more than 1,000 nationally representative respondents whether they remembered seeing — and believing — a variety of news stories.
They asked about a number of real news stories, as well as fake ones. Their top-line findings were encouraging: People were more likely to report seeing and believing the real news stories. About 70 percent of respondents reported seeing major real news stories (like the “basket of deplorables” comment or the FBI's announcement on “new” Clinton emails), and close to 60 percent believed them.
By contrast, only about 15 percent reported seeing fake stories on such topics as Pizzagate or the Pope's alleged endorsement of Trump. And only about 8 percent actually believed them.
Still, in a close election an 8 percent belief rate for fake news stories is nothing to sneeze at. The election itself was decided by only 107,000 votes, or 0.09 percent of the total electorate.
Here's the kicker, though. Survey researchers know that when you ask people to recall certain things — like whether they saw a given news story — you're introducing a certain amount of error into your data. People's memories are imperfect. They may not recall things that actually happened, or they may recall things that didn't happen.
Allcott and Gentzkow wanted to correct for this recall bias. So they also made up a number of nonexistent fake news stories — fake fake news, if you will — and asked respondents whether they remembered seeing or believing them. Think of these as placebo fake news stories. “This approach mirrors the use of placebo drugs as controls in clinical trials,” they wrote.
For these stories, they invented a number of headlines that never actually appeared anywhere. They were plausible, involving things like the candidates and voter fraud, or FBI secrets on the candidates. But they were wholly invented by the researchers for this survey.
Perhaps surprisingly, they found that the recall rate on the placebo, or “fake fake” headlines, was nearly identical to the recall rate of the actual fake news stories.
“The share of people who recall seeing the placebo stories is only slightly smaller than the share who recall seeing [actual] fake stories,” Gentzkow said. “That means that the raw survey is going to dramatically overstate the share of people who were actually exposed” to fake news.
To put it another way, 15 percent of people reported seeing at least one of the actual fake news headlines circulating before the election. But 14 percent also claimed to recall seeing a “story” that literally didn't exist in any form until the researchers asked about it.
After netting out this false-recall rate, Allcott and Gentzkow estimate that only about 1 percent of Americans were actually exposed to fake news before the election.
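The logic of the correction can be sketched in a few lines. This is a simplified illustration using the rounded figures quoted above; the paper's actual adjustment is more involved than a simple subtraction.

```python
# Sketch of the placebo-based recall correction described above.
# Figures are the rounded shares quoted in the article, not the
# paper's exact estimates.

recalled_fake = 0.15      # share recalling at least one real fake-news headline
recalled_placebo = 0.14   # share "recalling" an invented placebo headline

# If placebo recall measures pure false memory, subtracting it out
# gives a rough estimate of genuine exposure.
estimated_exposure = recalled_fake - recalled_placebo
print(f"Estimated genuine exposure: {estimated_exposure:.0%}")
```

The point of the placebo design is exactly this subtraction: whatever share of people "remember" a headline that never existed tells you how much of the raw recall figure is noise.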
For their last trick, Allcott and Gentzkow did a back-of-the-envelope calculation of the persuasive power of these fake stories. Using a lot of complex math involving vote margins and numbers on the effectiveness of political advertising, they estimate that the average fake news story would have to be about 36 times as persuasive as the average political campaign ad for fake news to have tipped the balance of the election.
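The shape of a back-of-the-envelope calculation like this can be illustrated as follows. Every input below is a placeholder chosen for illustration, not a figure from the study, so the resulting ratio will not match the paper's 36x; the study's own derivation, which draws on the political-advertising literature, is considerably more involved.

```python
# Hypothetical sketch of this kind of back-of-the-envelope reasoning.
# Inputs are illustrative placeholders, NOT the study's actual figures.

electorate_share_needed = 0.0009   # 0.09% vote margin, from the article
share_exposed = 0.01               # ~1% exposed to fake news, from the article

# For fake news alone to swing the result, each exposed voter would
# need to be persuaded with roughly this probability:
required_persuasion = electorate_share_needed / share_exposed

# Placeholder per-exposure persuasion rate for a typical campaign ad
# (hypothetical value, for illustration only):
ad_persuasion = 0.0002

ratio = required_persuasion / ad_persuasion
print(f"Fake news would need to be ~{ratio:.0f}x as persuasive as an ad")
```

The key move is the division in the middle: a small required vote swing spread over a small exposed population implies a surprisingly large per-exposure persuasion rate, which is then benchmarked against what ads are known to achieve.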
While that estimate relies on a lot of strong assumptions and some flat-out guesswork, it does provide a good ballpark estimate of the effect of fake news in 2016. Going on these numbers, the effect of fake news seems to be a lot smaller than many observers had initially feared.