Correction: An earlier version of this blog post misspelled Nate Silver’s last name. This version has been updated.
The revelation that President Trump’s former campaign chairman Paul Manafort shared polling data with a suspected Russian agent is a political bombshell. But it also reminds us just how many gaps remain in our understanding of Moscow’s efforts to intervene in the 2016 presidential election.
Luckily, we don’t have to wait for the outcome of special counsel Robert S. Mueller’s investigation to learn more.
By now it’s a well-established fact that Moscow tried to interfere in the 2016 vote. We know that Russian intelligence agents hacked into computers of the Democratic National Committee and Hillary Clinton’s campaign chairman and orchestrated the release of corresponding documents to compromise the Clinton campaign. We know that Vladimir Putin’s operatives in the Internet Research Agency (IRA) organized a large-scale social media campaign designed to influence voters in a variety of ways.
Yet we still don’t know whether Russia’s efforts made the difference. Given just how narrow Trump’s margin of victory was — less than 80,000 votes in three key swing states — it stands to reason that any help he received from Moscow could have helped him to win. But we can’t be sure — because we don’t know how many American voters were actually persuaded by Russian information operations to change their votes (or to stay away from the polls altogether).
Isn’t it time we tried to find out?
Many commentators seem to assume that we’ll never be able to know. But Sinan Aral, a professor at the Massachusetts Institute of Technology, says that’s misguided. “When I read in the newspaper that it’s impossible to know that the Russians changed the results of the election, I vehemently disagree,” he told me. “It is possible to know, with a certain degree of statistical confidence, the likelihood that Russian interference changed the results.”
It’s extremely hard to do, he warns. But if we can marshal the will, we can get much closer to the truth.
Social scientists and analysts are still debating the impact of the Russian intervention. Some — including statistician Nate Silver and political scientist Brendan Nyhan — argue that the effects of social media campaigns are exaggerated and that the Kremlin’s efforts were too modest and too unfocused to make a conclusive difference.
Others, such as John Kelly of the data-analysis firm Graphika, insist that the Russians were sophisticated enough to tailor their messages to key groups — such as African Americans, who were bombarded with social media posts designed to demotivate them from voting. “The IRA had dozens of accounts that were followed by large fractions of the specific online communities they sought to penetrate — quite successful in influencer marketing terms,” Kelly told me in an email. (He was one of the authors of a report on the 2016 election commissioned by the Senate Select Committee on Intelligence.)
But just how successful were they at changing voter behavior? How many potential Clinton voters did they persuade to pull the lever for Trump? Did they convince some voters to sit out the election altogether?
Take, for example, those three crucial swing states — Pennsylvania, Wisconsin and Michigan. As Philip Bump wrote about Clinton shortly after the election: “But for 79,646 votes cast in those three states, she’d be the next president of the United States.”
Aral says he and his colleagues want to study the Russian influence campaign in precisely this geographical context. The MIT scholars have developed a robust methodology for assessing how social media campaigns influence the behavior of their targets — and now they want to bring it to bear on the Russian meddling in 2016. “We need a rigorous, scientific postmortem on Russian misinformation to harden our democracy against future attacks,” he told me. “While current analyses focus on Russia’s reach, what we’re missing is an analysis of their impact — who their misinformation targeted and what effect it had.”
Aral and his MIT research partner Dean Eckles sent me what they call a “blueprint” for such a study. They propose zeroing in on the issue of “causality” by analyzing how different levels of disinformation changed behavior and opinions. They would use randomized experiments to estimate shifts in voter turnout and voting.
“For example, Facebook and Twitter constantly test new variations on their feed ranking algorithms, which cause people to be exposed to varying levels of different types of content,” they write. “One underpublicized A/B test run by Facebook during the 2012 U.S. presidential election caused users to be exposed to more ‘hard news’ from established sources, with effects on political knowledge, preferences, and voter turnout.” Given access to adequate data, the researchers claim they can estimate the impact of the Russian influence campaign in Michigan, Wisconsin, Pennsylvania and Florida “with 95% to 99% confidence.”
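The core of the comparison behind such an A/B test can be illustrated with a toy calculation. The sketch below is purely hypothetical — the function name, the exposure rates and the simulated data are assumptions for illustration, not the researchers' actual method — but it shows the basic statistical move: compare turnout between a group exposed to certain content and a control group, and put a normal-approximation confidence interval around the difference.

```python
import math
import random

def turnout_effect(exposed, control, z=1.96):
    """Estimate the difference in turnout rates between an exposed group
    and a control group, with a normal-approximation confidence interval.
    Each input is a list of 0/1 outcomes (1 = voted). z=1.96 gives ~95%."""
    p1 = sum(exposed) / len(exposed)
    p0 = sum(control) / len(control)
    diff = p1 - p0
    # Standard error for a difference of two independent proportions
    se = math.sqrt(p1 * (1 - p1) / len(exposed) +
                   p0 * (1 - p0) / len(control))
    return diff, (diff - z * se, diff + z * se)

# Simulated data only: hypothetical turnout probabilities, not real results
random.seed(0)
exposed = [1 if random.random() < 0.58 else 0 for _ in range(10_000)]
control = [1 if random.random() < 0.60 else 0 for _ in range(10_000)]

diff, (low, high) = turnout_effect(exposed, control)
print(f"estimated turnout shift: {diff:+.3f} "
      f"(95% CI: {low:+.3f} to {high:+.3f})")
```

Because assignment in a platform A/B test is randomized, the interval around the difference can be read causally — which is exactly the leverage Aral and Eckles argue the platforms' existing experiments provide.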
To conduct such a study properly, we’d probably need far more information from the social media platforms than they’ve been willing to release so far. (We still don’t know everything that Facebook and Twitter know, for example.) And it certainly wouldn’t hurt to know more about how the Russians did their targeting — and about any help they received on that front from outsiders. (Manafort?)
To be clear, we don’t need to do this to determine whether Trump colluded; the Mueller investigation has already revealed plenty on that score, and there’s sure to be more to come. The point is to get a more precise understanding of how online campaigns affect our real-world behavior — something we’re only just beginning to confront. We need to know for the sake of the future of American democracy.