“Personally I think the idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way — I think is a pretty crazy idea. Voters make decisions based on their lived experience.”

— Facebook chief executive Mark Zuckerberg, in remarks at a conference, Nov. 9, 2016

Mark Zuckerberg’s statement did not age well.

At this point, it’s old news that Russia tried to influence the 2016 presidential election. Not long after the election, the Obama administration imposed sanctions on Russia, including the expulsion of Russian intelligence operatives. Six months later, then-FBI Director James B. Comey confirmed there was an open investigation into Russian interference in the election. And Russian operatives were indicted in 2018. This year, the report by special counsel Robert S. Mueller III put it all in print: Russia used email leaks, propaganda and social media to stoke societal divisions and undermine the integrity of the election process in the United States.

Still, Russia’s use of strategic propaganda is part of a decades-old playbook. What is new is how cleanly, simply and effectively it was able to distribute false information, manipulate mainstream media and amplify existing divisions using social media platforms. However, 2016 was not the first election in which social media played a role — so what changed? Why were Russian operatives able to amplify their message so clearly? And what does that mean for the 2020 election?

Let’s dig in.

The Facts

When President Barack Obama was elected to a second term in 2012, social media was just becoming central to everyday interactions. Facebook cracked 1 billion users in October of that year. Google fielded more than 100 billion searches per month in 2013. Still, the companies did not yet have the kind of advertising capabilities or the reach they have today.

Arguably, that shift began in 2013. Google and Facebook acquired smaller companies, including advertising exchanges and other platforms such as YouTube and Instagram, which expanded their reach. Facebook launched Custom Audiences and Lookalike Audiences, which pair the characteristics provided by advertisers with Facebook’s own data and algorithms. Essentially, these tools allow advertisers to target specific, individual users.
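To illustrate the general idea only: Facebook’s actual Lookalike system is proprietary, so the matching method, feature names and numbers in this minimal sketch are purely hypothetical. It shows how an advertiser-provided “seed” list could, in principle, be expanded into a larger set of similar, individually targetable users by comparing feature vectors.

```python
import numpy as np

# Purely illustrative sketch: Facebook's real Lookalike Audience system is
# proprietary. This hypothetical example only shows the general principle of
# expanding an advertiser-provided "seed" audience into a larger set of
# similar users by comparing feature vectors (interests, demographics, etc.).

def build_lookalike_audience(seed_features, all_user_features, top_k=1000):
    """Return indices of the top_k users most similar to the seed audience."""
    # Collapse the seed users into a single average "profile" vector.
    seed_profile = seed_features.mean(axis=0)

    # Cosine similarity between every user and the seed profile.
    user_norms = np.linalg.norm(all_user_features, axis=1)
    profile_norm = np.linalg.norm(seed_profile)
    similarity = (all_user_features @ seed_profile) / np.maximum(user_norms * profile_norm, 1e-12)

    # The most similar users form the expanded, individually targetable audience.
    return np.argsort(similarity)[::-1][:top_k]

# Hypothetical usage with random numbers standing in for real user attributes.
rng = np.random.default_rng(0)
seed = rng.random((50, 20))          # 50 advertiser-provided users, 20 features each
everyone = rng.random((10_000, 20))  # the platform's wider user base
audience = build_lookalike_audience(seed, everyone, top_k=500)
```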

Starting in 2014, a Russian troll farm called the Internet Research Agency began to promote propaganda and target American voters with polarizing messaging. In many ways, the agency behaved like a savvy Internet marketer, using the same tools and techniques that are common in digital advertising campaigns.

“They would create campaigns on different platforms and target different subgroups using the data-targeting capabilities of those platforms,” said Dave Carroll, a professor of media design at Parsons School of Design. The agency iterated and evolved its targeting techniques. Ultimately, it developed what Carroll described as a “sophisticated understanding of who uses the platforms, what they use them for and what messages might resonate best on those platforms. And then how to use the targeted capabilities of those platforms to test their own messages and hone with greater effectiveness.”

At the same time, Russian military intelligence (GRU) pushed propaganda into the media landscape through what researchers refer to as narrative laundering. They planted the seed of a story, attempting to have it picked up and distributed by larger and larger media outlets. They would promote these stories through fake personas on social media, made-up think tanks and alternative news outlets.

The GRU also used a “hack and leak” strategy, whereby Russian operatives hacked entities such as the Democratic National Committee and Hillary Clinton’s campaign, then leaked the stolen material to organizations such as WikiLeaks and to journalists. The content of these leaks was widely reported on, ultimately becoming a major national narrative of the 2016 election.

“What we have here is a multi-strategy, multithreaded approach to influencing and to dividing. And they are using the best tool at their disposal to do that. And that’s not always in coordination, but it potentially could be someday,” said Renee DiResta, technical research manager at the Stanford Internet Observatory and co-author of a recent report on GRU online operations.

By 2016, Russia had started more than 20 campaigns in 13 countries. Forty percent of these campaigns were on Facebook and nearly 90 percent were on Twitter, according to a report from Jacob Shapiro and Diego Martin at Princeton University’s Empirical Studies of Conflict Project. Shapiro and Martin reported that the campaigns often appeared across platforms, including on fake websites, Reddit, Instagram, WhatsApp and in Russian-controlled media. In other words, the campaigns worked just like targeted digital advertising campaigns: operatives in St. Petersburg were able to buy ads on Facebook, pay for them in Russian currency and run them on the platform.

“Basically the Russian operators had had free rein to do pretty well anything they wanted,” said Ben Nimmo, director of investigations at Graphika, which analyzes social media. That changed after these campaigns were identified. “Much more pressure has been put on the troll operations. They’ve had literally thousands of accounts shut down across various different platforms.”

As the social media networks started to crack down on Russian efforts, those efforts evolved and slowed, but they did not stop. Shapiro reported that Russia launched 12 new operations in 2017 and 2018.

Around the same time, Russian operatives shifted tactics. The number of bots, trolls and fake accounts declined significantly, while hashtag hijacking (where foreign actors co-opt genuine hashtags to push their own content), a technique that had been used sparingly, remained constant. Rather than using false content to defame a candidate or persuade a voter directly, operatives increasingly tried to polarize the online conversation by drawing on existing divisions.

Information operations also moved platforms. Shapiro found that the total number of foreign election influence efforts in news outlets, on Twitter and on Facebook declined after 2017. Instagram, YouTube and other platforms hosted fewer operations overall, but activity there remained steady and even expanded slightly in 2017 and 2018. Still, most campaigns appeared across platforms. It is not clear whether the move to smaller platforms like blogs, 4chan and Reddit was a result of less regulation, a shift in audience or simply a need for operatives to reach a certain number of views. After all, as Shapiro explained, operators at the Internet Research Agency may have “an output-based measure” that requires them to post a certain amount of content.

Nimmo argued that these shifting tactics could also be a signal that the tech companies’ efforts are succeeding. “The whole point [of a disinformation operation] is to stand out. If you’re trying not to get detected, you won’t have the same success in getting attention,” he said.

Still, while traditional political advertising is regulated, no new laws or policies govern digital political advertising. Rather, any clear changes since 2016 have come from the technology companies, not the government.

In October, Twitter announced that it would ban political advertising from the platform but reportedly struggled to define it. Regardless, it’s unclear how this policy will affect inauthentic behavior on the platform: foreign actors have generally used automated accounts, or bots, and coordinated inauthentic activity, not paid ads, to amplify hashtags.

Facebook has launched new strategies to prevent foreign interference in future elections and improve transparency, including updating the authorization and verification process for buying ads. The company has also identified and removed networks of accounts, pages and groups from Russia and Iran that engaged in coordinated inauthentic behavior.

While acknowledging the continued threat, Facebook’s head of security policy, Nathaniel Gleicher, said: “Each time we take down one of these campaigns, we examine the behaviors these actors use and then we deploy other tools to make those behaviors much more difficult in scale.”

However, foreign actors have found new ways to work around Facebook’s authentication processes. They recruit people who are from and live in the country they are targeting to post or share content that the foreign actors’ own accounts would otherwise have spread. For example, a Russian operative might try to convince an American to knowingly or unknowingly share Russian propaganda. That way, the content makes it into the conversation without the risk of Facebook taking down the operative’s account or page. Foreign operators have also continued to target legitimate journalists with strategic content in an attempt to push their narratives into the wider media ecosystem.

Facebook also created a political ad archive and added a “paid for by” disclaimer for political and social issue ads to increase transparency and prevent foreign actors from directly purchasing political ads. But it does not remove factually incorrect ads posted by politicians or candidates, which some have argued counteracts Facebook’s efforts at transparency.
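As a rough illustration of what that transparency looks like in practice, the public ad archive can be queried programmatically. The sketch below reflects my understanding of Facebook’s Ad Library API; the endpoint version, field names and parameters are assumptions that may differ from the current documentation, and the access token is a placeholder.

```python
import requests

# Hedged sketch: the endpoint, field and parameter names below are assumptions
# based on Facebook's publicly documented Ad Library API and may have changed;
# the access token and API version are placeholders, not real values.
ACCESS_TOKEN = "YOUR_APP_TOKEN"  # hypothetical placeholder

def search_political_ads(search_terms, country="US", limit=25):
    """Query the public ad archive for political/issue ads matching a term."""
    resp = requests.get(
        "https://graph.facebook.com/v15.0/ads_archive",
        params={
            "search_terms": search_terms,
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": country,
            "fields": "page_name,ad_creation_time,ad_creative_bodies,spend",
            "limit": limit,
            "access_token": ACCESS_TOKEN,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example: list which pages ran ads mentioning "election security".
for ad in search_political_ads("election security"):
    print(ad.get("page_name"), ad.get("ad_creation_time"))
```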

The Bottom Line

Russian operatives weaponized social media, using services and techniques that were designed by technology companies for advertisers. They co-opted traditional media by sharing hacked information and spreading sensationalized stories through fake online personas. They updated long-standing propaganda tactics with inauthentic behavior on social media and in traditional media to reach voters in the digital era.

Russia and others, possibly including domestic actors, will continue these efforts for the foreseeable future. Facebook, Google and Twitter have taken steps to combat disinformation operations and build more transparency into political advertising on their platforms.

Still, there is no new legislation governing digital political advertising, and there is no question that digital advertising will be a force in the 2020 election. (Since May 2018, Google and Facebook have sold nearly $1 billion worth of digital ads.) The question is whether a new start-up, Russian or otherwise, will seek to exploit vulnerabilities across government, journalism and social media that have not yet been identified or addressed in the 2020 election.
