Trevor Davis is chief executive of the digital intelligence firm CounterAction and a research professor at George Washington University. Matthew Hindman is an associate professor of media and public affairs at GWU and author of “The Internet Trap.” Steven Livingston is director of the Institute for Data, Democracy, and Politics at GWU.
In recent weeks, Facebook has faced a $5 billion Federal Trade Commission fine and may be a target in a newly opened federal antitrust investigation. On the fight against election interference, though, Facebook chief executive Mark Zuckerberg used a recent investor call to take a victory lap. “Recent elections in the [European Union] and India show that our efforts are working,” Zuckerberg said, and “thanks to our efforts, the elections were much cleaner online.”
Can Facebook really prevent a repeat of the kind of election meddling that Russia conducted in 2016, and that dozens of other countries and nonstate groups are now trying to replicate?
Our research suggests the answer is no. Data we gathered from October 2018 through May 2019 show that large-scale manipulation of Facebook is still possible and that big networks of fake accounts operated undisturbed even during European parliamentary elections almost three months ago.
Russia’s meddling in the 2016 U.S. presidential election relied on Facebook being asleep at the switch: Posts from fake Russian accounts reached 126 million users during the campaign, and Russian spy agencies used Facebook to organize real-world rallies in U.S. cities. Former special counsel Robert S. Mueller III testified last month that Russian meddling continues “as we sit here” and that other countries are racing to develop similar capabilities.
Zuckerberg wrote last year that Facebook has gotten much better at detecting what it terms “coordinated inauthentic behavior,” deploying “sophisticated systems that combine technology and people to prevent election interference on our services.” Facebook staffed “war rooms” in Dublin and Singapore before the European and Indian elections, though it let news organizations see them for only a few minutes and banned reporters from talking to employees.
Our new study raises doubts about how well Facebook’s systems work. We catalogued 6,817 official Facebook pages run by German political parties before May’s European Parliament elections: public pages of politicians, party regional branches, official factions, youth wings and national pages. In total, our data include nearly 220 million Facebook interactions on these pages from October 2018 through May 2019, in the run-up to the E.U. elections.
What we found were bizarre patterns favoring the far-right, anti-immigrant party Alternative für Deutschland. AfD is a medium-sized party in Germany that polled between 10 and 15 percent in the months leading up to the E.U. elections.
On Facebook, though, AfD dominated to an astonishing degree. AfD pages received 86 percent of total shares and 75 percent of all comments — three times the comments, and about six times the shares, of all other political parties combined. It is likely no other political party has ever dominated Facebook during a free election as thoroughly as AfD.
Worryingly, many of AfD’s likes and shares came from a cluster of 80,000 accounts with multiple features common in fake accounts but rare for human users. Many did nothing but frenetically promote AfD content, including obscure party pages that the accounts themselves didn’t follow.
Most of these accounts are obviously suspicious, such as the 17,579 profiles with seemingly random two-letter first and last names that each liked hundreds of AfD posts. Facebook’s rules require real names. Despite this, thousands of the most active pro-AfD users had first names such as “Mx” or “Ch” or “Ew” that cannot be given to anyone born in Germany, to say nothing of odd last names like “Pl” or “Ak” or “Dz.” Since the election, many of these accounts have changed their names and photos: “Ew” has become Marianne, and “Ch” now goes by Tamara.
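The pattern described above is simple enough to check mechanically. As a rough illustration only — the account names below are invented, and this is not the study’s actual methodology or code — a filter for profiles whose first and last names are both seemingly random two-letter strings might look like this:

```python
import re

# Matches a "name" that is exactly two letters, like "Ew", "Ch" or "Pl".
TWO_LETTER = re.compile(r"^[A-Za-z]{2}$")

def looks_suspicious(first_name: str, last_name: str) -> bool:
    """Flag a profile when both name parts are bare two-letter strings,
    the pattern common among the hyperactive pro-AfD accounts."""
    return bool(TWO_LETTER.match(first_name)) and bool(TWO_LETTER.match(last_name))

# Invented sample data for illustration.
accounts = [
    ("Ew", "Pl"),             # fits the suspicious pattern
    ("Ch", "Dz"),             # fits the suspicious pattern
    ("Marianne", "Schmidt"),  # ordinary name, not flagged
]

flagged = [a for a in accounts if looks_suspicious(*a)]
print(flagged)  # [('Ew', 'Pl'), ('Ch', 'Dz')]
```

A check this crude would of course need corroborating signals (posting frequency, engagement with pages the account doesn’t follow) before any account could reasonably be called fake — which is what makes Facebook’s failure to catch thousands of accounts matching it so striking.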
AfD’s Facebook activity resembles other cases in which fake accounts were used to boost online visibility. Selling fake accounts is a thriving online business, and false Facebook profiles can fetch anywhere from 8 to 100 euros, depending on the country and how carefully the fake identity has been cultivated. When reporters at the German broadcaster ZDF created a fake account to test the company, Facebook assured ZDF that the account was genuine. (Facebook has told us that it has taken action against a number of the suspicious accounts we flagged in our study.)
Facebook often talks about how sophisticated social media manipulation can be and how sophisticated its own methods now are in response. The suspicious activity we found in the European elections, though, was anything but sophisticated. If the European elections were “an important test for us,” as Facebook chief operating officer Sheryl Sandberg declared last month, it was a test the company failed spectacularly.
If Facebook couldn’t figure out that thousands of coordinated accounts with random two-letter names were suspicious, it has little hope of preventing the next attack by the Russians, the Iranians or anyone else.