The Washington Post

Opinion | Where do social media platforms draw the line between expression and exploitation?

Stephen K. Bannon walks on his terrace at the Hotel de Russie in Rome on Sept. 22, 2018. (Jabin Botsford/The Washington Post)

SOCIAL MEDIA allows those who might otherwise be voiceless to speak truth to power — but it also allows the powerful to speak all manner of lies. A new study on the vast web of disinformation woven by a network of media with ties to a Chinese businessman underscores the difficulty of drawing the line between expression and exploitation.

The Post reports on a study by analysis company Graphika that navigates the sprawling set of media outlets, local-action groups and accounts on Facebook, Twitter and their peers giving voice to some of the views of Guo Wengui, a real estate developer with connections to right-wing strategist Stephen K. Bannon. The network churns out content including falsehoods in the form of videos, memes and more that its members then amplify across the Web. The efforts started with a focus on denigrating both the Chinese Communist Party and anti-CCP dissidents living abroad, but more recently, conspiracy theories surrounding the 2020 U.S. election and covid-19 have abounded. A spokesman for Mr. Guo denied the onetime billionaire controlled content in the network.

The study reveals how the well-resourced can manipulate the online world to create offline consequences: These volunteers and paid workers, called “ants,” appear to coordinate on what to post and when and where to post it. Multiple activists who were targeted have been harassed and even physically assaulted at their homes. The report also exposes a murky area in platforms’ rules regarding malicious campaigning. Some of the accounts under the umbrella have been taken down for, say, “spam and platform manipulation” at Twitter — and certainly specific posts advocating imminent violence already breach most firms’ terms of service. But the question of whether the network as a whole does, or should, violate platforms’ policies is more difficult to answer.

Platforms have always been leery of becoming so-called arbiters of truth. This stance has led them to embrace policies that focus on behavior rather than on content, except where that content is particularly dangerous — and first up for punishment is behavior that is somehow “inauthentic,” or manipulative. That’s an easy label to apply when Internet Research Agency trolls are masquerading as Black Americans, or when a single individual is operating several accounts. But the participants covered by the Graphika report appear to be mostly authentic. They are real people with a real devotion to Mr. Guo’s ideas, yet, at the same time, their efforts are overwhelming the conversation about some topics.

Where does this leave platforms? Setting standards that are too strict around coordination threatens to stamp out the sort of grass-roots organizing that makes social media a boon to democracy. Standards that take into account the aims of coordination run into trouble, too. As we’ve seen, when it comes to false narratives about vaccines or the integrity of the vote, content does matter — but who’s to say where an awareness campaign ends and a propaganda crusade begins? As Harvard Law School lecturer Evelyn Douek put it, “Your information operation is my activism.”
