
Opinion | The bad guys on social media are learning new tricks


The story of malign behavior on social media may seem old at this point, yet somehow bad actors are managing to keep it fresh. The end-of-year adversarial threat report from Meta is a much-needed review of the latest tactics — and just as essential a warning.

The company formerly known as Facebook routinely shares how meddlers around the world are making mischief on its platforms, and how its investigative teams are trying to stop them. These missives have focused on what’s called coordinated inauthentic behavior, a frame that emphasizes conduct rather than content; no matter what you want to say, you can’t do it by creating an army of fake accounts. But the standard doesn’t capture a host of other harms also organized by malignant networks. Meta is adjusting in an attempt to keep up.

December’s report homes in on two new categories that Meta is targeting for disruption — “brigading” and “mass reporting.” Both of these terms-of-service breaches are routinely carried out by real accounts in addition to fake ones, and both of them do something more than the typical flooding of the information space with lies or propaganda: They attempt also to silence those they disagree with by drowning them out.

Brigading means commenting or posting repetitively and at volume to harass or silence others. Take, for instance, Meta’s removal of a network in Italy and France linked to an anti-vaccination movement known as V_V. In this case, the brigadiers planned on messaging platform Telegram to intimidate doctors, journalists and media out of discussing coronavirus vaccination. Where one vicious or misleading post might not have much of an impact, a thousand can make someone never want to speak up again — and intimidate those who identify with a bombarded individual.

Mass reporting, by contrast, involves cooperating to get others incorrectly booted off Facebook. Meta points to a network in Vietnam that conspired to stymie the speech of government critics by flagging their posts for various rules violations. Sometimes, the network would even set up accounts impersonating targets and then report the targets for impersonating them. This is a new way for people in power to shut down those who seek to challenge them.

The lines Meta draws in creating these new categories aren’t always so clear. What counts as sufficient coordination for a campaign to qualify as brigading? What happens when people work together to urge the reporting of content they believe genuinely violates policy, or think ought to be against the rules? Twitter faced one such conundrum this month when white supremacists took advantage of a new rule that lets users request removal of photos and videos of themselves to push the platform to purge identifying material posted by anti-extremism researchers, activists and journalists.

Social media sites must all navigate these thickets, providing as much detail and clarity to the public as possible about coordinated inauthentic behavior. Yet above all else, they must keep moving, because the bad actors won’t slow down.

