The Washington Post | Democracy Dies in Darkness

Opinion The Buffalo massacre shows how far social media sites still have to go

A small memorial set up at Tops supermarket in Buffalo on May 16. (Jeffrey T. Barnes/REUTERS)

The alleged gunman in Saturday’s mass shooting in Buffalo had a chilling explanation for sharing his racist rampage in real time: “Live-streaming this attack gives me some motivation in the way that I know that some people will be cheering for me.” Footage from the attacks in Christchurch, New Zealand, three years ago reportedly also spurred him on. Platforms must stop these videos from spreading before they have the chance to inspire more of their ilk.

Live-streamed violent video presents a twofold challenge. First comes the question of how to detect and remove video while it is being recorded — using some combination of automation and human review to yank footage of an atrocity off a site even as it is occurring. Next is the matter of detecting and removing new versions of that video, or links to those versions, created after the event has taken place.

Initially spotting violence in a live stream can prove difficult, especially for smaller platforms. Larger ones are often wary of automatically removing anything that looks like it could be an attack — lest such a tweak result in the takedown, for example, of videos capturing police brutality. But as it turns out, a video’s initial sharing may pale in importance to its resharing. The shooter in Buffalo streamed his rampage on Twitch and shared it on the chatting service Discord. The suspect’s original Twitch stream had only 22 viewers, and Twitch says it removed the video within an impressively fast two minutes after the shooting began.

Two minutes, however, was long enough for viewers to make copies. One of these duplicates alone, hosted on an obscure site called Streamable, racked up more than 3 million views before its takedown; at least one Facebook post linking to the footage was online for 10 hours. The Global Internet Forum to Counter Terrorism, a collaborative effort to disrupt violent extremism founded by social media companies in 2017, has made progress in rooting out offending images and videos, but there’s obviously more work to be done.

Still another issue is whether the right kind of tools might have prevented the murders in Buffalo from happening in the first place. The suspect in this case began mapping out his attack on Discord as long as five months ago, including in public channels. Sites of all sizes should devote more resources to scanning text for words that could indicate a plot as it is taking shape.

Preventing the spread of terrorist videos is a cross-platform problem. The failure of major services such as Facebook and Twitter to get rid of malign links, or of Discord to spot the extensive planning of this evil act, is alarming but also instructive. Lawmakers and everyday Americans are right to demand more of Big Tech, but littler tech is also an important part of the picture. Better performance from more responsible actors will go only so far to stymie the threat as long as horrific content, and copy after copy of that content, spreads freely across the rest of the Web.