Live-streamed from a camera mounted on the Buffalo gunman’s helmet, the video is hauntingly gruesome — a first-person view as he fires a rifle into 10 people, some of them crawling on the supermarket floor. When he discovers a light-skinned man hiding in a checkout aisle, the gunman spares him, saying, “Sorry.”
It is exactly the kind of horrific terrorist video that the world’s biggest tech companies have vowed to block. But two days after the shooting, the footage was still widely available online, just as the gunman had hoped: according to a screed he wrote beforehand, he wanted the footage to bring more attention to his racist cause.
The episode shows how little has changed in the three years since a live-streamed rampage at mosques in Christchurch, New Zealand, revealed how mass shooters could harness leading social platforms to make their carnage go viral.
When the Buffalo gunman broadcast the shooting in real time Saturday on the live-streaming site Twitch, only 22 people were watching, and company officials said they’d removed it with remarkable speed — within two minutes of the first gunshots.
But all it took was for one viewer to save a copy and redistribute it online. A jumble of video-hosting sites, extremist message boards and some of Silicon Valley’s biggest names did the rest, ensuring millions of people would view the video.
One copy made its way onto the little-known video site Streamable, where, thanks to links posted on much larger sites, it was viewed more than 3 million times before it was removed. One link to that copy on Facebook received more than 500 comments and 46,000 shares; Facebook did not remove it for more than 10 hours.
“Terrorism is theater,” said Emerson T. Brooking, senior fellow at the Atlantic Council’s Digital Forensic Research Lab, which researches how information spreads online. “The purpose of terrorism is always to reach the greatest number of people possible with the most horrific or spectacular attack that you can perform.”
Live-streaming, he added, enables “terrorists to have a much greater impact. It essentially rewards and incentivizes attacks which are less sophisticated, and may kill far fewer people, but will still strike fear and horror in millions.”
A spokesperson for Meta, Facebook’s parent company, said it was working to permanently block links to the video but that the company had seen “adversarial” instances of people trying to circumvent its rules to share the video.
The suspect, Payton Gendron, 18, is accused of killing 10 people and injuring three others at a Tops Friendly Markets grocery store in Buffalo, police said. He had espoused a theory popular among white supremacists and on Fox News that White people are being systematically replaced in the United States. Eleven of the 13 people he shot are Black.
Guarding against live-streamed violence is regarded as one of the Internet’s toughest challenges, largely because of its scale. More than 8 million people stream on Twitch every month, broadcasting more than 2 million hours of video a day. Some 90,000 channels are streaming at any given time, company data show.
And because any one video can be endlessly duplicated and re-uploaded, extinguishing it entirely is almost impossible. Mainstream platforms can attempt to block it on their own sites, but they are mostly powerless to prevent third-party sites from hosting it. There are many such alternatives, like Streamable, around the Web.
The 180-page document allegedly written by the suspect, which law enforcement officials have said they are investigating, and which was originally uploaded to Google Drive, is filled with references to using video to achieve his cause. “Live-streaming this attack gives me some motivation in the way that I know that some people will be cheering for me,” he wrote.
The suspect wrote that he had started browsing 4chan, an anonymous board where users celebrate racist violence, two years ago while he was bored during the pandemic. It led him to graphic footage from the Christchurch massacre that the gunman, Brenton Tarrant, had recorded himself with a helmet-mounted camera. The killings aired for 17 minutes in real time on Facebook Live before the video was removed; 51 people were killed.
The video led him to Tarrant’s own extremist screed, the Buffalo suspect wrote, saying he “started to think about committing to an attack.” In his own screed, he said he wanted to live-stream the attack to help “increase coverage and spread my beliefs.”
The suspect had created a Twitch account in 2017, but he’d used it primarily to watch other streamers’ videos, according to account data and the writings he’d posted online. (Twitch is owned by Amazon, whose founder, Jeff Bezos, owns The Washington Post.)
In the documents, he said he had chosen to stream on Twitch instead of Facebook because “only boomers actually have a Facebook account nowadays” and because Facebook’s rules could limit the video’s reach.
He wrote in the documents that he had tested streaming to Twitch in March, saying he hoped the company wouldn’t cancel his stream “before I do anything interesting.”
Before the shootings Saturday, he invited people to an online space on the chat service Discord where he posted a link to his Twitch stream and his racist screed and wrote, “HAPPENING: THIS IS NOT A DRILL.”
During the stream, which he titled “Test for real,” he broadcast for about 25 minutes, mostly showing himself driving and talking to himself. Eventually, he pulls to a stop in front of the supermarket, opens his car door and immediately guns down a woman in blue who is walking outside the store. The video continues with the carnage inside.
At its peak, the live stream had an audience of 22 simultaneous viewers, according to screenshots from the footage.
Twitch was able to remove the stream within two minutes after the gunman began shooting, said Angela Hession, the company’s head of trust and safety.
The site, she said, has an all-hours escalation system in place to address urgent reports, such as live-streamed violence. But she declined to specify how the company had been able to react so quickly, saying it could enable bad actors to exploit the site.
Shortly after Twitch removed the video, however, at least one viewer who had watched in real time shared a saved copy, allowing it to be downloaded, re-uploaded and shared to far-right message boards, neo-Nazi channels on the messaging service Telegram, and sites devoted to gory and uncensored videos.
Within hours of the shooting, long clips of the video had begun surfacing widely, revealing the victims’ murders in brutal detail. On 4chan and other sites, users discussed ways they could ensure the video would stay online.
One copy was posted to Streamable, a service that is used primarily to share clips from Twitch gaming streams. A watermark on the video shows it was saved via the Icecream Screen Recorder, an app for capturing videos.
Streamable was bought last year by Hopin, a London-based videoconferencing service. The service’s terms of service ban videos that promote terrorism or acts of violence.
When The Post sent Streamable officials an email early Sunday alerting them to the video, the company sent an automatic response saying its officials worked only between Monday and Friday. The video was removed Sunday afternoon.
On its website and elsewhere, Streamable says it is based in Wilmington, Del. A visit to the address listed on its website Monday revealed a building with frosted windows, locked doors and a “For Sale” sign out front. Two men working across the street said they believed it had not been in use for months.
A second Delaware address associated with the company turned out to be an office building that serves as a mailing address for hundreds of companies, allowing them to incorporate in a business-friendly state without actually having employees there, according to the receptionist who answered the door.
Streamable founder Armen Petrosian did not respond to a phone message or email seeking comment.
After the Christchurch massacre in 2019, major tech companies created an industry-wide system, the Global Internet Forum to Counter Terrorism (GIFCT), designed to respond to future attacks. Adapting a technology long used to block videos of child sexual abuse, the group built a system that automatically detects and removes copies of terrorist-attack videos that have been added to a shared blacklist database.
Within a few hours of the Buffalo shooting, the group launched what it calls its “Content Incident Protocol,” its top-level alert to block videos from the websites of the forum’s four founding companies — Facebook, Microsoft, Twitter and YouTube — and newer members, including Airbnb, Discord and Amazon.
The emergency process had been activated twice before, blocking videos from a gunman’s Twitch-streamed attack at a synagogue in Halle, Germany, in 2019, and from another shooting at a shopping center in Glendale, Ariz., in 2020. The group has argued that its tools have allowed it to greatly restrict the sharing of the grisly videos online.
But the Buffalo shooting also revealed shortcomings. The database requires someone to flag a video after it has been recorded, so it cannot block a stream as it happens. And companies that are not members of the industry group — including Streamable and many other niche sites that host online videos — are not subject to the automatic bans.
Anyone uploading the shooting video directly to Facebook probably would have seen it automatically caught and blocked. But the morning after the shooting, people posted links to the Streamable copy on Facebook, and those links remained on the site for more than 10 hours.
Some people posted screenshots on Sunday showing they had attempted to report the link to Facebook but were told it did not violate the site’s community standards. A Meta spokesperson said the video did in fact violate Facebook rules and that the company has since “blackholed” links to video of the attack, preventing people from posting it again.
Twitter and other GIFCT member companies said they were also removing videos from the attack.
Brooking said the proliferation of the Buffalo shooting video showed that the tech industry had made some progress since Christchurch but also revealed how far it has to go — particularly in addressing the flow of prohibited content between smaller platforms like Streamable, which often lack dedicated content moderation staff, and larger platforms like Facebook and Twitter, which can deliver videos hosted elsewhere to mainstream audiences.
“It’s a perennial problem” in the tech industry, Brooking said. “No company ever raised money because of how good its content-moderation system was.”