Correction

A previous version of this article incorrectly identified streamer Tanya “Cypheroftyr” DePass as a Twitch Ambassador.

Every day, Daniel “Simooligan” Larsh, a gay man who took up Twitch streaming in 2020, faces harassment. Threatening phone calls, trolling messages, Twitch chat spam, Twitter accounts that publicize his personal information, fake accounts that purport to be him while spreading racist language — these are just a smattering of the tools his abusers employ. They’ve been at it for over six months.

Larsh is not a big streamer. His broadcasts generally pull between 15 and 50 concurrent viewers, compared to the tens or hundreds of thousands regularly drawn by Twitch’s most popular stars. And yet, the harassment campaign against him has gotten so bad that he’s largely quit streaming to preserve his own mental health. But in the wake of the recent explosion of “hate raids” — in which trolls overwhelm streamers’ chats with bot-powered fake accounts that spam hateful messages — Larsh couldn’t stay quiet. He has firsthand experience with the groups marshaling these ugly stampedes. And more importantly, he knows how they get away with it.

Hate raids have existed in various forms for years but became a larger Twitch concern earlier this month after a streamer by the handle RekItRaven posted a video of themselves getting raided, starting the hashtag #TwitchDoBetter. Spurred on by marginalized streamers who felt Twitch should do a better job of protecting them, the hashtag trended multiple times in a single week. Twitch acknowledged the movement on Twitter a few days later, saying it had rolled out an update to “better detect hate speech in chat” and that additional tools were on the way, but would not arrive until later this year. Since then, though, streamers say hate raids have only grown in number and ferocity.

Hate raids are a complex dilemma born of an era in which communities spin intertwining webs between platforms. The damage might be done on Twitch, but it’s organized on chat platforms like Discord and signal-boosted on YouTube and far-right-friendly video sites like BitChute. Larsh and a friend who streams under the name Leppely have spent months documenting the efforts of their own harassers, many of whom have gone on to contribute to Twitch’s hate raid epidemic. (Leppely and several other streamers mentioned in this story declined to share their real names due to harassment concerns.) To wit, in the hate raid video that first propelled #TwitchDoBetter into the upper echelons of virality, spam messages directed at RekItRaven proclaimed that “this channel now belongs to the KKK” and named Larsh their “grand dragon.” Trolls did this in an attempt to foist blame onto him. It is far from the only tactic they employ to mislead.

“If you search my username on Twitch, a whole bunch of accounts come up that trolls made, and some of those accounts use [personal information],” said Larsh. “Sometimes they would target my friends as well, so my stream became a liability. People started to become afraid to come to my stream, and so eventually I had to give it up.”

Larsh believes harassers decided to target him because they managed to get a reaction out of him by spewing the n-word over voice chat while he was playing multiplayer horror game “Phasmophobia” in February. Since then, he says he’s become a sort of “mascot” to them.

Multiple Discord servers in which trolls gather — which The Post has joined and viewed screenshots and video of — seem to bear this out. On servers with names like “Simooligan Legion” and “Taliban Legion,” users crack jokes (mostly focused on the n-word and other taboos), share explicit images to get a rise out of each other and plan raids, which they often call “visits.” If they manage to force an emotional reaction out of streamers, they record those moments and turn them into compilation videos they can watch and share for laughs.

They are not subtle about their motivations. During a July conversation on a hate raid Discord, one person questioned the use of Larsh’s Twitch handle in a raid, saying they didn’t know who he was. Another user justified the name-drop, saying: “Simooligan’s a f----- who cried about being called a n----- in ‘Phasmophobia.’ ”

Since Twitch’s first statement earlier this month, hate raids have gotten worse, streamers say. Much worse.

“My last stream, it was literally three hours of them hate raiding me,” said RekItRaven, who uses they/them pronouns. “They are doxing people now. So they are finding people’s information and literally throwing it out onto Twitch. … I’m in a situation where I really, genuinely need to protect myself and my family.”

Raven, who says they have been forced to inform authorities of their situation and meet with a lawyer, is far from alone. At this point, it is difficult to find a marginalized streamer who has not been hate raided, and many have had their personal information — including real names and addresses — posted online. To call attention to how bad things have gotten, a few, including Raven, are organizing a virtual strike for Sept. 1 under the hashtag #ADayOffTwitch.

While smaller streamers have borne the brunt of these attacks, even those who’ve worked directly with Twitch over the years as part of things like its “Ambassador” promotional program have been plagued by the raids.

“I’ve only streamed twice in the last six days and have been hate raided three times,” said Tanya “Cypheroftyr” DePass, a longtime streamer and diversity advocate. “They can’t even keep Ambassadors safe on their own platform, which is ridiculous.”

Hate raid tactics have evolved over time, with bot accounts becoming a hallmark more recently. This has forced streamers to adapt as well. In the absence of meaningful defenses provided by Twitch, one of Larsh’s chat moderators, who goes by the handle Modest Mishmash, began creating a suite of tools to combat bots a few months ago — just before the recent hate raid explosion.

“It was at that time I decided that manual banning and reporting isn’t going to stop automated attacks,” said Modest, whose “Smash Security Suite” can identify and auto-delete everything from obscene imagery to names and phone numbers. “However many moderators might be in chat, a bot could generate accounts faster than moderators can ban them.”
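Tools of this kind typically work by pattern-matching every incoming chat message faster than a human moderator could. A minimal sketch of that idea follows; the patterns, placeholder terms and function name are illustrative assumptions, not the actual Smash Security Suite.

```python
import re

# Matches US-style phone numbers like "555-123-4567" — the kind of
# personal information these tools auto-delete from chat.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

# Placeholder terms standing in for a curated list of slurs and
# hateful phrases; a real deployment would maintain this carefully.
BLOCKLIST = {"badword1", "badword2"}

def should_delete(message: str) -> bool:
    """Return True if a chat message should be automatically removed."""
    if PHONE_RE.search(message):
        return True  # message contains what looks like a phone number
    words = set(re.findall(r"[a-z0-9]+", message.lower()))
    return bool(words & BLOCKLIST)  # message contains a blocklisted term
```

The advantage of automating this step is exactly the one Modest describes: a filter evaluates each message in microseconds, where a human moderator cannot keep pace with a script generating accounts and spam.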

A streamer named HackBolt, who maintains a constantly updated list of all known bot accounts so that other streamers can preemptively ban them from their chats, said that he currently has “upward of 362,000” bots on his list, with that number increasing by between 2,000 and 7,000 per day. Twitch bans bots and other malicious accounts — earlier this year, it announced that it had removed 7.5 million of them — but it is evidently not as fast as the people making them.
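A shared blocklist like HackBolt’s is generally applied by checking chatters’ names against the list and preemptively banning any match. The sketch below shows the core of that idea with hypothetical names; it is not HackBolt’s actual code, and a real tool would issue the commands through Twitch’s chat interface.

```python
def preemptive_bans(known_bots: set[str], chatters: list[str]) -> list[str]:
    """Return the Twitch chat /ban commands for any chatter found on a
    shared bot blocklist (blocklist entries assumed lowercase)."""
    return [f"/ban {name}" for name in chatters if name.lower() in known_bots]
```

With a list in the hundreds of thousands of entries, membership checks against a set stay fast, which is what makes preemptive banning at this scale workable for individual streamers.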

“Account age is no longer a reliable factor to single out bots,” said Modest, “with real accounts getting hacked and then renamed into something offensive to be used as a bot account, or accounts following a streamer [en masse] and then spamming the chat weeks later to avoid follower-only mode. Some of the attacks are really simple and you immediately know that there’s just some person with a script doing it. Some are extremely dedicated and planned out. It is clear that there is more than one group behind it.”
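Because no single signal is reliable on its own, as Modest notes, detection tools tend to combine several weak heuristics into a score. The fields and thresholds below are assumptions for illustration only, not Twitch’s or anyone’s real detection logic.

```python
def suspicion_score(account_age_days: int,
                    followed_then_silent_days: int,
                    message_rate_per_min: float) -> int:
    """Combine weak signals into a rough bot-likelihood score.
    Any one signal can be evaded; together they are harder to dodge."""
    score = 0
    if account_age_days < 7:
        score += 1   # brand-new account (evaded by hijacking old accounts)
    if followed_then_silent_days > 14:
        score += 1   # followed weeks ago, only now chatting
                     # (the follower-only-mode evasion described above)
    if message_rate_per_min > 20:
        score += 2   # posting at spam speed
    return score
```

A moderator bot might time out anyone scoring above a chosen threshold, accepting some false positives during an active raid.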

On paper, it would seem like there’s a relatively simple solution to this problem: Discord could just ban the groups in question. In a statement to The Post, the platform said it does just that.

“We have a zero-tolerance policy against hate and violence of any kind on the platform and proactively monitor our service for activity that violates our Terms of Service and Community Guidelines, including raiding,” said a Discord spokesperson. “When we become aware of such activity, we take immediate action, including banning users and shutting down servers, and when appropriate, engaging with authorities.”

The problem, as outlined by Larsh and Leppely, is that users either just make new Discord servers or abandon old ones before they can ever get shut down in the first place. Meanwhile, many of these servers lock new users out until a trusted member has vouched for them, making it difficult to infiltrate and report servers for specific indiscretions.

Similarly, when individual users get banned, they just make new accounts. In chat logs reviewed by The Post, there are multiple instances of users complaining about how their accounts have gotten banned or requesting access for new accounts. In this way, Discord’s problem mirrors Twitch’s problem: It’s easy to create an account — perhaps too easy. As long as that remains the case, harassing groups can attack like hydras, unafraid to lose one, two, or three hundred heads if it means they damage their targets.

Where Twitch is concerned, streamers’ demands have not changed.

“Put a CAPTCHA on account creation [to] slow down and try to stop these bot creation tools,” DePass said. “Also, require [two-factor authentication] for making accounts. Get rid of allowing multiple accounts created from the same email, verified or not. … [Right now] it’s harder to [sign up] to a newsletter or a blog than it is to make a Twitch account.”
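The frictions DePass is asking for amount to a gate on account creation. A toy sketch of those checks follows; the class, its fields and the checks are entirely hypothetical, not how Twitch’s signup actually works.

```python
class SignupGate:
    """Toy model of the signup frictions streamers are requesting:
    CAPTCHA, two-factor authentication, and one account per email."""

    def __init__(self) -> None:
        self.used_emails: set[str] = set()

    def can_register(self, email: str, passed_captcha: bool,
                     has_2fa: bool) -> bool:
        email = email.strip().lower()
        if not passed_captcha or not has_2fa:
            return False  # blocks simple account-generation scripts
        if email in self.used_emails:
            return False  # no multiple accounts from one email address
        self.used_emails.add(email)
        return True
```

Each check raises the cost of mass-producing accounts, which is the entire economics of a bot-powered hate raid.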

Companies like Twitch and Discord have an incentive to avoid doing this: The more friction they create between people finding out about their services and using them, the more likely it becomes that people will back out before joining. Fewer users means less money. Twitch is also an extremely top-heavy platform; on Aug. 15, for example, the top 5,000 streamers pulled in 88 percent of overall viewership. This left over 100,000 smaller streamers who were live at the time with just 12 percent of overall viewership. This ratio varies, but it is indicative of Twitch’s standard split, which informs its priorities. Proactively created tools and features often focus on big streamers and monetization — not small streamers and harassment that’s predominantly focused on them.

“It’s all about the numbers,” Larsh said. “That’s what it comes down to. I’m just a number.”

With pressure mounting from small streamers and a handful of larger ones, Twitch addressed the issue of hate raids again late last week. On Twitter, the company said that it has been “continually updating our sitewide banned word filters to help prevent variations on hateful slurs, and removing bots when identified.” It also reiterated its goal of building tools that help streamers combat the issue of ban evasion on an individual channel level. The company explained, however, that it has been relatively silent on the matter out of necessity.

“As we work on solutions, bad actors work in parallel to find ways around them — which is why we can’t always share details,” read one of the messages the company posted on Twitter.

Companies in these sorts of situations typically go into “turtle shell defense mode” not out of callousness, but because it’s the only way they’ve found to react to new vulnerabilities without tipping their hand to those who wish to exploit them, said Patrick “KosmicStoat” Damon, a cybersecurity expert and longtime streamer.

“There is a reason when we look back at any infosec issues that Facebook has had — any infosec issues that any social media or major corporation have — you don’t hear about it immediately,” said Damon, who noted that they are trans and feel for other marginalized streamers in this moment. “You hear about it weeks or months afterward because they have to focus on the defense first. It’s standard fare. It’s just that Twitch is in a unique situation with it having essentially hundreds of thousands of contractors.”

In the meantime, streamers continue to suffer and face potential danger, a fact Raven and others hope to draw attention to with their upcoming strike.

“If we are, for a day, collectively standing in the amount of silence that Twitch has shown us, I think that speaks volumes,” Raven said. “I’m not expecting Twitch to tell me exactly everything that’s going on, but I do expect to be updated with a timeline. Tell me that you’re working on something and you’re going to try to roll it out within [for example] two weeks. That we can work with, right? Some form of transparency is nice, because these blanket statements are not cutting it.”
