About US is a new initiative by The Washington Post to cover issues of identity in the United States.


White-supremacist groups use social media to distribute their message, incubating their hate online and letting it spread. But when their rhetoric reaches certain people, the online messages can turn into real-life violence.

Several incidents in recent years have shown that when online hate goes offline, it can be deadly. White supremacist Wade Michael Page posted in online forums tied to hate before he went on to murder six people at a Sikh temple in Wisconsin in 2012. Prosecutors said Dylann Roof “self-radicalized” online before he murdered nine people at a black church in South Carolina in 2015. Robert Bowers, accused of murdering 11 worshipers at a Pennsylvania synagogue in October, had been active on Gab, a Twitter-like site used by white supremacists.


And just a few weeks ago, a 30-year-old D.C. man who described himself as a white nationalist was arrested on a gun charge after concerned relatives alerted police to his violent outbursts, including saying that the victims at the synagogue “deserved it.” Police say the man was online friends with Bowers.

“I think that the white-supremacist movement has used technology in a way that has been unbelievably effective at radicalizing people,” said Adam Neufeld, vice president of innovation and strategy for the Anti-Defamation League.

“We should not kid ourselves that online hate will stay online,” Neufeld added. “Even if a small percentage of those folks active online go on to commit a hate crime, it’s something well beyond what we’ve seen for America.”

In 2017, white supremacists committed the majority of domestic extremist-related killings in the United States, according to a report from the Anti-Defamation League. They were responsible for 18 of the 34 murders by domestic extremists documented that year.

The influence of the Internet in fostering white-supremacist ideas can’t be overstated, said Shannon Martinez, who helps people leave extremist groups as program director of the Free Radicals Project. The digital world gives white supremacists a safe space to explore extreme ideologies and intensify their hate without consequence, she said. Their rage can grow under the radar until the moment it explodes in the real world.

“There’s a lot of romanticization of violence among the far-right online, and there aren’t consequences to that,” said Martinez, who was a white-power skinhead for about five years. “In the physical world, if you’re standing in front of someone and you say something abhorrent, there’s a chance they’ll punch you. Online, you don’t have that, and you escalate into further physical violence without a threat to yourself.”

How hate spreads

Internet culture often categorizes hate speech as “trolling,” but the severity and viciousness of these comments have evolved into something much more sinister in recent years, said Whitney Phillips, an assistant professor of communications at Syracuse University. The targets of these comments are frequently people of color, women and religious minorities, who have spoken out about online harassment and hateful attacks for as long as social media platforms have existed and have called on tech companies to curb them.

“The more you hide behind ‘trolling,’ the more you can launder white supremacy into the mainstream,” said Phillips, who released a report this year, “The Oxygen of Amplification,” that analyzed how hate groups have spread their messages online.

Phillips described how white-supremacist groups first infiltrated niche online communities such as 4chan, where trolling is a tradition. But their posts on 4chan took a more vicious tone after Gamergate, the Internet controversy that began in 2014 with a debate over increasing diversity in video games and snowballed into a full-on culture war. Leaders of the Daily Stormer, a white-supremacist site, became a regular presence on 4chan as the rhetoric grew increasingly nasty, Phillips said, and stoked already-present hateful sentiments on the site.

Phillips said it’s unclear how many people were radicalized through 4chan, but the hateful content spread like a virus to more mainstream sites such as Facebook, Twitter and Instagram through shared memes and retweets, reaching much larger audiences.

Unlike hate movements of the past, today’s extremist groups are able to quickly normalize their messages by delivering a never-ending stream of hateful propaganda to the masses.

“One of the big things that changes online is that it allows people to see others use hateful words, slurs and ideas, and those things become normal,” Neufeld said. “Norms are powerful because they influence people’s behaviors. If you see a stream of slurs, that makes you feel like things are more acceptable.”

While Facebook and Twitter have official policies prohibiting hate speech, some users say that their complaints often go unheard.

“You have policies that seem straightforward, but when you flag [hate speech], it doesn’t violate the platform’s policies,” said Adriana Matamoros Fernández, a lecturer at the Queensland University of Technology in Australia who studies the spread of racism on social media platforms.

Facebook considers hate speech to be a “direct attack” on users based on “protected characteristics,” including race, ethnicity, national origin, sexual orientation and gender identity, Facebook representative Ruchika Budhraja said, adding that the company is developing technology that better filters comments reported as hate speech.

Twitter’s official policy also states that it is committed to combating online abuse.

In an email, Twitter spokesman Raki Wane said, “We have a global team that works around the clock to review reports and help enforce our rules consistently.”

Both platforms have taken action to enforce these rules. Writer Milo Yiannopoulos was banned on Twitter in 2016 after he led a racist campaign against “Ghostbusters” actor Leslie Jones. In August, Facebook banned Alex Jones from its platform for violating its hate speech policy. The following month, Twitter also banned him.

But bad actors have slipped through the cracks. Before Cesar Sayoc allegedly sent 13 homemade explosives to prominent Democrats and media figures in October, political analyst Rochelle Ritchie says he targeted her on Twitter. She said she reported Sayoc to the social media site after he sent her a threatening message, telling her to “hug your loved ones real close every time you leave home.” At the time, Twitter told her that the comment did not violate its policy, but after Sayoc was arrested, the social media site said that it was “deeply sorry” and that the original tweet “clearly violated our rules.”

The rules themselves, even when followed, can fall short. Users who are banned for policy violations can easily open a new account, Matamoros Fernández said. And while technologies exist to moderate text-based hate speech, monitoring image-based posts, such as those on Instagram, is trickier. On Facebook, where some groups are private, it’s even more difficult for those who track hate groups to see what is happening.

Tech companies “have been too slow to realize how influential their platforms are in radicalizing people, and they are playing a lot of catch-up,” Neufeld said. “Even if they were willing to do everything possible, it’s an uphill battle. But it’s an uphill battle that we have to win.”

Learning from the past

While hate speech today proliferates online, the methods used by these hate groups are nothing new. The path to radicalization is similar to the one the Nazis followed in the early 20th century, said Steven Luckert, a curator at the United States Holocaust Memorial Museum who focuses on Nazi propaganda.

“Skillful propagandists know how to play on people’s emotions,” Luckert said. “You play upon people’s fears that their way of life is going to disappear, and you use this propaganda to disseminate fear. And often, that can be very successful.”

The Nazis did not start their rise to power with the blatantly violent and murderous rhetoric now associated with Nazi Germany. Their rise began with frequent, quieter digs at Jewish people that played on fears of “the other” and on ethnic stereotypes. They used radio — what Luckert calls “the Internet of its time” — to spread their dehumanizing messages.

“They created this climate of indifference to the plight of the Jews, and that was a factor of the Holocaust,” Luckert said. “Someone didn’t have to hate Jews, but if they were indifferent, that’s all that was often needed.”

The antidote, Luckert said, is for people not to become desensitized to hate speech.

“It’s important to not be indifferent or a passive observer,” Luckert said. “People need to stand up against hate and not sit back and do nothing.”

Martinez, of Free Radicals, said that to combat the spread of hate, white Americans need to be more proactive in learning about the history of such ideologies.

She said she recently took her 11-year-old son to see the new memorial in Alabama that honors the more than 4,000 victims of lynching in the United States.

She said her son was overwhelmed by what he saw. Security guards who saw the boy attempting to process the display suggested that he ask his mother to get ice cream, a treat to ease the emotional weight of the memorial. Martinez refused.

“He’s a white man in America. I’m not going to let him ‘ice cream’ his way out of it,” Martinez said. “We have to shift this idea that we are somehow protecting our children by not talking about racism and violence. We can’t ice cream it away. We have to be forthcoming about our legacy of violence.”