The Washington Post | Democracy Dies in Darkness

Facebook says it will now block white-nationalist, white-separatist posts


Facebook said Wednesday that it will begin banning posts, photos and other content that reference white nationalism and white separatism, revising its rules in response to criticism that a loophole had allowed racism to thrive on its platform.

Previously, Facebook had prohibited users from sharing messages that glorified white supremacy — a rhetorical discrepancy in the eyes of civil rights advocates who argued that white nationalism, supremacy and separatism are indistinguishable and that the policy undermined the tech giant’s stepped-up efforts to combat hate speech online.

Facebook now agrees with that analysis. In a blog post announcing the ban on “praise, support and representation of white nationalism and separatism,” the company said, “It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.” The new policy also applies to Instagram.

The rise and spread of white nationalism on Facebook were thrown into sharp relief in the wake of the deadly Unite the Right rally in Charlottesville in 2017, when self-avowed white nationalists used the social networking site as an organizing tool.

The following year, Motherboard, a tech publication owned by Vice, obtained internal documents meant for training and guiding content reviewers that revealed Facebook treated the terms differently: The materials showed that Facebook permitted “praise, support and representation” of white nationalism and white separatism “as an ideology.” The policy drew sharp rebukes from civil rights advocates, who have argued for years that the terms are interchangeable.

Facebook’s decision comes one week after the company made another announcement to appeal to long-standing complaints from civil rights advocates: The company prohibited advertisers from excluding minorities and other protected groups from ads for housing, employment and credit.


While Facebook long had banned white supremacy, the company said in its blog post Wednesday that it “didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people’s identity.”

But conversations over the past three months with civil rights groups and academics led Facebook to rethink its practices, executives wrote. As part of the dialogue, the company reviewed a list of figures and organizations and found overlap between white nationalism, separatism and supremacy.

Civil rights groups applauded the move. “There is no defensible distinction that can be drawn between white supremacy, white nationalism or white separatism in society today,” Kristen Clarke, president and executive director of the Lawyers’ Committee for Civil Rights Under Law, said Wednesday in a statement.

The organization had pushed Facebook for months to change its policies, pointing to pages such as “It’s okay to be white,” which has more than 18,000 followers and has regularly defended white nationalism. Another, called “American White History Month 2,” often posted white-supremacist memes, according to the Lawyers’ Committee. A cached version of the page from late February showed it had more than 258,000 followers before it went offline.


Facebook’s new policy comes as the company continues to struggle to take down other content that attacks people on the basis of their race, ethnicity, national origin and a host of other “protected characteristics.” Between Jan. 1 and Sept. 30, 2018, Facebook took action against 8 million pieces of content that violated its rules on hate speech, according to its latest transparency report. Facebook is not legally required to remove this content, but its rules prohibit it.

To help enforce its policies, Facebook has developed and deployed artificial-intelligence tools that can spot and remove content even before users see it. But the technology isn’t perfect, particularly when it comes to hate speech. The company removes only about 50 percent of such posts at the moment users upload them, it said last year. As a result, such content still can go viral on Facebook — a reality the company confronted this month when users continued to upload videos of the mass shooting in New Zealand that left 50 people dead. The shooter specifically sought to target Muslims, authorities said.


Even so, civil rights groups said, Facebook still has considerable work to do to address the spread of hate speech on its platform.

“As we have seen with tragic attacks on houses of worship in Charleston, Pittsburgh, New Zealand, and elsewhere, there are real-world consequences when social media networks provide platforms for violent white supremacists, allowing them to incubate, organize, and recruit new followers to their hateful movements,” Clarke said.