Twitter announced on Tuesday it will update its policy against hateful conduct to cover posts that dehumanize people based on their religion, bringing the social network roughly in line with Facebook and YouTube and marking the latest change to hate speech rules that critics say are ineffective.
The update builds on Twitter’s existing policies that bar the promotion of violence, threats and harassment against people in protected groups. But the company has acknowledged that what many people consider abusive tweets may not actually violate Twitter’s rules. New policies banning dehumanizing language are an attempt to close that gap, Twitter said.
“We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within,” Twitter said in a blog post published Tuesday. “Our primary focus is on addressing the risks of offline harm, and research shows that dehumanizing language increases that risk.”
In the blog post, the company also listed several examples of tweets that would violate its new rules.
Twitter won’t automatically flag offending posts, but will review them when they are reported by users. Starting Tuesday, the company said, it will require that dehumanizing posts be removed from the platform. Flagged tweets sent before Tuesday will still need to be deleted, the company said, but will not directly lead to the suspension of the account holder, because those posts were published before the new rule took effect.
Twitter’s update tracks similar policies already in place on other social media networks. Facebook prohibits dehumanizing speech and statements of inferiority aimed at people based on “protected characteristics,” which include race, ethnicity and religious affiliation. YouTube also bars dehumanizing language that targets individuals or groups based on religion and other protected “attributes,” such as nationality and sexual orientation.
But how social networks enforce those rules continues to draw criticism. Facebook’s vision to place private communications and intimate groups at the center of the platform’s future has led civil rights groups and other advocates to question how the company will monitor hate speech, harassment and misinformation. Last week, a ProPublica investigation revealed a secret, private Facebook group made up of current and former Border Patrol agents that contained callous jokes about the death of migrants and an obscene illustration of Rep. Alexandria Ocasio-Cortez (D-N.Y.) and President Trump.
And last month, YouTube drew vocal criticism over its handling of reported harassment targeting a journalist because of his race and sexuality, leading the company to issue a confusing series of statements, clarifications and announcements. The episode highlighted what many LGBT creators describe as the company’s hypocritical stance: promoting itself as an inclusive online space while facilitating bigoted speech.
Twitter said its rules against dehumanizing language will first cover religious groups, but will eventually expand to other protected classes. Before it broadens the rule, however, the company said it needs to assess additional factors, including how to protect conversations within marginalized groups in which people use “reclaimed” terminology, and how to ensure its enforcement takes context into account and reflects the severity of the violations.