In an attempt to limit harassment of its users, Twitter is changing the rules for what you are allowed to tweet.
Here’s what you need to know:
- Abusive behavior, once covered in an “abuse and spam” section, now has its own section — the largest in the rules. It states, “We do not tolerate behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user’s voice.”
- You cannot tweet “hateful conduct,” which means: “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.”
- Twitter will now attempt to assist people who have threatened suicide or self-harm on Twitter, including “reaching out to that person expressing our concern and the concern of other users on Twitter or providing resources such as contact information for our mental health partners.”
- The definition of “violence” now “includ[es] threatening or promoting terrorism.”
- The rules explicitly state that if they are not followed, accounts may be temporarily locked or permanently suspended.
The announcement of these changes is the latest in a series of attempts by the social-media powerhouse to fix its dismal reputation for dealing with harassment. Last year, Twitter CEO Dick Costolo wrote in a memo that he was “ashamed” of how poorly Twitter had handled trolls.
“We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them. Everybody on the leadership team knows this is vital,” he wrote.
The memo signaled a long-awaited move for those who deal with digital harassment — which turns out to be almost everyone. The Pew Research Center found that 73 percent of Internet users have witnessed online abuse, from name-calling and physical threats to stalking and sexual harassment. High-profile hotbeds of abuse — such as the attacks on people advocating for the inclusion of women in gaming, better known as “Gamergate” — are just a slice of the world’s largest harassment pie, which targets minorities, religious groups, journalists, people who express political viewpoints, celebrities, gay people, homophobic people, elderly people — like we said, almost everyone.
Perhaps that’s why some feel that changing the rules to ban speech against specific groups is going too far: It is vague enough to frame any non-positive speech as “hateful conduct.” The National Review’s Katherine Timpf went on “Fox and Friends” to argue that Twitter executives are harder on conservatives than they are on liberals, and that this policy of trying to make the site a “nice happy placeland” will make the situation worse.
“This language is so vague, that you could really get anyone in trouble that you want to,” Timpf said.
The rule-change announcement didn’t name any specific group it was trying to shoo away or protect. But an obvious target is the Islamic State, the terrorist group whose social-media savvy has immensely accelerated its growth. The Brookings Institution found that there were at least 46,000 Islamic State-supporting Twitter accounts in 2014. Accounts like these, which have helped the Islamic State claim responsibility for terrorist acts, pose a significant problem for a platform that prides itself on promoting free speech.
In its rule changes, Twitter followed in the footsteps of Facebook by explicitly calling out all terrorism, rather than simply “violence” or a specific terrorist organization. The rules now state, “You may not make threats of violence or promote violence, including threatening or promoting terrorism.”