Everyone knows Twitter has a problem with online abuse, especially Twitter. Even the company's chief executive has candidly said that the firm needs to be better about shielding its users from harassment -- a move that's essential to protect its reputation and its bottom line.
On Tuesday, the company announced three changes on its official blog aimed at combating some of its worst abuse problems. For one, Twitter is changing the language of its policy on threats to cover more types of abuse, said Shreyas Doshi, the company's director of product management. From the post:
"[We] are updating our violent threats policy so that the prohibition is not limited to 'direct, specific threats of violence against others' but now extends to 'threats of violence against others or promot[ing] violence against others.'"
The upshot of that change, Doshi said, is that it gives Twitter more leeway to combat abuse. So, for example, if a user faced death threats on the site, Twitter can now take action not only against those directly threatening him, but also against those egging on or even aiding his abusers.
The company is also changing the way it acts against harassers. When an account is reported, Twitter now reserves the right to freeze the account for a period of time, to compel abusers to delete tweets that break the rules, and to require a user's phone number before reinstating the account. Essentially, Twitter is putting users in time-out and making it easier to identify them down the line.
That's an important step. One of the biggest problems Twitter faces when it comes to abuse is the "whack-a-mole" problem. In many cases, a Twitter user may report the account of someone who's harassing her -- and maybe even get that account suspended -- only to find that her abuser has created a new account and carried on without a hitch. By asking users to verify their phone numbers, which are at least slightly more difficult to obtain than e-mail addresses, Twitter is looking to identify those chronic rule breakers.
The company is also trying to limit how much vitriol a target of abuse sees. Doshi said Twitter is testing a product that allows the firm to identify abusive tweets by matching them against a "wide range of signals" that are common among abusive accounts. These include the age of an account or messages that bear a similarity to previous messages that have been deemed abusive by Twitter's safety team.
Armed with that information, the new tool is designed to limit the reach of those tweets by, for example, preventing them from appearing in a target's notifications timeline. (If you follow an account that's abusing you, however, you will still see that user's tweets in your timeline.)
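The mechanics of such a filter can be pictured as a simple scoring pipeline. The sketch below is purely illustrative: the signal names, thresholds, and helper functions are assumptions, not Twitter's actual system, but they show how account age and similarity to previously flagged messages might combine into a score that decides whether a tweet reaches a target's notifications -- while still letting through tweets from accounts the target follows.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

# Hypothetical examples standing in for messages a safety team has flagged.
KNOWN_ABUSIVE = ["you should die", "everyone go attack this account"]

@dataclass
class Tweet:
    author: str
    text: str
    author_account_age_days: int

def similarity(a: str, b: str) -> float:
    """Rough 0..1 text similarity using Python's stdlib SequenceMatcher."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def abuse_score(tweet: Tweet) -> float:
    # Signal 1 (assumed weighting): very new accounts are weakly suspicious.
    score = 0.3 if tweet.author_account_age_days < 7 else 0.0
    # Signal 2: resemblance to messages previously deemed abusive.
    score += max(similarity(tweet.text, known) for known in KNOWN_ABUSIVE)
    return score

def notifications(tweets, target_follows, threshold=0.8):
    """Drop likely-abusive tweets from a notifications feed, but keep
    tweets from accounts the target follows (mirroring the caveat above)."""
    return [t for t in tweets
            if t.author in target_follows or abuse_score(t) < threshold]
```

A real system would use far richer signals and learned models, but the shape is the same: score each tweet, suppress it from notifications above a threshold, and exempt accounts the target has chosen to follow.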
That follows the introduction of a "quality filter" that Twitter rolled out to its verified users -- celebrities and other notable people whose identities are known to Twitter -- which is designed to automatically screen out spam and abusive tweets.
While this could be seen as a move toward greater (and in some cases welcome) censorship on the service, Twitter was very clear that it won't censor users simply for holding unpopular or controversial opinions. And the post also made reference to the company's constant struggle to filter out threatening and abusive messages without limiting free speech.
"[Our] mission is to provide a platform for free expression to flourish; for that mission to be successful, we need to ensure that voices are not silenced because people are afraid to speak up," Doshi said.
Doshi added that Twitter will monitor how its updates are working, and continue to evaluate and tweak its policies as necessary.