“We’ve heard from users that this [notifications] is an area where people don’t feel as though they have as much control on Twitter,” Harvey said. “You’re not searching for this content, but it’s still something that’s coming in to your Twitter experience.”
With the new feature, Twitter users will be able to compile their own list of words, phrases and emoji that they don’t want to see pop up in their notifications from the network.
The feature is similar to one Instagram rolled out earlier this year that let users block comments that contained certain phrases. Twitter’s move comes after an election cycle that saw, among other troubling incidents, a prominent anti-Semitic movement on the network. The company has been repeatedly criticized for not moving quickly enough to combat harassment on its site.
Harvey, who has been at Twitter for eight years, said she knows that some users are frustrated with the pace at which the network has been addressing harassment issues. “We haven’t always moved as quickly as we would like or done as much as we would like,” she said, adding that the company is trying to make sure that it has tools to shut down harassment on the site without crossing the line into limiting speech. “We have tried to be thoughtful, to make sure we don’t have unintended and negative consequences,” Harvey said.
Twitter has struggled for years with striking the right balance between protecting open expression on the network and protecting victims of harassment, often fielding heavy criticism that it has erred too far on the side of free speech. Harvey acknowledged that there were plenty of prime examples of Twitter’s shortcomings in policing harassment on the network during this year’s election.
The addition of the mute feature wasn’t driven by the dialogue around the election itself, Harvey said, but it did underscore for Twitter how much further it still has to go. She said that Twitter has also made some changes to the way that users can report harassment on the site to better reflect its policies.
Last year, Twitter explicitly banned “hateful conduct,” which prohibits promoting violence against, directly attacking or threatening others on the basis of race, ethnicity and a number of other attributes. The social network has now updated the language it uses in its harassment reporting tool to reflect that policy and inform users that they can report others for that kind of behavior. Twitter has also made it easier for bystanders to report abuse — so person B can report that person C is harassing person A.
Finally, Harvey said, Twitter is continually training its staff across the globe to recognize more forms of abuse. Twitter staff members review each report of abuse to determine whether it violates the company’s policies. It’s common to see Twitter users posting incredulous screenshots of notices from Twitter’s abuse team rejecting abuse reports, even when the reported tweets clearly violate Twitter’s rules. (Yes, Harvey sees them, too.)
Those oversights are upsetting, she acknowledged, and often happen because a reviewer doesn’t have the cultural context to understand why something may be offensive or abusive. Something that’s obviously offensive to an American may not be so obvious to a person born in India, and vice versa, she explained. So Twitter will be training staff on the historical and cultural context around particular types of harassment, Harvey said, such as anti-Semitism. She said the company will also focus more closely on keeping its staff up to date with the evolving language of hate on its network.
Harvey made clear that she knows Twitter could still be doing more to protect its users and said that she hopes the company will be able to update its tools and policies on harassment more frequently than it has in the past.
“I’m definitely not saying that we’re never going to get it wrong again, or that everything is fixed,” she said. “We will still get it wrong. But we’ll take those instances and use them to real-time course-correct.”