Over the past week or so, the new 12-hour punishments have confused and frustrated some of the users who received them. In one instance, a trans Twitter user had their account limited after tweeting “f‑‑‑ you” at @VP, the government account of Vice President Pence.
Others speculated that the new punishment was automatically triggered by doing the Twitter equivalent of insulting a verified user to their face:
The reason this particular sort of crackdown would be frustrating, if true, has to do with context.
Twitter has long faced criticism for not doing enough to counter abuse on its platform, so it may seem like these rules would be welcome. But many users don't trust the company to enforce its own rules consistently. That's because Twitter's handling of high-profile incidents of abuse, like those targeting popular, verified users, hasn't always matched the experience of users with smaller followings who face the same kind of harassment; those users say Twitter hasn't done nearly enough to eliminate that sort of abuse.
As far as we can tell, in this case, it is not true that Twitter is specifically protecting verified accounts from obscenities. But whether intentionally or not, the new timeouts do seem to have the effect of expanding Twitter’s enforcement of its rules to include those who tweet certain obscenities at, say, politicians or other famous people they don’t like. And this hasn’t always been the case.
A week ago, BuzzFeed noticed outrage about the new account limitations among a very different subset of users. One popular account was furious that it had been put in a timeout for, it said, using the word “retard” in a tweet. Another user believed she was being punished for supporting President Trump. An article on Heat Street concluded that the limitations must be triggered by “politically incorrect language.”
Twitter doesn’t comment on the actions it takes against specific accounts — so we can’t verify the reasons behind each of these punishments — but we do know a little bit about how the new punishment works.
The new 12-hour limitations are part of the suite of new safety measures the platform announced earlier this year, Twitter said on Thursday, as part of a more aggressive approach to dealing with its harassment and abuse problem. The accounts that trigger the punishment don’t necessarily need to be reported to Twitter first; the company itself is doing the work to identify potentially abusive accounts.
That’s different from how Twitter handles rule-breaking that results in a locked account or a permanent suspension. In those cases, the company still relies on user reports of potential abuse. Moderators then evaluate those reports to decide whether action needs to be taken in each case.
The tweets that trigger the timeout are evaluated for a variety of factors, including the overall behavior of the account in question and the context of the tweet itself. For instance, it might matter whether the Twitter user tags the target of their message in the tweet, or how many similar tweets the account has sent. The company denied that the verification status of the targeted account was one of those factors, and said the new safety tools will continue to get smarter and more precise as time goes on.
Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, was cautiously optimistic about the account limitations on Thursday, in part because they still allow accounts in timeout to use the platform. But, she cautioned, the tool could still easily be “misused.” York urged Twitter, and other platforms like it, to “lay out exactly how these rules are enforced, and when” so that individual users can make more informed choices.
It’s easy to see why these 12-hour bans have upset some users who are looking to see Twitter enforce its rules more consistently, even as the company introduces more anti-abuse tools at a quicker pace. Twitter has, more and more, taken swift action against more visible incidents of harassment on the platform, particularly after the racist mob harassment of Leslie Jones in July. The company has banned several high-profile white nationalist accounts in recent months for “hateful conduct.”
Each change that Twitter has rolled out to try to do just that, to get “rid of the Nazis,” has felt overdue for those who have long asked Twitter to do more. In November, for instance, years after Twitter started to allow its users to “mute” the feeds of specific accounts, the company introduced a feature that lets you mute tweets containing certain keywords. In August, Twitter rolled out to everyone on the platform a “quality filter” that automatically screens potentially abusive tweets out of a user’s notifications, a tool that had long been available to verified users.
But a not-insignificant subset of its users — particularly those who have experienced sustained abuse — still simply don’t trust the company to handle abuse effectively. The writer Lindy West quit Twitter in January after concluding that safety was not a fight that Twitter, as a company, was going to win. “I’m personally weary of feeling hostage to a platform that has treated me and the people I care about so poorly,” she wrote in an essay about her decision.
Twitter is hardly alone among Silicon Valley’s companies in declining to discuss exactly how it enforces its rules or in writing rules that are intentionally vague. There are some decent arguments for the latter (specific rules make room for getting away with something on a technicality, for instance). But maybe, as the speculation about Twitter’s 12-hour limitations shows, there’s a case to be made for more transparency when the company suddenly tries something new.