Twitter has a troll problem. And for a long time, it has struggled to explain to its users how, exactly, it plans to solve it.

The last big changes to Twitter’s anti-abuse plan came in November, when the company beefed up enforcement of its “hateful conduct” policy, retrained its moderators and introduced a better “mute” feature to help filter out trolls by keyword. And while Twitter does seem to have cracked down more aggressively on accounts that violate its abuse policies since then, a central question remains: Will Twitter become more transparent about how it makes these decisions?

That lack of transparency, as critics have noted, has long frustrated those who want to see the platform do more to combat harassment and abuse. Twitter has, over the years, developed longer and more robust rules, while attempting to balance its anti-abuse efforts with open expression on the platform. But those rules remain somewhat vague, and the way they’re enforced from case to case can lack consistency.


That inconsistency may stem from the fact that many different moderators are tasked with interpreting and applying the same set of rules, quickly, across many cases over the course of a single day. Others have noted that attention from influential Twitter users or the media can seem to make the difference between Twitter taking action and doing nothing when a user appears to violate the site’s policies.

It’s hard to see how this plays out over time by examining a single case. So we’ve compiled a handful of recent instances of alleged rule-breaking on Twitter, and we have a challenge for you: Can you guess, for each of these cases, how Twitter’s moderators decided to apply the site’s rules?
