A new report from the Simon Wiesenthal Center claims that digital hate speech is rampant, especially on Twitter — an illustration, perhaps, of the complexities of policing speech online.
According to the report, more than 20,000 “hate-spewing hashtags and handles” appeared on Twitter in 2012, up 5,000 from the year before. It is the 15th annual report on digital hate speech by the Wiesenthal Center, a Jewish human rights organization based in Los Angeles.
The handles include some unpleasant material: Among the accounts the Wiesenthal report flags are @malaheminspire, which the center says is the account for the al-Qaeda magazine Inspire, and @nationsaryan, an account for the white supremacist group Aryan Nations. The report also documents a number of explicitly anti-Semitic accounts, including one with the handle @ElCazaJudios (“the Jew hunter”) and several that seem to glorify Nazi figures.
“We have found an alarmingly high incidence of digital hate and terror on Twitter specifically,” Abraham Cooper, the associate dean of the Wiesenthal Center, said in a statement.
Twitter declined to comment on the report.
But the report’s claim that hate speech on Twitter is widespread — and that the site does nothing to stop it — is murkier than it first appears. The marquee statistic, roughly 30 percent growth in hate speech over the past year, does not account for Twitter’s 60 percent user growth over the same period. And perusing the report’s list of offending accounts pulls up few with more than 100 followers, suggesting these users live on the fringes of the 500-million-person network.
Still, this is a criticism Twitter has heard before. In January, the platform responded to government complaints about abusive speech on the site by developing a tool called “country withheld content,” which blocks flagged tweets and accounts only in countries where they are illegal, leaving them visible everywhere else. Nine governments have since flagged content through the system.
Germany became the first country to use the tool in October, when it asked Twitter to block a neo-Nazi account that’s illegal there. Twitter users outside of Germany can see the account, but German users receive a message reading, “This account has been withheld in: Germany. Learn more.”
That same month, a Jewish student group sued the site in French court for refusing to turn over the identities of the users behind anti-Semitic tweets. That attracted the attention of the country’s minister of women’s rights, among others, who penned an editorial for the newspaper Le Monde appealing to “Twitter’s sense of responsibility.” According to Mashable, the hashtags #unjuifmort (“a dead Jew”), #unbonjuif (“a good Jew”) and #SiMaFilleRameneUnNoir (“If my daughter brings home a black”) all trended in France in the last two months of 2012.
Twitter has long argued that it isn’t responsible for those types of tweets, just as AT&T isn’t responsible for calls placed on its network. But that defense didn’t fly in France, where a judge ordered the site to identify the offending users or face fines of 1,000 euros per day.
It also doesn’t seem to impress the people behind the Wiesenthal report, who called on the platform to actively find and delete offending posts, the way Facebook does.
But Twitter, with its “tweets must flow” ethos, doesn’t seem interested in the move to active moderation.
“How do you make sure you are both emboldening people to speak politically but making it OK to be on the platform and not endure all this hate speech?” Twitter chief executive Dick Costolo asked the Financial Times in a June interview. “It’s very frustrating.”