The Washington Post
Democracy Dies in Darkness

Twitter and Instagram face backlash, boycott after anti-Semitic posts by British rapper Wiley

Wiley in London in 2017. (Ian West/PA/AP)

Twitter and Instagram are facing government scrutiny in Britain and a two-day boycott after the platforms came under criticism for their responses to posts by the rapper Wiley, a pioneer of grime music.

On Friday, the 41-year-old Wiley, whose real name is Richard Cowie, posted a succession of insults and conspiracy theories targeting Jewish people. The posts have since become the subject of a British police investigation.

Wiley’s management company said it would no longer work with the artist, who received a top honor from the government two years ago for his contributions to British music.

The “antisemitic posts from Wiley are abhorrent,” British Home Secretary Priti Patel tweeted Sunday.

“They should not have been able to remain on Twitter and Instagram for so long and I have asked them for a full explanation,” Patel wrote. “Social media companies must act much faster to remove such appalling hatred from their platforms.”

On Monday, Downing Street spokesman James Slack said that “we have set out very clearly that Twitter’s performance has not been good enough in response to the anti-Semitic comments made by Wiley and it needs to do much better,” according to the Associated Press.

Twitter removed numerous tweets that it deemed offensive but kept others online. The company temporarily locked the artist’s account, which is still visible. The original tweets were up for at least 12 hours, the AP reported.

Facebook did not immediately respond to a request for comment. The BBC reported that Instagram had removed some of Wiley’s content, too.

A 48-hour boycott of the platforms, organized by users who argued that the companies’ responses were too slow, began early Monday and is set to last through Wednesday morning. The campaign is using the hashtag #NoSafeSpaceForJewHate.

Among the boycott’s supporters are several British celebrities and politicians, including TV presenter Rachel Riley. Members of Parliament for the ruling Conservative Party and for several opposition parties, including Labour, also announced they would participate.

“Why on earth have @Twitter left up such blatant antisemitism and hatred? It hits all the dangerous beats, Jews get things you don’t get, they are in control, they think their better… This is dangerous stuff,” tweeted Labour MP Jess Phillips.


Jewish advocacy groups and prominent religious figures vowed to support the initiative, among them the chief rabbi of the United Hebrew Congregations of the Commonwealth, the British Holocaust Educational Trust and the American Jewish Committee.

“Your inaction amounts to complicity,” Chief Rabbi Ephraim Mirvis wrote in letters to Facebook chief executive Mark Zuckerberg and Twitter CEO Jack Dorsey.

“Antisemitism, or any form of bigotry, has no place in our societies or on social media,” the American Jewish Committee wrote on Twitter.

In an emailed statement, Twitter said the company is “committed to amplifying voices from underrepresented communities and marginalised groups.”

“Our Hateful Conduct Policy prohibits the promotion of violence against — or threats of attack towards — people on the basis of certain categories such as religious affiliation, race and ethnic origin,” the company said. “We enforce our rules judiciously and impartially for all and take action if an account violates our rules.”

The debate over Wiley’s posts reflects a growing controversy over how companies such as Twitter and Facebook should police their platforms.

Pressure on platforms to remove hate speech and misinformation more quickly is mounting. In recent weeks, a Facebook boycott by hundreds of major advertisers over hate speech and misinformation gained traction.

But some activists warned that speedier content moderation could have negative repercussions. Mass removal is possible only if platforms rely more heavily on automated systems, they caution, which raises the risk of legitimate content being taken down by mistake, with implications for freedom of speech.