The Washington Post

Farrow: Why aren’t YouTube, Facebook, and Twitter doing more to stop terrorists from inciting violence?

Social media companies already screen for child porn. They should police material that drives ethnic conflict, too.

Ronan Farrow, an attorney and former State Department official and UNICEF spokesperson, is a reporter who anchors “Ronan Farrow Daily” weekdays on MSNBC.

One ISIS video features alleged Chilean jihadist Abu Safiyya. (AFP PHOTO / ISIS)

“The graves are only half empty; who will help us fill them?” Twenty years ago, that rallying cry on Rwandan radio helped explode ethnic enmity into one of history’s worst atrocities. In today’s Iraq, another vicious conflict between a formerly empowered ethnic minority and a long-subjugated majority is causing the deaths of thousands. At its heart is another mass-media appeal to bloodlust on radio’s modern-day equivalent: social media. And this time, the world may have a chance to stop what it failed to stop in Rwanda.

The Sunni Islamic State insurgents, now locked in a deadly struggle with Iraq’s Shiite majority, excel online. They command a plethora of official and unofficial channels on Facebook, Twitter, and YouTube. “And kill them wherever you find them,” commands one recent propaganda reel of firefights and bound hostages, contorting a passage from the Koran. “Take up arms, take up arms, O soldiers of the Islamic State. And fight, fight!” adds another, featuring a sermon from the group’s leader, Abu Bakr al-Baghdadi. The material is often slickly produced, like “The Clanging of Swords IV,” a glossy, feature-length film replete with slow-motion action scenes. Much of it is available in English, directly targeting the recruits with Western passports who have become one of the organization’s more dangerous assets. And almost all of it appeals to the young: Photoshopped images of Islamic State fighters and their grisly massacres, with video-game-savvy captions like “This is our Call of Duty.”

But officials at social media companies are leery of adjudicating what should be taken down and what should be left alone. “One person’s terrorist is another person’s freedom fighter,” one senior executive tells me on condition of anonymity. Making that call is “not something we’d want to do.”

And so official Islamic State accounts often remain on Twitter for weeks and accumulate tens of thousands of followers before being taken down. A few propaganda videos have been removed from YouTube for “violating YouTube’s policy on shocking and disgusting content,” but countless others remain, including last weekend’s sermon by Baghdadi, posted by an account claiming to be Islamic State-affiliated and carrying more than 12,000 views. (Rather than direct traffic to such accounts, I decline to link to them here.)

There are legitimate free-speech questions here: What about reporting on propaganda? What about peaceful lectures by otherwise violent terrorists? But those grey areas don’t excuse a lack of enforcement against direct calls for murder, which these companies supposedly ban. “I understand there are freedom of speech concerns, but I don’t think that describes what’s going on with much of the content on YouTube,” says Evan Kohlmann, a counter-terrorism analyst with Flashpoint Partners and NBC News. “No one’s suggesting they remove all journalistic clips… This is about extremely explicit content, calling for violence.”

Another objection is practical. There’s simply too much content to monitor, and too many openings for it to come back when quashed. An executive at one major social media company described it as the “whack-a-mole” phenomenon—take down one video, it springs up elsewhere. But flawed enforcement shouldn’t excuse inaction any more than it did in Rwanda 20 years ago, when the U.S. government deemed jamming solutions too legally complex, too expensive, too impractical. The perfect, then as now, was the enemy of the good.

More troubling still is the fact that these companies already know how to police and remove content that violates other laws. Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography. Many, including YouTube, use a similar technique to prevent copyrighted material from hitting the web. Why not, in those overt cases of beheading videos and calls for blood, employ a similar system?
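A rough sense of how that screening works: files already judged to violate policy are fingerprinted once, and every new upload is fingerprinted and checked against that database before it goes live. The sketch below is a deliberately simplified illustration of the idea, using exact SHA-256 hashes and a made-up blocklist; production systems such as Microsoft’s PhotoDNA and YouTube’s Content ID rely on perceptual fingerprints that survive re-encoding, cropping, and other edits, and nothing here reflects any company’s actual implementation.

```python
# Illustrative sketch only: a hash-based upload screen.
# Real systems (e.g., PhotoDNA, Content ID) use perceptual fingerprints
# that tolerate re-encoding; plain SHA-256 only catches exact copies.

import hashlib
from pathlib import Path

# Hypothetical blocklist of fingerprints for files already judged to violate policy.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def screen_upload(path: Path) -> bool:
    """Return True if the upload matches a known flagged file and should be blocked."""
    return fingerprint(path) in KNOWN_BAD_HASHES


if __name__ == "__main__":
    upload = Path("incoming_upload.mp4")  # hypothetical new upload
    if upload.exists():
        print("blocked" if screen_upload(upload) else "allowed")
```

The point of the lookup-against-a-registry design is that it blunts the “whack-a-mole” problem: once a clip is flagged and fingerprinted, re-uploads can be caught automatically rather than waiting for users to report them all over again.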

They don’t. Indeed, Twitter, YouTube and Facebook all say they strictly refuse to police content themselves—instead relying on third parties, mostly users around the world, to flag objectionable content. But the constant torrent of new content is not a burden that can be practically managed by the crowd—any more than companies expect users to serve as the prime monitor for child pornography.

As always, beneath legitimate practical and ethical concerns, there is a question about the bottom line. Section 230 of the Communications Decency Act, part of the 1996 Telecommunications Act, shields these companies from liability for content that users post—as long as they don’t know about it. Individuals involved in content removal policies at the major social media companies, speaking to me on condition of anonymity, say that’s a driving factor in their thinking. “We can’t police any content ourselves,” one explains. Adds another: “The second we get into reviewing any content ourselves, record labels say, ‘You should be reviewing all videos for copyright violations, too.’”

Yet past is prologue. The world, with each lamentation of “never again,” has cursed its failure to stop Rwanda’s deadly broadcasts. A furious Samantha Power once complained that the United States “refused to use its technology” when it could have. And a Harvard study found that jamming the broadcasts could have saved tens of thousands of lives.

The Islamic State’s campaign of incitement is “definitely reminiscent of Rwanda,” John Prendergast, a former Clinton administration official focused on Africa who has studied Rwanda for years, tells me. Then, as now, exploiting sectarian hatred can quickly turn deadly on a massive scale. And, then as now, cracking down on the calls to kill is no panacea, but it can help.

These companies have a moral obligation to do more. And U.S. law should not create a legal barrier for them to act when lives are on the line. The current regime—enforced ignorance and half-measures—may be among our apologies when we recite Iraq’s “never again”s.
