A member of the Islamic State waves a flag in the Syrian city of Raqqa on June 29. (Reuters)

In a scarily specific escalation of ISIS’s (already extreme) social-media “strategy,” a Twitter account affiliated with the terrorist group issued a string of threats against Twitter employees on Monday, urging “lone wolves” to “assassinate” employees of the social network in Europe and San Francisco.

The account — which, according to Vocativ, belonged to an ISIS-linked group called Al Nusra Al Maqdisia — was promptly suspended by Twitter, as countless ISIS accounts have been. In fact, within hours of the threatening tweets, Twitter had disabled the offending accounts and launched “an official investigation” into the matter.

“Our security team is investigating the veracity of these threats with relevant law enforcement officials,” a spokesperson told Mashable.

The incident demonstrates two things, besides the obvious hatefulness of ISIS jihadis: First, social media remains important — indeed, integral — to their cause, to such an extent that they’ll issue threats, and even a hashtag, on the site itself. (That hashtag, per Vocativ, translates as #Attacking_Twitter_Employees.) Second, Twitter does have the technical ability, and the resources, to respond to serious threats. It just seems to deploy that response more quickly when its own employees are involved.

Not all threats are created equal, of course. Both Twitter and law enforcement understand that, even if beleaguered users of the platform sometimes don’t: There’s a substantial difference between a vague, trolling taunt from an anonymous stranger (“I’m going to kill you”) and a specific, credible threat from an organization known for killing people. (“Every Twitter employee in San Francisco in the United States should bear in mind and watch over himself because on his doorstep there might be a lone wolf assassin waiting,” read one ISIS tweet.) Twitter has long said it prioritizes the latter type of threat, and it even includes specific fields for it on the site’s abuse-reporting form.

But users have long complained that the site takes too long to deal with both kinds of threats: not just the “merely” harassing ones, but the credible ones as well.

Several months ago I spoke to a woman who received explicit, graphic rape and death threats through Twitter, some of which included her home address. But even after she reported them to the platform, Twitter took days to “verify” her complaints and further time to actually disable the threatening accounts. That meshes with complaints from users like Jaclyn Munson, who wrote that she “slept with the lights on” after receiving Twitter threats that the platform took days to act on.

Initially, neither woman approached local police, some of whom — unfortunately — have acquired a reputation for dismissing online threats. If they had, they might have encountered a response like the one writer Amanda Hess received last year: After reporting threats from a man who “did 12 years for ‘manslaughter’” for killing “a woman just like you,” Hess was asked to explain to a confused police officer what Twitter was.


A screenshot from a recent Twitter conversation; language redacted. (Twitter)

Those scenes of helplessness, of fear, of confused abandonment by any and all authority figures, contrast sharply with Twitter’s response in this case: accounts suspended immediately, an investigation launched, “relevant law enforcement officials” contacted by Twitter itself. Of course, Twitter is acting here as both a social network and a corporate entity, so some differences can be expected. Twitter probably doesn’t have to work as hard to verify the threats, for instance, since they’re directed right at the people who typically do that verifying.

Still, the incident proves that Twitter’s support team can act quickly, and decisively, on serious abuse reports — when it wants to. It just doesn’t always seem to want to.