Singer Taylor Swift arrives at the Vanity Fair Oscar Party in Beverly Hills, Calif., on Feb. 28. (Danny Moloshok/Reuters)

“Taylor Swift and the Case of the Disappearing Reptiles” is not a Nancy Drew knockoff I would choose to read — but since the Internet seems enthralled, let’s do a little bit of sleuthing.

In the aftermath of the Kimye/T-Swift “Famous” feud, you’ll recall, critics flooded Swift’s Instagram with emoji snakes. Within a week, the snakes had all inexplicably disappeared, and some mysterious force field seemed to prevent new infestations on Swift’s page.

Outrage was immediate: Clearly Instagram was complicit in this, critics said. Swift had gotten some sort of special treatment. She’d been endowed with censorship powers by the despots at Instagram.

Several stories published over the weekend even claimed that Instagram had developed some sort of “secret weapon” just for Swift. The site gave her a “unique ‘tool’” to help her “shape” her image. It sounds exciting — even malicious! — until you realize the far more mundane truth: The all-powerful apparatus described in these stories works quite a lot like … Instagram’s new comment filter.

To be clear, we don’t know for sure whether that filter explains the missing snakes, or whether it explains them entirely. Instagram has not seen fit to comment on the circumstances surrounding their disappearance. But the clues point to an automated moderation filter, and Instagram happens to have just soft-launched exactly such a feature. It is not, for the record, “secret” or “unique,” and it doesn’t work much like a weapon.

On July 6, a reporter for the site TechCrunch noticed a new, unannounced option in the settings for Instagram business accounts — in other words, accounts linked to a page on Facebook. The option, when switched on, would “block comments with words or phrases often reported as offensive from appearing on your posts.” There’s no explanation of where that blacklist comes from, but — if Instagram’s blacklisted hashtags or Facebook’s Page Moderation tools are any indication — one presumes it is dynamic, responsive to new trends and highly intolerant of anything even remotely pornographic.
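In principle, a filter like the one described is just a blacklist check: hold back any comment containing a flagged word or emoji. The sketch below illustrates the idea; the blacklist contents and function names are hypothetical, not Instagram’s actual implementation.

```python
# Hypothetical sketch of a blacklist-based comment filter.
# The terms and names here are illustrative, not Instagram's real list or API.

BLACKLIST = {"snake", "🐍"}  # words/emoji "often reported as offensive"

def is_blocked(comment: str) -> bool:
    """Return True if any blacklisted term appears in the comment."""
    lowered = comment.lower()
    return any(term in lowered for term in BLACKLIST)

print(is_blocked("nice song 🐍🐍🐍"))  # True — emoji match
print(is_blocked("love this song"))    # False — nothing flagged
```

A real system would presumably update the list dynamically and run the check before a comment is ever displayed, which is why blocked comments simply never appear rather than being visibly deleted.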

Giving businesses the ability to moderate comments that appear on their pages is not a particularly novel or controversial idea. Facebook lets page administrators not only block a list of words that they themselves select, but also set and modify a “profanity filter.” Many major news websites deploy similar automation tools in their comments sections. On Instagram, companies have resorted to third-party cleanup services, like Iconosquare and Smart Moderation. (It’s really hard to sell high-end lipstick, after all, if your lipstick ad is framed with racism.)

There are still some lingering questions here, of course, just as there are of moderation filters on any major social platform. Who is choosing the blacklisted words? How does that person or system define “offensive”? And will this tool eventually roll out to all users, where it could be used to stifle or silence speech that someone simply finds unpleasant?

Most importantly, perhaps, why does Instagram bother, given that the most determined users always find their way around? Just look at @TaylorSwift: Her haters, not dissuaded, have simply started to spell out the word “sssnnnaaakkkeee” now.
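The “sssnnnaaakkkeee” workaround exposes the weakness of naive substring matching: stretch the letters and the blacklisted word no longer appears in the text. A filter can fight back by normalizing repeated characters before checking, as in this sketch (again a hypothetical illustration, not Instagram’s actual approach):

```python
# Hypothetical sketch: why stretched spellings evade a naive filter,
# and how collapsing repeated characters can catch them anyway.
import re

BLACKLIST = {"snake"}

def naive_blocked(comment: str) -> bool:
    """Plain substring check — misses stretched spellings."""
    return any(term in comment.lower() for term in BLACKLIST)

def collapse_repeats(text: str) -> str:
    """Collapse runs of a repeated character: 'sssnnnaaakkkeee' -> 'snake'."""
    return re.sub(r"(.)\1+", r"\1", text.lower())

def normalized_blocked(comment: str) -> bool:
    """Check the blacklist against the collapsed form of the comment."""
    return any(term in collapse_repeats(comment) for term in BLACKLIST)

print(naive_blocked("sssnnnaaakkkeee"))       # False — slips past
print(normalized_blocked("sssnnnaaakkkeee"))  # True — caught after normalizing
```

Of course, normalization also has false positives (it would flag legitimate words containing “snake” stretched or not), which hints at why moderation at scale is a perpetual arms race rather than a solved problem.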
