
Everyone knows that Facebook has rules. But do you know how they’re enforced?

Facebook’s Community Standards lay out restrictions on what kinds of content are allowed on the platform, covering threats, harassment, nudity, violence and more. But how Facebook actually applies these policies in individual cases isn’t always easy to understand.

Companies such as Facebook rely on a large pool of overseas workers to make split-second decisions about reports of objectionable content on their platforms. It’s a process that Adrian Chen looked at in great depth in 2014, when he traveled to the Philippines to see firsthand how U.S. social media content is moderated.

And while we’re learning more and more about how much labor it takes to moderate Facebook, Twitter and similar platforms, the rules governing what content is allowed and banned on these sites remain vague, sometimes intentionally so.

Reddit’s chief executive recently called this approach to rulemaking “specifically vague.” It outlines the contours of what is and isn’t allowed, while leaving room for the companies to apply the rules at their own discretion. Facebook’s rules have been tweaked and expanded over the years, but they retain that same vagueness.

Below, we’ve provided several real examples of images, videos and posts that Facebook has encountered recently. Your job is to guess how the company responded to them.

Feel free to read over Facebook’s policies beforehand if you’d like, but please don’t cheat and look up these examples before answering, because what would even be the point?

Good luck!
