YouTube has started adding age restrictions and warnings to some videos depicting George Floyd’s killing in Minneapolis police custody that have been uploaded by news organizations, the latest wrinkle in the complicated content-moderation decisions facing social media companies.

The video streaming social media site prompts users to sign in to their Google account to confirm they are at least 18 years old before viewing the videos, posted by outlets including The Washington Post, the New York Times and NBC News. After confirming the user’s age, a new warning reads: “The following content has been identified by the YouTube community as inappropriate or offensive to some audiences.”

YouTube’s policy states that it age-restricts some content that doesn’t directly violate its rules but that “may not be appropriate for all audiences.” That includes videos with violence and disturbing imagery, nudity and harmful language, among other things.

Social media companies have faced years of criticism — and occasional praise — for how they moderate the content that users post to their sites. In some cases, violent content has slipped through and been posted in real time or after the fact, as in the case of the mosque shootings last year in Christchurch, New Zealand.

Those types of concerns have helped prompt social media companies including Facebook, YouTube and Twitter to re-examine their rules and, in some cases, institute stronger ones. They’ve also hired thousands of content moderators to better police their sites.

YouTube, in particular, has faced backlash for allowing what some labeled hate speech and discrimination on its site. In a crackdown last year, the company updated its policies to prohibit a broad array of such content.

Over the past two years, YouTube has taken steps to clarify what is and is not allowed, but it and other social media companies say they walk a fine line between restricting content and protecting free expression.

A warning was added to The Post’s video of George Floyd’s killing four days after it was posted. Warnings popped up on similar videos from other news organizations around the same time.

The Post, New York Times and NBC News did not immediately respond to a request for comment.

“With 500 hours of video being uploaded on YouTube every minute, we rely on a combination of technology and humans to review videos,” YouTube spokesperson Farshad Shadloo said in a statement. “Sometimes content doesn’t violate our policies, but may not be appropriate for all audiences.”

YouTube has a large workforce of mostly contract content moderators who review videos on the site, especially those that have been flagged by users. The company also uses machine learning technology to find videos that violate its policies.

Social media has been at the heart of the Black Lives Matter protests following George Floyd’s killing in Minneapolis. Four police officers have been charged in connection with his death.

Twitter took the unprecedented step of slapping a warning label on one of President Trump’s tweets last week, saying it violated the company’s policy against “glorifying violence.”

“These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen,” the president tweeted. “Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!”

Facebook faced significant backlash for leaving the same message up untouched on its site. And Snapchat said it would stop promoting Trump’s account on the photo-sharing app.