TikTok on Wednesday released a set of new, more detailed rules about the videos it permits and prohibits, seeking to respond to concerns that its policies to protect users failed to keep pace with its meteoric rise in popularity.

The updated guidelines spell out 10 categories of videos that aren’t allowed on the Chinese-owned social media app, including those that glorify terrorism, show illegal drug use, feature violent, graphic or dangerous content, or seek to peddle misinformation designed to deceive the public in an election, the company said.

TikTok previously maintained a broader set of community standards that banned a wide range of short-form videos that could cause users harm. But the app’s swift growth spawned questions — in the U.S. Congress and elsewhere — about how exactly the company enforced those vague rules, particularly in the United States, where free speech values differ greatly from the censorship laws that govern TikTok’s parent company, ByteDance, in China.

Adding to its woes, former employees previously told The Washington Post that they often felt pressured at TikTok to take a more conservative view about which videos to promote and which to remove from the app. While TikTok said it has retired those older rules and never has removed videos for political reasons, the company still faced criticism from users — and federal lawmakers — about a lack of transparency.

“We’re committed to being transparent about the ways we maintain the app experience users expect while providing the protections they deserve,” Lavanya Mahendran and Nasser Alsherif, who lead global trust and safety at TikTok, wrote in a blog post.

Company executives said that the new rules would be applied globally but that they would also “localize and implement” them “in accordance with local laws and norms,” raising questions about how they ultimately might be enforced around the world.

With more than 1.5 billion downloads, TikTok is racing to keep up with its fast-growing, younger-leaning fans, who post short, sometimes silly, videos of themselves and their friends. Major social media platforms including Facebook, Google-owned YouTube and Twitter have long maintained lengthy policies explaining what they allow and block, spurred on by a similar need to protect their brands — and satisfy regulators’ concerns.

But TikTok faces unique challenges because of its Chinese ties, which have fueled suspicions that it manages content in a way that satisfies Beijing’s censors. Others in Congress have questioned the privacy and security of the app itself, a concern shared by the U.S. military, where the Army and Navy have banned it from government-issued devices and the Pentagon has directed that its approximately 23,000 employees uninstall it from their phones.

In a bid to address those criticisms, TikTok has stressed its independence. In October, it retained two former members of Congress to help create a special committee of “outside experts” to “advise on and review content moderation policies covering a wide range of topics, including child safety, hate speech, misinformation, bullying, and other potential issues,” it said in a blog post.

The rules unveiled Wednesday showcase the wide range of thorny issues TikTok must adjudicate.

For example, TikTok under its old rules prohibited terrorist organizations from using the app and barred users from posting videos supporting such groups. But its application of that policy generated controversy last year, after a teenage user — whose satirical, political videos once featured the face of al-Qaeda leader Osama bin Laden — discovered she had been locked out of her account due to policy violations.

TikTok’s new rules spell out its thinking more clearly: It bars support for a wide array of terrorist organizations and hate groups, including their logos and portraits, but exempts educational and satirical depictions, the company said.

The app also bans content depicting “drinking liquids or eating substances not meant for consumption” or that could mislead “community members about elections or other civic processes.” Content that depicts sexual arousal or nudity is also banned, but the company makes exceptions for educational or artistic purposes, such as videos “discussing or showing mastectomy scars.”

Slurs are prohibited, though TikTok said it “may give exceptions” when they’re used self-referentially or in lyrics. But music that promotes “hateful ideologies” is banned, according to the new rules.

The policies mark a dramatic shift from the content-moderation guidelines obtained last year by the Guardian and the German blog Netzpolitik, under which TikTok moderators were urged to suppress videos touching on political topics sensitive to the Chinese government or involving people with disabilities. The company said last year that those guidelines were an outdated and “blunt approach to minimizing conflict” on the app and pledged to “further increase transparency” around its rules.

Former employees who spoke with The Post last year also said that guidelines were regularly set by company managers in Beijing, limiting the content they could approve for viewers in the United States. TikTok’s U.S. general manager, Vanessa Pappas, said in November that the company was moving away from a “one-size-fits-all” approach and would give American managers more control over which content is restricted or approved.