
In announcing plans this week to hire many more human moderators to flag disturbing and extremist content, YouTube has become the latest Silicon Valley giant to acknowledge that software alone won’t solve many of the problems plaguing the industry.

YouTube, which is owned by Google, said in a blog post Monday night that it would significantly increase the number of people monitoring such content across the company next year. By 2018, Google will employ roughly 10,000 content moderators and other professionals tasked with addressing violations of its content policies. The search giant would not disclose the number of employees currently performing such jobs, but a person familiar with the company’s operations said the hiring represents a 25 percent increase from where Google is now.

The move follows a BuzzFeed report last month that surfaced YouTube videos showing children in disturbing and potentially exploitative situations, including being duct-taped to walls, mock-abducted and forced into washing machines. Google said it has removed 150,000 violent extremist videos since June. The company has also removed hundreds of thousands more videos showing content that exploited or endangered children, the person familiar with the company’s thinking said. Some disturbing content appeared on YouTube Kids, the company’s app marketed toward children.

“I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” said YouTube chief executive Susan Wojcicki in the post. “Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”

Google’s decision to police publishers more aggressively comes at a time when Silicon Valley companies are wrestling with how to get a handle on unwanted content across a host of areas, including violent videos appearing on Facebook Live, hate speech, terrorism and Russian disinformation campaigns attempting to distort political debate.

Last month, amid Capitol Hill hearings on Russian meddling, Facebook said it would hire an additional 10,000 security professionals to tackle political disinformation and other security threats. Earlier this year, the company said it would hire 3,000 more content moderators on top of the 4,500 it already had.

Google says that these content flaggers, some of them low-level contractors and others subject-matter experts, will work to train computer algorithms to identify and thwart unwanted content. Ninety-eight percent of the violent extremist videos removed are now flagged by Google’s software, up from 76 percent in August, the company said in its blog post. The company’s advances in data-mining software, known as machine learning, now enable Google to take down nearly 70 percent of such content within eight hours of upload.

Google and its Silicon Valley counterparts have said they hope to train software to do most of the policing work. But the hiring spree this year demonstrates that a lot of undesirable content is slipping through the cracks and that more humans are necessary, said Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University.

“Companies are recognizing that they are going to have to dig into their pocketbooks and pay more people if they are going to get their arms around this problem,” Barrett said. He stressed that software would be a major part of any solution. “Given the volume of material on these sites, it’s almost impossible to have the response be entirely human,” he added.

Technology platforms are not legally required to police most content posted by third parties on their services, thanks to a two-decade-old law, Section 230 of the Communications Decency Act, which grants such intermediaries immunity from liability. Child pornography, however, has been treated as an exception to Section 230 and is an area that the companies aggressively police, particularly as they make forays into new services aimed at children.

Google’s rush to address extremist content and content that exploits children still leaves aside some of the thornier questions about how judgment calls will be made over other forms of unwanted content, especially political disinformation, Barrett said. In October, YouTube booted the Russian government-backed news site RT, or Russia Today, from its premium advertising program, and executive chairman Eric Schmidt said that the company was working to de-rank the site in search results. But RT remains a major presence on YouTube, with 2.3 million subscribers. Google later told a Russian regulator that the company would not change its ranking algorithms.

“Child abuse is one topic where they feel very comfortable acting aggressively to police their terrain in a focused way,” Barrett said. “My sense is that they are going to have to get just as focused on Russia as they are on child abuse. If they don’t, we will see a repeat in 2018 of what took place in 2016.”