Social media sites such as Twitter and YouTube would be required to report videos and other content posted by suspected terrorists to federal authorities under legislation approved this past week by the Senate Intelligence Committee.

The measure, contained in the 2016 intelligence authorization, which still has to be voted on by the full Senate, is an effort to help intelligence and law enforcement officials detect threats from the Islamic State and other terrorist groups.

It would not require companies to monitor their sites if they do not already do so, said a committee aide, who requested anonymity because the bill has not yet been filed. The measure applies to “electronic communication service providers,” which includes e-mail services such as Google and Yahoo.

Companies such as Twitter have recently stepped up efforts to remove terrorist content in response to growing concerns that they have not done enough to stem the propaganda. Twitter removed 10,000 accounts over a two-day period in April.

Although officials are generally pleased to see such accounts taken down, they also worry that threats might go unnoticed.

“In our discussions with parts of the executive branch, they said there have been cases where there have been posts of one sort or another taken down” that might have been useful to know about, the aide said.

The bill, passed in a closed session Wednesday, is modeled after a federal law — the 2008 Protect Our Children Act — that requires online firms to report images of child pornography and to provide information identifying who uploaded the images to the National Center for Missing and Exploited Children. The center then forwards the information to the FBI or appropriate law enforcement agency.

Google, Facebook and Twitter declined to comment on the measure, but industry officials privately called it a bad idea. “Asking Internet companies to proactively monitor people’s posts and messages would be the same thing as asking your telephone company to monitor and log all your phone calls, text messages, all your Internet browsing, all the sites you visit,” said one official, who spoke on the condition of anonymity because the provision is not yet public. “Considering the vast majority of people on these sites are not doing anything wrong, this type of monitoring would be considered by many to be an invasion of privacy. It would also be technically difficult.”

National security experts, meanwhile, said the measure makes sense.

“In a core set of cases, when companies are made aware [of terrorist content], there is real value to security, and potentially even to the companies’ reputation,” said Michael Leiter, a former director of the National Counterterrorism Center, now an executive vice president with Leidos, a national security contractor. “Rules like this always implicate complex First Amendment and corporate interests. But ultimately this is a higher-tech version of ‘See something, say something.’ And in that sense, I believe that there is value.”

Under the bill, which is expected to be filed as early as Monday, any online company that “obtains actual knowledge of any terrorist activity . . . shall provide to the appropriate authorities the facts or circumstances of the alleged terrorist activity.”

The terrorist activity could be a post, a tweet, an account, a video or a communication with someone, the aide said. The attorney general would designate the authority to be notified.

Social media companies generally do not monitor their sites for terrorist or other objectionable content, with the exception of child pornography. Rather, they rely on users to flag material that appears objectionable or violates the companies’ terms of service and abuse policies.

The bill does not require companies to remove content. But the major social media sites have policies that bar threats of terrorism or the promotion of terrorism and say they remove such content when they are made aware of it. Some, including Facebook, explicitly bar known terrorist groups from posting any content on their platforms.

The provision was prompted in large part by the case of Lee Rigby, a British soldier who was stabbed and hacked to death in 2013 on the streets of London by two men wielding knives and a cleaver. One of the men, it later emerged, had exchanged messages on Facebook with a foreign-based extremist linked to al-Qaeda, writing: “Let’s kill a soldier.”

Facebook apparently was not made aware of the post before the attack.

The committee aide said the measure presents “a pretty low burden” for companies, which would have to report only activity that has been flagged to them. “We have heard from federal law enforcement that it would be useful to have this kind of information,” he said.

But civil liberties advocates said they fear that if the legislation becomes law, it would chip away at the First Amendment.

“The intelligence bill would turn communications service providers into the speech police, while providing them little guidance about what speech they must report to the police,” said Gregory Nojeim, senior counsel for the Center for Democracy and Technology. “If it becomes law, their natural tendency will be to err on the side of reporting anything that might be characterized as ‘terrorist activity’ even if it is not. And their duty to report will chill speech on the Internet that relates to terrorism.”