A flag of the Islamic State. An effort to remove terrorist imagery from the Internet has been proposed by a U.S. nonprofit, but tech companies are wary it could expunge journalistic images. (JM Lopez/AFP/Getty Images)

President Obama suggested that extremist information spread online inspired a Florida man to commit the deadliest mass shooting in U.S. history at a gay nightclub in Orlando last week — the latest in a long line of terrorist attacks in which Islamist propaganda played some role in radicalizing the assailant.

Now a Dartmouth College researcher and a nonprofit group say they have created a technology that can help Internet companies instantly detect images and videos generated by terrorists and their supporters and remove them from their platforms.

It is, they say, a way to cleanse popular online sites of gory videos and propaganda from the Islamic State (also known as ISIS and Daesh) that can incite and inspire people to commit acts of violence.

“If you could search out the beheading videos or the picture of the ISIS fighter all in black carrying the Daesh flag across the sands of Syria, if you could do it with video and audio, you’d be doing something big,” said Mark Wallace, chief executive of the Counter Extremism Project (CEP), a nonpartisan policy group. “I believe it’s a game-changer.”

The White House has signaled its support. “We welcome the launch of initiatives such as the Counter Extremism Project’s National Office for Reporting Extremism (NORex) that enables companies to address terrorist activity on their platforms and better respond to the threat posed by terrorists’ activities online,” said Lisa Monaco, Obama’s assistant for homeland security and counterterrorism.

But a number of major social-media companies are wary of the idea. They say that there is no consensus in the United States or globally on what constitutes a terrorist image, and that they might end up expunging material posted by researchers or media organizations. And, they say, once a database is created, governments around the world will place additional data requests on them — and some countries will probably demand the removal of legitimate political content under the guise of fighting terrorism.

“As soon as governments heard there was a central database of terrorism images, they would all come knocking,” said one tech industry official, who, like other representatives in the field, spoke on the condition of anonymity because the firms are privately discussing how to move forward. “People aren’t aware of the demands that are placed on tech companies from governments like China, Russia, Saudi Arabia and Turkey.”

Google and Twitter, in particular, have expressed doubts about the efficacy of such a project, industry officials said. Social-media firms now remove terrorist content when they are alerted to it, as spelled out in their terms of service. But they do not generally scour their platforms for it.

Some firms also fear that if they collaborated with a third party such as CEP, the organization might try to influence the companies’ guidelines regarding extremist content. CEP has briefed senior officials at the White House and the Department of Homeland Security on the issue.

Monika Bickert, Facebook’s head of global content policy, said, “We are always interested in finding ways to better serve our community, and we are exploring with others in industry ways we can collaboratively work to remove content that violates our policies against terrorism.”

Wallace is working with Hany Farid, a Dartmouth computer science professor and senior adviser to CEP, who developed the technology. It works by creating a distinct digital signature or “hash” for each image, video or audio track. The idea is to create a database of hashed content that Internet firms can use in automated fashion to vet images uploaded to their platforms. If there’s a match, the company can determine whether it violates its terms of service and should be taken down.
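In broad strokes, the workflow Farid describes would look something like the sketch below: compute a compact fingerprint for each known item of extremist content, pool those fingerprints in a shared database, and check every new upload’s fingerprint against it. The Python snippet is only an illustration, assuming a simple “difference hash” and a Hamming-distance match; the actual signatures, thresholds and file names in Farid’s system are not public, so everything here is a stand-in.

```python
# Minimal sketch of hash-and-match flagging -- NOT the actual CEP/PhotoDNA algorithm.
# Assumes Pillow is installed; the 64-bit "difference hash", the example file
# names and the Hamming threshold are illustrative assumptions only.
from PIL import Image


def dhash(path: str, size: int = 8) -> int:
    """Compute a simple 64-bit perceptual hash of an image file."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical shared database of fingerprints of already-identified images.
known_hashes = {dhash("known_propaganda_1.jpg"), dhash("known_propaganda_2.jpg")}


def matches_known(upload_path: str, threshold: int = 5) -> bool:
    """Flag an upload whose hash is within `threshold` bits of any known hash."""
    h = dhash(upload_path)
    return any(hamming(h, k) <= threshold for k in known_hashes)


if matches_known("new_upload.jpg"):
    print("Match found; route to review under the platform's terms of service.")
```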

Farid said he can envision a system of scoring content according to type or degree of violence, so that a company has discretion in what to pull. But once a firm decides an image or video is objectionable, if it is encountered again, it can automatically be removed, he said.
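A scoring layer of the kind Farid envisions could sit on top of such a match: each database entry might carry a severity rating, and each company could set its own cutoffs for automatic removal versus human review. The field names, scale and thresholds below are hypothetical, sketched only to illustrate the discretion he describes.

```python
# Hypothetical severity scoring layered over hash matching; the 0-100 scale,
# field names and thresholds are illustrative assumptions, not CEP's design.
from dataclasses import dataclass


@dataclass
class HashedEntry:
    signature: int   # perceptual hash of the known item
    severity: int    # e.g. 0 = contextual/news use, 100 = graphic violence
    label: str       # human-readable description for reviewers


database = [
    HashedEntry(signature=0x9F3A5C7E12B4D608, severity=95, label="beheading video still"),
    HashedEntry(signature=0x1C2E4A6B8D0F1357, severity=40, label="flag imagery"),
]


def action_for(entry: HashedEntry, auto_remove_at: int = 80, review_at: int = 30) -> str:
    """Each platform picks its own cutoffs: auto-remove, human review, or allow."""
    if entry.severity >= auto_remove_at:
        return "remove automatically"
    if entry.severity >= review_at:
        return "queue for human review"
    return "allow"


for entry in database:
    print(entry.label, "->", action_for(entry))
```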

Wallace said he thinks that the companies will come around. He cites their cooperation with the National Center for Missing and Exploited Children (NCMEC, pronounced “nickmick”) in promptly removing child-pornography images from their platforms.

Wallace envisions NORex working much as NCMEC does, only it would house hashed terrorist images and videos instead of child-porn content. It would take advantage of the 1,000 or so images that CEP already has hashed, he said.

Farid also developed the child-porn-detection technology, which is called PhotoDNA, with Microsoft and in partnership with NCMEC. Firms have been using it for about five years. PhotoDNA, which is owned by Microsoft, “sits at the pipe of a Facebook or Microsoft or Google and analyzes every image” that’s uploaded, Farid said.

The technology does not define extremism. The companies and the group running the database would, he said. “Those are where the hard questions are going to be asked,” Farid said. “What constitutes and does not constitute hate speech and calls to violence? And what is dangerous, and what is simply dissent?”

Farid acknowledged that the technology is a “double-edged sword.” Just as it can be used to find child-abuse images, it can be used to detect videos of protesters, for instance, he said.

To allay such fears, he said, the licensing of the technology can be controlled to “narrowly define” its permitted use. NCMEC, for example, restricts PhotoDNA’s use to detecting child exploitation.

But tech firms see a number of key distinctions between child pornography and terrorism cases. For one thing, possessing or sharing child-porn images is a federal crime, regardless of the reason for doing so. And by law any Internet company that becomes aware of such images must report them to NCMEC.

Federal law also gives clear guidance on what constitutes child pornography. The images must be of children apparently younger than 18 engaged in sexually explicit conduct, which is defined at length in the law and has been further interpreted by courts.

Meanwhile, with terrorism content, “not only is it not illegal, it’s not defined,” a second tech-industry official said. A beheading video might seem like a clear case. But what about the Islamic State’s black flag?

The gruesome video of the Islamic State burning a Jordanian pilot to death last year was removed by Facebook, which considered it a violation of its terms of service. But it also galvanized the Jordanian people against the terrorist group in a way no other event had.

“Removing content is not cost-free for society — many people are trying to use speech about terror groups to raise awareness about atrocities and mobilize people to care and take action to stand up against terrorism,” the second industry official said.

Wallace recognizes that there will be disagreement about what constitutes terrorist speech. “My attitude is let’s have that discussion,” he said.

There is also debate about just how effective content removal is in disrupting radicalization and recruitment. Seamus Hughes, deputy director of George Washington University’s program on extremism, said in most cases people are influenced toward extremism by “real-life relationships” with friends and family. In his view, the Internet is rarely what starts the radicalization process.

“It’s really more of an accelerant than it is a starter,” he said.

But, he said, taking down content can make an Islamic State recruiter’s job harder. “If they have to spend their time figuring out how to keep their messages up and repost links, then they can’t spend time creating more and interesting new videos,” he said.