Facebook said Thursday it would expand its efforts to scan photos and videos uploaded to the social network for evidence that they've been manipulated, as lawmakers sound new alarms that foreign adversaries might try to spread misinformation through fake visual content.
In 17 countries, including the United States, Facebook said it has deployed its powerful algorithms to “identify potentially false” images and videos, then send those flagged posts to outside fact-checkers for further review. Facebook said it’s trying to stamp out content that has been doctored, taken out of context or accompanied by misleading text.
In one of the examples Facebook shared, fact-checkers in Mexico previously identified a “false photo” of a local politician whose face had been Photoshopped onto a U.S. green card, wrongly suggesting he is a U.S. citizen. In another, a news outlet in India debunked a photo that included a caption calling the country’s prime minister the “7th most corrupted” in the world — a claim attributed to the “BBC News Hub,” which isn’t part of the BBC.
Facebook’s announcement is one of several changes designed to stop the spread of misinformation on its site two years after the 2016 election, ahead of which Russian agents created and shared divisive political messages — including photos — that reached more than 100 million U.S. users. The company has hired more employees to review content, for example, while teaming up with researchers so that they can study the role of social media in national elections.
But photos and videos represent one of the toughest challenges facing Facebook and its tech peers, given that visual content can leave such lasting impressions on users. The site’s 2 billion monthly active users upload 350 million images every day, Facebook has said, and finding and combating manipulated images and videos is tougher than it is with plain text. Social media sites must also grapple with the rise of new disinformation techniques, such as “deepfakes” — videos that harness the power of machine learning and artificial intelligence to make a person appear to say or do something that never occurred.
Lawmakers such as Democratic Sen. Mark R. Warner (Va.) specifically warned about deepfakes at a hearing with Facebook and Twitter last week. On Thursday, a trio of House Democrats and Republicans — led by Rep. Adam B. Schiff (Calif.), the top Democrat on the Intelligence Committee — similarly raised new alarms. In a letter, they asked the Trump administration to deploy its own resources toward combating “hyper-realistic digital forgeries.”
“We are deeply concerned that deep fake technology could soon be deployed by malicious foreign actors,” they wrote in a letter to Daniel Coats, the director of national intelligence. They urged the DNI to report “confirmed or suspected use” of deepfakes by foreign powers.
The move follows an announcement from Facebook artificial-intelligence researchers this week that they had built a machine-learning system called Rosetta to recognize text in images. The system, they said, is now being used by teams inside Facebook and its photo-sharing app Instagram to “automatically identify content that violates our hate-speech policy on the platform in various languages.”
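In rough outline, a system like the one described works in two stages: first recognize any text embedded in an image, then screen that text against content policies. The sketch below is purely illustrative and is not Facebook's implementation; the function names, the placeholder term list, and the keyword-matching step are all assumptions (production systems use learned classifiers, and the OCR stage here is a stand-in).

```python
# Hypothetical two-stage pipeline loosely modeled on the approach
# described above: (1) extract text from an image, (2) screen the
# extracted text against a policy list. Illustration only.

BLOCKED_TERMS = {"example-slur", "example-threat"}  # placeholder policy list


def extract_text(image_bytes: bytes) -> str:
    """Stand-in for an OCR model; a real system would run text
    detection and recognition on the image here."""
    return image_bytes.decode("utf-8", errors="ignore")


def violates_policy(text: str) -> bool:
    # Normalize tokens and check for overlap with blocked terms.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKED_TERMS)


flagged = violates_policy(extract_text(b"meme text with example-slur inside"))
print(flagged)  # True for this contrived input
```

Keyword matching is used here only to keep the sketch short; the article's point is precisely that simple rules do not scale, which is why learned models are used for both stages.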
Facebook CEO Mark Zuckerberg told members of Congress in April that similar systems would be some of the most powerful ways the site could combat hate speech, fake news, discrimination and propaganda across the world. “Building AI tools is going to be the scalable way to identify and root out most of this harmful content,” he said.
The automated detection of deepfakes and similar computer-created forgeries has become something of a holy grail for many in Silicon Valley and beyond. The Defense Advanced Research Projects Agency, the military’s high-tech research arm, is leading programs designed to build forensic tools that can automatically spot clues among faked videos, including by assessing lighting, image artifacts or other inconsistencies.
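One classic forensic clue of the kind such tools look for is compression inconsistency: regions pasted into a JPEG often recompress differently from the rest of the frame. The sketch below shows error-level analysis (ELA), a well-known technique in this family; it is an assumption that any given forensic system uses it, and the demo image and function name are illustrative.

```python
# Error-level analysis (ELA) sketch using Pillow: recompress an image
# at a known JPEG quality and diff it against the original. Spliced or
# edited regions often show a different error level than untouched ones.
import io

from PIL import Image, ImageChops


def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    # Per-pixel absolute difference between original and recompressed.
    return ImageChops.difference(img.convert("RGB"), recompressed)


# Demo on a synthetic solid-color image; real use would load a photo.
img = Image.new("RGB", (64, 64), (120, 80, 200))
ela = error_level_analysis(img)
print(ela.getextrema())  # per-band (min, max) difference values
```

In practice an analyst inspects the ELA map visually or feeds it to a classifier; ELA alone is easily fooled, which is why the DARPA programs described above combine many signals, such as lighting and geometric inconsistencies.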