Facebook has banned users from posting computer-generated, highly manipulated videos, known as deepfakes, seeking to stop the spread of a novel form of misinformation months before the 2020 presidential election.

But the policy — first reported by The Washington Post, and confirmed by Facebook late Monday — does not prohibit all doctored videos. The tech giant’s new guidelines do not appear to address a deceptively edited clip of House Speaker Nancy Pelosi that went viral on the social network last year, prompting criticism from Democratic leaders and digital experts.

“While these videos are still rare on the Internet, they present a significant challenge for our industry and society as their use increases,” Monika Bickert, the company’s vice president for global policy management, wrote in a blog post.

The changes come as Bickert prepares to testify at a congressional hearing Wednesday on “manipulation and deception in the digital age.” The inquiry marks the latest effort by House lawmakers to probe Facebook’s digital defenses four years after Russian agents weaponized the site to stoke social unrest during the 2016 race.

The new Facebook policy would ban videos that are “edited or synthesized” by technologies like artificial intelligence in a way that average users would not easily spot, the company said, including attempts to make the subject of a video appear to say words they never said.

Facebook, however, will not ban videos manipulated for the purposes of parody or satire. And it signaled that other, lesser forms of manipulation would not be outlawed either, though they could be fact-checked and limited in their spread on the site.

Such a rule would still allow the infamously altered “drunk” video of Pelosi that was viewed millions of times on Facebook last year, in which her speech was slowed and distorted to make her sound inebriated. The effect was accomplished with relatively simple video-editing software. To contrast them with more sophisticated computer-generated “deepfakes,” disinformation researchers have referred to these kinds of videos as “cheapfakes” or “shallowfakes.”

Drew Hammill, a spokesman for Pelosi, criticized Facebook for its approach. The company “wants you to think the problem is video-editing technology,” he said in a statement, “but the real problem is Facebook’s refusal to stop the spread of disinformation.”

Facebook acknowledged last year that its fact-checkers had deemed the doctored Pelosi video “false,” but the tech giant declined to delete it because, as a spokeswoman said, “we don’t have a policy that stipulates that the information you post on Facebook must be true.”

Nor does the policy seem to restrain other, simpler forms of video deception, such as mislabeling footage, splicing dialogue or taking quotes out of context — as in a video last week in which a lengthy response that presidential candidate Joe Biden delivered to an audience in New Hampshire was heavily trimmed to make him sound racist.

Those omissions prompted sharp criticism from Biden’s campaign on Tuesday. Bill Russo, the former vice president’s 2020 spokesman, said Facebook’s handling of deepfakes is inadequate and offers only the “illusion of progress.”

“Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation,” he said, “but rather how professionally that disinformation is created.”

Hany Farid, a digital forensics expert at the University of California at Berkeley whose lab has worked with Facebook on deepfakes, similarly criticized the company’s new approach, calling it too “narrowly construed.”

“These misleading videos were created using low-tech methods and did not rely on AI-based techniques, but were at least as misleading as a deep-fake video of a leader purporting to say something that they didn’t,” Farid said in an email. “Why focus only on deep-fakes and not the broader issue of intentionally misleading videos?”

The policy does appear to address deepfake videos in which women’s faces are superimposed into pornography without their consent, a tactic seen in a growing number of online harassment campaigns. Such videos made up roughly 96 percent of all deepfake videos found last year, according to the research firm Deeptrace Labs.

Facebook spokesman Andy Stone said early Tuesday that the manipulated-media policy would not apply to political advertising, noting that the company has declined to send political ads to third-party fact-checkers or label ads as false. On Tuesday afternoon, however, Stone said he’d misspoken and that Facebook would remove political ads if the company determined they included highly manipulated video.

The Democratic National Committee’s chief security officer, Bob Lord, said in a statement that Facebook’s policy addresses only a narrow segment of deceptive tactics online. “This change comes up short of meaningful progress and only affects one small area of the larger disinformation problem,” he said.

Facebook and other tech firms last year sponsored a “deepfake detection challenge,” offering prize money to researchers who could deliver the most refined techniques to automatically detect manipulated videos. A set of real and manipulated videos was released to researchers last month, and the challenge is scheduled to end in March.

Siwei Lyu, the director of a computer-vision lab at the State University of New York at Albany and a member of the Deepfake Detection Challenge’s advisory group, applauded Facebook’s attempt to clearly pinpoint altered media, saying that “the line drawn on user-discernible manipulated videos is operable and useful for implementing this policy.”

The language that Facebook is using to delineate its rules resembles a policy raised at a June 2019 meeting in San Francisco convened by the Carnegie Endowment for International Peace to discuss how social media platforms should deal with manipulated media ahead of the 2020 election, according to a person who was present and spoke on the condition of anonymity to discuss a private meeting. The person said there was significant debate about what degree of editing is required before something is declared misleading and whether social media companies should adopt more sweeping rules against deceptive content.

The policy’s provisions for videos “manipulated for purposes of parody or satire” could lead to thorny debates over whether a video labeled as “deceptive” was merely intended to lampoon for dramatic effect.

It’s unclear, for instance, whether the policy would ban a deepfake like the one created last year in the aftermath of the Pelosi video uproar in which Facebook chief Mark Zuckerberg appeared to gleefully celebrate his control of user data. The creator told The Post last year that the video was a form of satire and “a cautionary tale of technology and democracy.”