Facebook is spearheading a competition to find new ways to identify computer-altered videos known as “deepfakes.” But some artificial-intelligence specialists say the strategy might backfire.

Those experts say the contest will probably hasten the already accelerating arms race between the malicious actors using AI to create increasingly realistic faked videos, and the technology companies racing to detect them.

“Any algorithm used to identify deepfakes could also be used to make deepfakes better,” says Rachel Thomas, a co-founder of machine-learning lab Fast.ai.
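Thomas's point can be illustrated with a toy adversarial loop — a deliberately simplified sketch, not any real deepfake pipeline. Here an invented one-dimensional "detector" flags samples above a threshold, and a "generator" uses the detector's own signal to nudge its output until it is no longer flagged. Real systems are deep neural networks, but the feedback dynamic is the same.

```python
# Toy illustration of the detector/generator arms race Thomas describes.
# All names and numbers are invented for this sketch; real deepfake
# generators and detectors are neural networks, not 1-D thresholds.

def detector_score(x, threshold=0.5):
    """Return how 'fake' a sample looks: its distance above the threshold."""
    return max(0.0, x - threshold)

def improve_fake(x, step=0.1):
    """Generator update: repeatedly query the detector and adjust the
    sample until the detector stops flagging it."""
    while detector_score(x) > 0:
        x -= step
    return x

fake = 1.0                      # starts out easily flagged
print(detector_score(fake))     # 0.5 -- detected as fake
fake = improve_fake(fake)
print(detector_score(fake))     # 0.0 -- now evades this detector
```

Any detector published by the competition could, in principle, be plugged into a loop like this one.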

Already, top artificial-intelligence researchers across the country have been racing to defuse computer-generated fake videos as fears grow that they could undermine candidates and mislead voters in the lead-up to the 2020 presidential election. Many fear these videos could be deployed much as fake news stories and deceptive Facebook groups were to influence the 2016 election.

While people have altered videos for as long as the technology has existed, AI software developed by Google has increased the accessibility and sophistication of deepfakes. And the tools keep advancing and growing in popularity. Last week, a Chinese app called Zao became the most popular download in China, allowing users to virtually graft their faces onto videos of actors from scenes in movies and television shows.

Detecting deepfakes is becoming significantly more difficult as the technology improves. Detection often comes down to gestures as subtle as a chin movement or a blink of an eye.
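One concrete example of such a subtle cue: early deepfake detectors checked blink frequency, since some generated faces blinked far less often than real people do. The sketch below assumes a per-frame "eye openness" value has already been produced by some upstream face-tracking step (not shown here); the thresholds and rates are illustrative, not taken from any published detector.

```python
# Illustrative blink-rate check. Assumes per-frame eye-openness values
# (0.0 = fully closed, 1.0 = fully open) come from an upstream face
# tracker, which is not implemented here.

def count_blinks(eye_openness, closed_below=0.2):
    """Count open-to-closed transitions as blinks."""
    blinks = 0
    was_closed = False
    for value in eye_openness:
        is_closed = value < closed_below
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag clips whose subject blinks implausibly rarely."""
    seconds = len(eye_openness) / fps
    rate = count_blinks(eye_openness) / seconds * 60
    return rate < min_blinks_per_minute

# Synthetic two-second clips: one subject blinks twice, one never blinks.
blinking = [1.0] * 20 + [0.1] * 3 + [1.0] * 20 + [0.1] * 3 + [1.0] * 14
frozen = [1.0] * 60
print(looks_suspicious(blinking))  # False
print(looks_suspicious(frozen))    # True
```

As generators learned to reproduce natural blinking, cues like this one stopped working — which is exactly the arms race the researchers describe.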

Facebook’s competition, called the Deepfake Detection Challenge, is a partnership between Facebook, the technology industry consortium Partnership on AI, Microsoft and experts from seven academic institutions. Events will begin in October and run until March. Facebook said that it has dedicated $10 million to fund the competition, which will include a not-yet-announced number of grants and awards.

Facebook said it will release the data set for the challenge later this year. The company is commissioning a “realistic data set” made up of videos using paid actors, as well as altered versions of those videos. Competitors will use those data sets to develop detection tools. Facebook will enter the challenge, too, but will not accept any monetary prizes.
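A guess at how a challenge like this might score submissions (the article does not describe Facebook's scoring rules): entrants typically output a probability that each clip is fake, and a metric such as log loss — a standard choice for binary-classification contests — compares those probabilities against the hidden real/altered labels.

```python
import math

# Hypothetical scoring routine. Log loss is a common metric for
# contests of this kind, not necessarily the one Facebook used.

def log_loss(labels, probs, eps=1e-15):
    """Average negative log-likelihood; lower scores are better."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)   # avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

labels = [1, 0, 1, 0]                 # 1 = altered clip, 0 = genuine
confident = [0.9, 0.1, 0.8, 0.2]      # mostly-correct, confident guesses
hedged = [0.5, 0.5, 0.5, 0.5]         # no-information baseline
print(log_loss(labels, confident) < log_loss(labels, hedged))  # True
```

Under a metric like this, a model that confidently separates real clips from altered ones scores strictly better than one that hedges on everything.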

Chief technology officer Mike Schroepfer said in a blog post that Facebook’s hope is that the competition will help the company accelerate its progress and create more open-source tools as it battles a “constantly evolving problem.”

Using a data set made up of videos of actors might not adequately train algorithms to detect deepfakes that depict humans in real situations, Thomas warned.

The competition's outcome could also be shaped by who advised it: some critics pointed to the lack of diversity on the panel of academics who consulted with Facebook on the project. As noted by Vice, all seven of the professors quoted in the announcement are men. None is a professor of social sciences, fields that have scrutinized the human impact and possible consequences of AI technology.

Still, the Deepfake Detection Challenge received praise from Capitol Hill, where politicians have urged social media companies to enhance their efforts to ward off that type of disinformation before the election season.

Rep. Adam B. Schiff (D-Calif.), chairman of the House Intelligence Committee, said in a statement that the competition is “a promising step” by the technology community in the battle against deepfakes. With voting in the first 2020 primaries less than six months away, Schiff said social media platforms “must urgently prepare for increasingly sophisticated disinformation campaigns.”

Manipulated videos have already targeted some politicians. House Speaker Nancy Pelosi (D-Calif.) was the subject of a video earlier this year that was subtly changed to make it sound as though she were drunkenly slurring her words. Though the alterations to the video were relatively low-tech, the situation and the speed with which the content went viral online demonstrated how even minor changes to a video could be used to shape public perceptions.

When Facebook said it would not remove the Pelosi video, stating that it did not violate any of its policies, two artists countered with a more sophisticated deepfake featuring Facebook chief executive Mark Zuckerberg bragging about misusing “stolen data” from users. That video also remained on the platform.