Clint Watts is a Distinguished Research Fellow at the Foreign Policy Research Institute and Fellow at the Alliance for Securing Democracy. Tim Hwang is the program chair of COGSEC, a conference focusing on the real-world practice of investigating and thwarting online influence operations.

Manipulated videos are rapidly becoming a fixture of the 2020 election. On Aug. 30, House Minority Whip Steve Scalise (R-La.) used Twitter to share a video that was misleadingly edited to distort Democratic presidential nominee Joe Biden’s views on defunding the police. That same day, Dan Scavino, the White House social media director, tweeted a manipulated video to make it appear as though Biden had fallen asleep during a televised interview.

So far, full-fledged deepfakes — audio and video manipulated with the help of cutting-edge artificial intelligence techniques — aren’t playing a major role in online discourse. The hoax videos shared by Scalise and Scavino use relatively crude, widely available methods to create what experts refer to as “cheapfakes.” These videos require no specialized expertise and are relatively inexpensive to produce. Because these fakes are often so obvious, those who distribute them can disingenuously dismiss them as simply satire or jokes.

There are a number of plausible reasons why cheapfakes have outpaced deepfakes in the political domain. One is that, despite their crudeness, cheapfakes spread widely and can capture public debate and discourse. On pure cost-benefit grounds, fakers may opt to get more bang for their buck by using existing, proven techniques for editing and manipulating media. There are also technical reasons: a recent paper by one of us points out that sophisticated machine learning systems still require plenty of time for “training,” which can slow the production of a faked video to the point where it is no longer relevant to the rapidly moving social media conversation.

The unusual circumstances of the 2020 election mean, however, that the battle against online disinformation will not end on the night of Nov. 3. Instead, it will continue, if not intensify, through November and into December as poll workers count mail-in ballots and determine the ultimate winner. During this delicate period, efforts to undermine the legitimacy of the election will be particularly dangerous.

To that end, we believe that the true danger of digital hoaxes will come not from their impact before Election Day, but from their impact in the period extending from then until the inauguration. This year, the dreaded “October surprise” is more likely to give way to a November or even December surprise. Sophisticated, malicious actors are likely to hold their fire until after the election before deploying their most potent tools. Operationally, cutting-edge methods such as deepfakes are best suited to exactly this kind of situation: a predictable moment of public uncertainty.

Preserving trust will be critical in this environment. The collapse of public confidence in the outcome of the election, particularly in hotly contested battleground states, risks a broader constitutional crisis. We are a long way off from the quaint politics of Florida during the 2000 Bush-Gore recount. Social media and the broader backdrop of the covid-19 pandemic make this a much more dangerous context for a contested electoral outcome.

We believe that there are three important initiatives that could play a key role in helping to contain the damage as we enter the final months of 2020.

First, the public needs an early warning system that can operate as a kind of air-raid siren for signaling coordinated online disinformation efforts. Universities, online platforms and civil society organizations should coordinate to create a unified structure for alerting the broader public when significant media manipulation campaigns are underway. This is more than just another fact-checking initiative: the aim would be to provide up-to-date information about evolving threats and the techniques being used to spread them.

Second, we should recognize that disinformation is, at its core, a human problem, not a technological one. “Fake news” detection algorithms and institutions that are already distrusted by the public will not be able to save the day. Instead, we need to urge citizens to participate in a nationwide corps of “disinformation field medics” who can monitor social media chatter in their regions and respond quickly to counter the spread of hoaxes. Libraries, community organizations and local journalists can all play a big role in the post-election period.

Third, recognizing that recent fakes have been widely distributed by influencers with large followings, social media companies could preemptively issue warnings and follow up with temporary bans or even permanent de-platforming during this delicate period. In a similar vein, Congress should take measures to discipline elected officials who spread manipulated media.

The United States has largely failed to confront the pathologies of the information ecosystem that characterized the 2016 election cycle. There are growing signs that this year’s Election Day will mark the start of a prolonged period of election insecurity. How we prepare for this challenge in the weeks remaining before the election will be crucial to shaping the fate of our nation and its democracy.
