Debunking misinformation failed. Welcome to ‘pre-bunking’

Election officials around the world are adopting “prebunking” campaigns, as AI and other threats jeopardize voting.

(Washington Post illustration; iStock)

Election officials and researchers from Arizona to Taiwan are adopting a radical playbook to stop falsehoods about voting before they spread online, amid fears that traditional strategies to battle misinformation are insufficient in a perilous year for democracies around the world.

Modeled after vaccines, these campaigns — dubbed “prebunking” — expose people to weakened doses of misinformation paired with explanations and are aimed at helping the public develop “mental antibodies” to recognize and fend off hoaxes in a heated election year.

In the run-up to next month’s European Union election, for example, Google and partner organizations are blanketing millions of voters with colorful cartoon ads on YouTube, Facebook and Instagram that teach common tactics used to propagate lies and rumors on social media or in email.

One 50-second animation features a fake news campaign in which “visiting tourists” are blamed for a “litter crisis.” The example is meant to educate voters about “scapegoating,” a disinformation technique that places unwarranted blame for a problem on a single person or group.

Google has no plans to launch such a campaign in the United States, where former president Donald Trump and his allies are spreading falsehoods about widespread voter fraud in the 2020 election, laying the groundwork to cast doubt on the results of Trump’s rematch with President Biden in November.

Instead, humbler campaigns are springing up in locations across the nation, including Arizona’s Maricopa County, where election officials are enlisting local celebrities such as the Phoenix Suns basketball team to promote voting and explain the procedures.

Federal agencies are encouraging state and local officials to invest in prebunking initiatives, advising officials in an April memo to “build a team of trusted voices to amplify accurate information proactively.”

“Communicate early and transparently about the elections process to the American people,” said Cait Conley, an election security expert at the Cybersecurity and Infrastructure Security Agency, which has conducted dozens of practice runs with local officials that include misinformation scenarios.

The moves come after nearly a decade of floundering initiatives to stem voting misinformation, leading researchers to a sobering conclusion: It is nearly impossible to counter election misinformation once it has taken root online.

Since the revelations that Russia tried to undermine the 2016 elections by stoking divisions on Facebook and other social networks, the most prominent tactics to battle misinformation largely have been reactive. But even fact-checking social media posts has become more difficult as major tech companies pull back resources for labeling false claims about voting on their platforms.

In a year when law enforcement officials are warning that artificial intelligence could supercharge election threats, election officials say prebunking could be their best hope.

“By the time the disinformation is out there, we’re really not going to be able to convince a lot of people,” said Riley Vetterkind, a public information officer for Wisconsin’s small, bipartisan Elections Commission. “That’s why prebunking has become so much more important.”

Prebunking draws inspiration from “inoculation theory,” which was developed by the social psychologist William J. McGuire in the 1960s. McGuire posited that you could prepare people to reject a misguided argument by first exposing them to a weakened form of that argument, along with a strong refutation of it — sort of like a vaccine for the mind. Then when people encounter that argument in the wild, the theory goes, they recognize it and are less likely to fall for it.

The tactic has attracted fresh interest in recent years as a way to fight online misinformation. Sander van der Linden, a social psychology professor at the University of Cambridge who worked with Google on prebunking techniques, is among the researchers who have found promising results in experiments, including with the online game “Bad News.” In the game, users play the role of a fake news tycoon, amassing followers by exploiting people’s emotions and gaining credibility by impersonating real news sources.

Academics who study misinformation are divided over how effective such inoculation is. Teaching people to mistrust any message conveyed emotionally, for instance, could lead them to doubt true claims, too. Reliably spotting falsehoods is a complex and time-intensive skill, which probably can’t be learned just by playing an online game or watching a brief YouTube video. And even if it could, the people who are willing to learn and apply it “probably aren’t the ones you’re worried about” spreading election lies, said Gordon Pennycook, a professor of psychology at Cornell University.

“There aren’t really any actual field experiments” showing that it can change people’s behavior in an enduring way, Pennycook said.

Still, such proactive strategies saw success in Taiwan, where officials launched a campaign ahead of the January election to educate the public about the rise of AI-manipulated videos and audio. Despite a disinformation campaign linked to the Chinese Communist Party, the island elected Lai Ching-te, a candidate the Chinese government fiercely opposed.

In the United States, a polarized political environment could complicate such efforts. A conservative legal campaign alleges that coordination between tech companies and the federal government to combat misinformation amounts to government censorship in violation of the First Amendment, and court rulings stemming from it have chilled that collaboration.

But a patchwork of initiatives is emerging as state and county election officials cobble together their own programs without significant aid from the federal government or social media companies.

In Michigan, Secretary of State Jocelyn Benson has established “voter confidence councils,” groups of faith, labor and community leaders that are given accurate voting information to spread. And many states — including crucial swing states such as Pennsylvania and Wisconsin — are maintaining fact-checking websites that aim to dispel common election-fraud narratives.

As states rush to invest in such initiatives, some researchers worry that prebunking’s transformative impact is being oversold.

Vetterkind, the Wisconsin commission’s only full-time spokesman, said he spends a disproportionate amount of time tackling claims of fraud individually. For example, he said he has responded to numerous queries about undocumented immigrants using driver’s licenses to register to vote. (In the past eight years, state officials have been made aware of just four alleged instances of election fraud related to citizenship.)

“We would like to do more,” he said. “But it becomes more of a capacity issue.”

The stakes have never been higher: New artificial intelligence tools have made it cheaper and easier to craft audio, photos and videos of events that never happened. Operatives affiliated with China are increasingly stoking controversial U.S. political issues online, joining Russia in sowing discord.

“We have to predict the narratives,” Věra Jourová, vice president of the European Commission, said during a recent visit to Washington. “Up until now, we were always in a defensive position.”

Twitter is a prime example of what has befallen efforts to control election disinformation in the United States. Before Elon Musk’s takeover, prebunking was one of several strategies the company deployed to fight misinformation, along with fact-checking conspiracy theories and labeling debunked claims. In the weeks leading up to the 2020 presidential election, Twitter placed advisories in U.S. Twitter users’ feeds saying that voting by mail is safe and that there could be a delay in announcing the election results — an effort to inoculate voters against some of the most common false claims made during the race.

But Twitter, now called X, might be less willing to take similar action this election cycle, said Edward Perez, Twitter’s former product director for civic integrity, whose job included overseeing its election policies.

Twitter has eliminated or drastically cut its curation team, a group of policy and communications experts tasked with monitoring emerging narratives in digital and traditional media that might need to be addressed by the company. In recent years, Meta also has changed its approach to labeling and debunking misinformation.

“The things that in the past made these early efforts even a possibility — they are no longer there,” Perez said. “There is a philosophical resistance to the importance of this stuff.”

In addition to its work in the European Union, Google worked with a popular local influencer to run prebunking ads ahead of Indonesia’s elections in February. But one of the company’s partners said Google has been hesitant to launch a similar effort in the United States, where Republican lawmakers including House Judiciary Committee Chairman Jim Jordan (Ohio) argue that the companies are censoring conservative viewpoints.

“It’s a political risk for them. They don’t want to get used,” said van der Linden, the Cambridge professor. If Google makes unskippable YouTube ads with warnings about misinformation, he said, “they are going to get complaints. It’s going to stir up some members of Congress.”

Election officials who were on the front lines of dispelling common fraud narratives in the 2020 election are eager to stay ahead this time around. In the tumultuous days after the 2020 vote, Republican Philadelphia City Commissioner Al Schmidt appeared on CNN to dispel “fantastical” claims on social media that the city was counting votes cast by deceased residents. Minutes later, Trump himself responded — prompting threats against Schmidt and his family.

Four years later, Schmidt is responsible for securing the 2024 elections as Pennsylvania secretary of state. He said his office is maintaining a website of common election myths and working with voter education nonprofits and the media to ensure accurate information about voting reaches a wide audience.

“It isn’t so much about going back and forth and fighting against every lie that one might see on social media,” Schmidt said in an interview. “It is a matter of being mindful of what misinformation is being spread so that we can make sure to target our messaging.”