Last week, the U.S. government issued warnings about foreign interference and disinformation campaigns in the run-up to the 2020 presidential election. A European Commission report underlined similar foreign interference and disinformation problems in recent years. Most European nations have reported foreign interference since 2016, including during the 2019 European Parliament and U.K. elections and Poland’s presidential election rounds in June and July.

Foreign interference is only part of the story, as disinformation also comes from domestic sources. France and Italy, for instance, host a Europe-wide disinformation network that wreaks havoc in those countries’ elections and has helped spread coronavirus disinformation.

In the United States, Facebook recently closed a fake account network it claims is connected to an ally of President Trump. A New York University study found that 25 percent of election-related posts shared on Facebook and Twitter contained junk information in the months before the 2018 U.S. midterm elections. And a Knight Foundation study demonstrated that most junk news and conspiracy theories on U.S. social media platforms are of domestic origin.

My research finds a mismatch between the rapidly growing scholarship on disinformation and the even faster evolution of political manipulation techniques. Much of the work in this field focuses on a small group of Western liberal democracies, the countries these campaigns target. But to understand disinformation from illiberal regimes, it is important to understand disinformation within illiberal regimes.

In the past two years, I’ve worked with research clusters on disinformation dynamics among and within illiberal regimes. These projects, including the Summer Institute in Computational Social Science, explored how much we can learn by investigating the dynamics of disinformation within — and among — Russia, Iran, Turkey, Saudi Arabia, Qatar and the United Arab Emirates. We dug into a vast pool of disinformation data from the 2016 and 2018 Russian elections, 2018 Turkish general and presidential elections, 2016 and 2017 Iranian elections and the 2017-2020 Saudi-UAE-Qatari information wars.

Disinformation serves as a technique for distraction

Our research on disinformation echoes other findings on how illiberal regimes use social media as a repression technology. These regimes not only use online harassment and propaganda as forms of repression, but also employ disinformation to distract citizens from issues the government would rather ignore. During elections in Iran and Russia, government-controlled “cyber armies” deployed organized mass disinformation as a routine method of diverting public attention from corruption cases and political scandals. Similar examples of disinformation as a distraction tactic can be found in Gulf politics, too.

Disinformation builds domestic support

In Turkey, Russia, Iran and the Persian Gulf, governments use disinformation to boost public support during international diplomatic crises. In some instances, the goal is to heighten public awareness of a particular security issue, such as Russian policy in Crimea. In other cases, disinformation stokes patriotic sentiment and builds citizen support for policy escalation, as in the Saudi-UAE-Qatari digital war.

Our research traced similar developments in Iran’s digital campaigns against Israel and the United States, as well as the dynamics between Russia and Turkey after the 2015 downing of a Russian fighter jet by Turkish planes and the 2016 assassination of the Russian ambassador in Ankara.

Illiberal regimes use disinformation as a control mechanism not just within domestic politics, but also as part of their foreign policy. Many of these regimes, in fact, treat disinformation strategies as a regular feature of interactions with other nations, and either create internal counter-propaganda units or use their direct control over the press to quickly debunk and combat external disinformation attempts.

Combating disinformation can be like “whack-a-mole”

Disinformation methods evolve much faster than policy responses, which means liberal democracies routinely base their information defenses on obsolete practices. For example, a recent European Commission disinformation document offered a detailed road map across five action areas, based on events from 2016 and 2017. The State Department and the Defense Department offered similar recommendations based on Russia’s 2016 election interference.

But disinformation networks continuously change to escape detection, rendering efforts to remove content and accounts a “whack-a-mole” strategy. Targeted accounts, along with the platforms and financing streams behind disinformation, do not remain in place for long. Facebook, for instance, removed pages exhibiting Russian and Iranian “inauthentic behavior,” but since February these networks have shifted to new accounts and content formats. Similarly, Twitter’s mass removal of Turkish, Saudi and Egyptian inauthentic networks yielded only short-lived payoffs: the networks reemerged under different accounts within weeks.

Most disinformation campaigns, in fact, have evolved to the point where they are effectively immune to account suspensions or the exposure of their financing networks. They can rapidly shift tactics to remain undetected; if caught, they bounce back with new accounts and altered network structures that past detection mechanisms cannot recognize.
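To see why, consider a minimal sketch in Python, purely illustrative and not drawn from any platform’s actual detection systems: a blocklist of suspended account IDs misses a reincarnated network entirely, while even a crude behavioral fingerprint (here, an invented four-number profile of posting habits) can still flag the same operation under new names. Every account name, number and threshold below is hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    fingerprint: list  # hypothetical profile: [hashtag use, link amplification, replies, burstiness]

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

# IDs removed in a past takedown, plus the removed network's average fingerprint.
blocklist = {"troll_001", "troll_002", "troll_003"}
removed_centroid = [0.70, 0.20, 0.05, 0.05]  # invented numbers for illustration

# Weeks later, the operation returns under fresh identities with near-identical behavior.
reborn_accounts = [
    Account("patriot_news_24", [0.68, 0.22, 0.05, 0.05]),
    Account("daily_truth_hub", [0.71, 0.19, 0.06, 0.04]),
]

for acct in reborn_accounts:
    on_blocklist = acct.account_id in blocklist  # always False: the IDs are new
    behaves_alike = cosine_similarity(acct.fingerprint, removed_centroid) > 0.95
    print(f"{acct.account_id}: blocklist hit={on_blocklist}, behavioral match={behaves_alike}")

The blocklist check fails for every reborn account, while the behavioral comparison still matches. Rough as it is, the sketch hints at why pattern-based detection may age better than identity-based takedowns.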

Of course, fighting disinformation can carry risks. Governments have used combating disinformation as a pretext to expand control of the Internet and justify censorship. In Turkey and Russia, for instance, anti-disinformation measures served to curtail digital speech and to pass bills and decrees that expanded Internet restrictions well beyond “combating fake news.”

Liberal governments tend to view disinformation as an anomaly. Our broader research findings suggest a different approach might prove more successful: viewing disinformation as an ongoing part of the information ecosystem and building defenses accordingly. One option is a heavier emphasis on real-time, large-scale fact-checking and fact-dissemination systems that combat disinformation at scale. A second is to task government agencies with boosting and coordinating national efforts against disinformation during critical periods such as elections or natural disasters. And a third might include Finnish-style long-term media-literacy drives, aimed at gradually building national immunity to disinformation.

Ultimately, treating disinformation as a norm rather than an anomaly may encourage the defenses and resistance mechanisms needed to address the growing global problem of organized information manipulation.

H. Akin Unver (@AkinUnver) is an associate professor of international relations at Kadir Has University and a nonresident fellow at Oxford University’s Center for Technology and Global Affairs.