The crisis that fractured the Gulf Cooperation Council last year prominently featured numerous hacks and social media bot attacks.

Late last month, an Arabic Twitter hashtag went viral in Qatar. The hashtag — translated as “anniversary of the midnight fabrications” — refers to events of May 24, 2017, when hackers acting on behalf of the United Arab Emirates compromised Qatari news and social media accounts, posting controversial comments falsely attributed to the Qatari emir. This was the first recorded example of a cyberattack provoking a protracted international crisis.

The hack was only the tip of an information war iceberg. Beginning weeks before the crisis, Qatar’s enemies recruited Western companies to help wage a relentless propaganda campaign on social media that shows little sign of abating. Key to this strategy has been Twitter bots, which can like, retweet and even post original content en masse.

What are these bots, exactly, and how do they matter for regional politics?

Gaming Twitter trends

Our research shows that Persian Gulf regimes create thousands of bots — or order their creation — to tweet in a coordinated fashion. This time last year, one of the authors found that 17 percent of a random sample of Arabic tweets mentioning Qatar were tweeted by bots. By this May, the number had climbed to 29 percent.

Propaganda bots operating in the gulf do not attempt to engage other users directly, tending instead to focus on increasing the public salience of statements tweeted by prominent human accounts. For example, bots often amplify tweets by critics of the Qatari government or royal family, such as the leader of the Qatari opposition abroad, Khalid al Hail, or @QatariLeaks, an anti-Qatar website. Thousands of bots have been mobilized against Qatari news station Al Jazeera. Bots even promoted the pro-Saudi tweets of President Trump during his May 2017 visit to the region.

Typically, almost 90 percent of tweets in gulf crisis hashtags are verbatim retweets of what others have said. Fewer than 30 percent of Twitter users post original content, and among these, the top 2 percent (0.6 percent of users overall) are retweeted so much more than anyone else that they drive roughly 75 percent of the conversation. These elite social media influencers are often given a leg up by bots, particularly if they express anti-Qatari sentiment.

Beyond promoting particular opinions, bots serve an even more fundamental purpose: deciding the topic of conversation. Anyone can start a Twitter hashtag, but to gain public salience, many other accounts must take up that same hashtag in their own tweets. With thousands of bots at your command, this becomes feasible. Once a hashtag trends, human users are often drawn into the conversation.

Hashtag manipulation is such a pervasive phenomenon in the gulf that only a month after the outbreak of the crisis, the hashtag “don’t participate in suspicious hashtags” began trending. Yet not everyone is so savvy or familiar with the regional social media context. Hashtags polluted by bots are often mistaken for genuine Qatari comments and included in pieces by news-aggregating services and other outlets. BBC Trending, which documents popular social media topics across the Arab world, included a fake news story about the Qatari government reducing the salaries of Qatari soldiers.

A top Saudi official, Saud al-Qahtani, who is both an adviser to the Saudi Royal Court and the general supervisor of the Center for Studies and Information Affairs, has purposefully used trends generated by bots as a measure of Qatari public opinion. In August 2017, Qahtani said that the hashtag #LeaveTamim was trending in Qatar and that it reflected how Qataris wanted to oust their ruler Tamim bin Hamad Al Thani. In reality, the #LeaveTamim hashtag was no indication of Qatari public opinion because it was mostly generated by anti-Qatar bots and non-Qataris.

Long-term effects of bots

Much of the discourse attacking Qatar is presumably intended to address domestic constituents in the blockading countries. After all, the blockade has disrupted the lives of thousands of residents across the region who work or have family in Qatar. Thus the blockade must continually be legitimized by demonizing Qatar and demonstrating its efficacy as an appropriate sanction.

There is also a more fundamental authoritarian logic at play. While social media may ultimately act as an incubator for political opinion formation, it is more crucially the place where citizens go to find out what other citizens think — a vital ingredient to mobilizing.

By inflating the importance and salience of specific political figures, millions of bots are drowning out citizens’ opinions with the voice of a small authoritarian elite. The use of bots to create artificial trends, and the use of such trends as an indicator of public opinion, all point to the ability of gulf regimes to co-opt social media as part of their control and censorship apparatus, undermining its radical potential. Perhaps more than anywhere, the gulf shows us how social media is being weaponized as a crucial delivery system for fake news, hate speech and propaganda.

Marc Owen Jones is a lecturer in the history of the Persian Gulf and Arabian Peninsula at the Centre for Gulf Studies at Exeter University’s Institute of Arab and Islamic Studies.

Alexei Abrahams is a research fellow at Princeton University’s Niehaus Center for Globalization & Governance at the Woodrow Wilson School of Public and International Affairs.

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the network is responsible for the article’s specific content.