In a recent ungated article in the Journal of Democracy, we answer this question with two observations. First, social media is a tool for giving voice to those excluded from access to the mainstream media. Second, although social media democratizes access to information, those using it can simultaneously censor and manipulate information to try to silence others’ voices. Some of these forms of censorship — such as hindering access to information or threatening would-be opposition figures — are centuries old. Others — such as employing bots and trolls to change the online conversation — are particular to the digital age.
Taken together, these two factors — using online tools both to expand opportunities to speak up and to expand opportunities to silence — can illuminate the complex relationship between social media and democracy. We conclude that social media itself is neither inherently democratic nor nondemocratic, but rather yet another arena in which political actors contest for power.
A new hope: liberation technology
Let’s look first at autocratic societies. Who is excluded from the mainstream media in such countries? While it is important to remember that many forces may be excluded (including plenty that may be anti-regime but still illiberal themselves), this category undoubtedly includes pro-democratic forces.
Social media can help those opposition actors figure out how to work together, solving what political scientists call “collective action problems.” Would-be democrats can find one another, find hidden support for democracy, connect with like-minded citizens, coordinate political planning and organize direct political action such as protests — all without help from state-owned media, and at times without being detected by state surveillance.
But if pro-democracy forces can figure this out, so can the regimes they are targeting.
The empire strikes back: repression technology
When faced with online opposition, autocratic regimes have various options for countering these threats.
One way to characterize these different options is by thinking about how the social-media user experiences them. The regime can undertake offline responses, such as intimidating or arresting opposition activists, changing the ownership structure of media companies, or adjusting liability laws, which the user of social media may never see directly. The regime can also launch online responses: restricting access to content, which the user may or may not directly notice; or engaging with online content in an effort to shape the online conversation, which the user will definitely experience.
One of us (Roberts) categorizes these strategic options in her forthcoming book, “Censored: Distraction and Diversion Inside China’s Great Firewall,” as the “three Fs” of digital-era censorship. Fear is censorship through intimidation, which can include imprisonment, physical harm, loss of livelihood and so on. Friction is censorship that makes it harder to find information by removing content or slowing down access, including removing social-media posts, reordering search results or slowing down Web pages. Flooding censors opposing views by loading up the online space with pro-regime messages, or simply by adding spam or noise, thus making it harder to find the opposition’s message.
Crucially, this “flooding” — or trying to shape the online conversation — relies on new digital tools. One of these is fully automated online accounts controlled by algorithms, otherwise known as “bots.” Another involves humans — known as “trolls” — who, whether out of conviction or for pay, spend a great deal of time online diverting attention from regime opponents as part of astroturfing campaigns or attempting to create the image of widespread support for particular ideas, policies, regimes and so on. And while these tools have been skillfully used by autocratic regimes, they also can be used in — and against — democracies.
Return of the anti-systemic forces: tumultuous technology
Now consider who is excluded from the mainstream media in democratic societies. This can include fundamentally illiberal groups opposed to the basic tenets of liberal democracy. These groups — be they white nationalists, neo-Nazis or what is loosely known as the “alt-right” — can take advantage of the same features of social media that pro-democracy forces in autocratic regimes do. They too can find like-minded people who may not be geographically proximate and can collaborate on collective political action.
Moreover, these illiberal forces can also take advantage of the very tools developed by autocratic regimes: fear and flooding aided by trolls and bots. But while autocrats apply these tools to counter online opposition to the regime, in democratic societies, illiberal actors can harness these tools to attack political opponents, supporters of democracy, and even democratic values and norms. Thus, the very openness of the Internet can be used to amplify illiberal voices: by making their proponents seem more numerous online than they may actually be, by driving mainstream media coverage and by driving opponents offline.
The law awakens: restricting technology?
So what comes next? One important question is whether and how democratic societies will use legal regulation to limit this emerging threat. As this debate continues to unfold, an understanding of how exactly it is that social media threatens — and supports — democracy will be crucial to making sure policy changes have their desired effect. Democracies must be aware that any attempt to regulate the Internet may veer dangerously close to the censorship they deride in autocracies. For example, it is probably no accident that Russia was among the first to copy Germany’s new law threatening fines for social-media companies that fail to adequately restrict online hate speech.
Yannis Theocharis is an assistant professor in the Center for Media and Journalism Studies at the University of Groningen.
Margaret E. Roberts is an assistant professor of political science at the University of California at San Diego.
Pablo Barberá is an assistant professor in the School of International Relations at the University of Southern California.
This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content. Other posts in the series can be found here.