Sean Richey and J. Benjamin Taylor have a new book on how Google searches affect democratic knowledge, “Google and Democracy: Politics and the Power of the Internet.” I asked them questions about what they found. Richey is an associate professor of political science at Georgia State University, and Taylor is an assistant professor of political science at the University of North Carolina at Wilmington.
Public intellectuals are pessimistic about direct democracy, and they are also pessimistic about the consequences of the Internet for politics. Your book is optimistic about both. Why?
We are optimistic about the utility of Google searches, which is not quite the same as being optimistic about the Internet generally. The classic complaint about direct democracy, stretching back to Plato, is that an always up-to-date, well-informed public was impossible. So, it was suggested that we need a buffer against mass ignorance to administer the state. Well, Google now provides the impossible — instantaneous access to nearly the entirety of human knowledge — to billions of users, and scholars and intellectuals have not fully grasped the amazing ramifications of that for democratic theory.
Our results show that searchers are able to successfully navigate Google search results, learn from what they click and remember that information at least a week later. When Internet access is 100 percent, the idea that the mass public cannot access all necessary information will be wrong for the first time in human history. And while political psychology research shows that knowledge alone is not enough to make a wise vote choice, knowledge is a crucial component of rational voting.
Your research examines how Google affects the ways in which people search for information on complex political questions (such as ballot initiatives). What do your experiments show?
The main point of our book is that Google functions as a sort of de facto editor for the Internet, showing users a curated set of highly relevant sites. We derive this idea of “Google as Gatekeeper” from three main findings. First, we find it is crucial to understand the workings of the search algorithm, because its rankings are paramount for modern information consumption. We show that 90 percent of users never leave the first page of search results, and 40 percent click only the first suggested link. Google rankings essentially decide what the public learns, acting as a de facto gatekeeper for the Internet.
We also find that the search algorithm prioritizes mainstream information sources, and we think this is probably due to the influence of PageRank. PageRank was Google’s initial ranking mechanism and still seems to play a large part in its internal ranking system. It works by counting the inbound links to each website from other websites, using that count to rank all websites, and then applying that ranking to the sites containing the search terms. For example, since The Washington Post has been linked to millions of times, it has a very high PageRank, and its article on a ballot measure will appear higher in search rankings than articles on that topic from blogs or lesser-known websites with lower PageRank.
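The link-counting idea behind PageRank can be sketched in a few lines of code. This is an illustration of the textbook algorithm only, not Google’s production system; the damping factor (0.85) is the standard textbook value, and the tiny link graph below is invented for the example.

```python
# Toy sketch of the PageRank idea: a page's rank is driven by the
# ranks of the pages linking to it. Illustration only -- not Google's
# actual implementation. Damping factor and link graph are assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:  # each page passes its rank to the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A heavily linked-to "mainstream" site outranks a blog with one inbound link.
graph = {
    "mainstream_paper": ["small_blog"],
    "small_blog": ["mainstream_paper"],
    "site_a": ["mainstream_paper"],
    "site_b": ["mainstream_paper"],
    "site_c": ["mainstream_paper"],
}
ranks = pagerank(graph)
```

Running this, `mainstream_paper` ends up with the highest rank, which is the dynamic described above: many inbound links push mainstream sources toward the top of the results.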
Because of the prior two findings — that 1) users click top ranked sites on Google, and 2) top-ranked sites tend to be mainstream sites with lots of inbound links — we find that 3) users end up clicking a lot of mainstream sites, which tend to be informative for them. In experiments conducted in the lab, on the street and online, those who used Google knew a lot more about the ballot measure than those who did not.
How do you know that these experiments tell us about the ways that people look for information in real life?
To get out of the lab, in one experiment we walked around the streets of Atlanta with a laptop, went up to randomly selected people and asked them to research a ballot measure, just as they usually would, in exchange for some money. We recorded their search history, the results Google showed them, what they clicked on and how long they stayed on each site. We then removed the laptop and gave them a quiz on what they had learned after reading about the ballot measure online. Compared with a similarly selected group who did not use Google, those who searched knew significantly more about the ballot measure after only a few minutes of reading.
In another chapter, we correlate Google Trends data on the name of each ballot measure and on its topic, such as “minimum wage increase.” We find that people often search for the name and topic in the state where the measure is on the ballot but not in other states. We also show that these searches correlate with voters not skipping the ballot measure when they vote, a phenomenon called “roll-off.” Roll-off can exceed 10 percent on some ballot measures, and it seems that Google is facilitating information provision that makes voters comfortable enough to choose “yes” or “no.” We also use survey data to show that respondents who said they researched a ballot measure online also said they were more confident in their vote choice, and were less likely to roll off. So, multiple methods show that average people can and do use Google to research direct democracy.
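The kind of correlation described above can be sketched as follows. The state-level numbers here are made up purely for illustration, and the book’s actual estimation is more involved; the sketch just shows the expected pattern: where search interest in a measure is higher, roll-off is lower, giving a negative Pearson correlation.

```python
# Sketch of correlating search interest with ballot roll-off.
# All numbers below are hypothetical, invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical state-level data: Google Trends interest index (0-100)
# for a ballot measure, and roll-off (share of voters skipping it).
search_interest = [12, 35, 48, 60, 75, 90]
roll_off = [0.14, 0.11, 0.09, 0.08, 0.06, 0.04]

r = pearson(search_interest, roll_off)  # strongly negative for this toy data
```

A negative `r` here would be consistent with the finding that search activity accompanies voters completing, rather than skipping, the measure.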
Your findings are specifically about Google, and the ways in which it presents information. You hint that social media such as Facebook may not necessarily have the same beneficial effects for people’s knowledge. Why is this so?
It comes down to the company’s financial incentives and its choices about the algorithmic process of information provision. Google’s business model is based on advertising. People use Google because it has an easy, simple interface that returns accurate information. Each search generates more advertising revenue through the cost-per-click and cost-per-engagement funding structure of AdWords. Google needs you to trust it, so that you return and generate more revenue. Accordingly, the Google search algorithm has been tweaked to ensure accuracy. An unintended byproduct of this business model — an “externality,” as economists might say — is that voters get highly accurate information from searches about politics.
Facebook and Twitter have different business models, and therefore end up prioritizing different things in their algorithms. To understand “fake news” and related phenomena, scholars need to get under the hood and think through the implications of how each algorithm chooses what to show to whom. For example, if an algorithm focuses on showing like-minded individuals the same news item through microtargeting, the implication is increased homophily and groupthink. The crucial point is that there is no single “Internet politics,” because what is called the “Internet” is too vast and varied, with fundamentally different processes leading to very different results. Each company’s algorithm needs to be investigated and tested separately to determine its influence on politics.
This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content. Other posts in the series can be found here.