But as I show in a new article, these rankings are not only rating democracy but also defining democracy for certain audiences. Commonly used ratings can tell us as much about the rater as the countries being rated.
How democracy ratings get used
There are more than 180 ratings that evaluate country performance in various issue areas, ranging from human rights to economic policy to democracy. These scorecards influence countries in significant ways, often prompting them to improve their behavior.
One of the first organizations to develop a rating system for states was Freedom House, an American nongovernmental organization. Freedom House launched its “Freedom in the World” report in 1972. Since then, the organization has reported on countries’ political rights and civil liberties — essentially, their levels of democracy — each year. In these reports, countries are given ratings on a seven-point scale and overall assessments of “free,” “partly free” or “not free.”
When Freedom House released its annual rating of countries’ freedom 45 years ago, the reverberations were swift. Freedom House archives and my own data show that U.S. political leaders quickly began citing the ratings regularly during foreign policy discussions. Daniel Patrick Moynihan, then the U.S. ambassador to the United Nations, referred to the ratings to push for a global amnesty for political prisoners at the 1975 U.N. General Assembly. Around the same time, Sens. Henry Jackson and Hubert Humphrey used the Freedom House ratings to inform their subcommittees’ work on censorship abroad and U.S. security assistance programs. Journalists routinely used the ratings to report on countries’ political systems, citing them in leading U.S. newspapers more often than other ratings of democracy.
While there are many measures of democracy, each resting on its own definition, Americans still use the Freedom House ratings more consistently than any alternative. The government employs the ratings to determine whether countries qualify to receive economic aid via the Millennium Challenge Corp., an initiative that has dispensed more than $10 billion since 2004, and to evaluate its efforts to promote democracy.
Overseas audiences also frequently dispute critical scores. Documents in the archives of Freedom House indicate that this attention began at least as early as 1974, when Portugal began lobbying to improve the score for the Azores, one of the country’s autonomous regions. More recently, Russia has been a prominent critic of the ratings.
Ratings look scientific, but they’re also subjective
Although these rankings draw considerable attention in international politics, it is not immediately obvious why certain scorecards have become trusted and widely used.
When Freedom House began issuing its reports, it used methods that many academic critics found lacking. In the 1970s, one social scientist — Raymond Gastil — essentially produced the reports and ratings “alone,” although his wife, Jeanette, provided uncredited assistance. A team took over in 1990, and today the ratings use a much more transparent methodology, though critiques remain.
Gastil himself resisted the idea that the ratings were a precise or scientific exercise. In 1990, he described his approach as relying on “hunches and intuitions” and “a loose, intuitive rating system for levels of freedom or democracy, as defined by the traditional political rights and civil liberties of the Western democracies.”
The approach Gastil developed emphasized the liberal characteristics of democracy, such as limited government, individual rights and civil liberties. Other ratings conceive of democracy in different ways. The Economist Democracy Index seeks to provide a “thicker” definition of democracy that highlights factors such as quality of governance. These differing definitions have real consequences. The Economist Democracy Index tends to score some countries in the post-Soviet region (such as Russia) higher than Freedom House does.
Why trust ratings? It’s not just about methodology.
My research suggests that the ratings were adopted by various American audiences because of how they incorporate prevailing ideas about democracy.
Broad concepts such as democracy can inevitably be defined in various ways, and audiences are more likely to adopt ratings that reflect their values. This explains why the Freedom House ratings gained traction first and primarily in the United States: They share an affinity with U.S. foreign policy in how they define democracy and how they code countries. Countries aligned with U.S. foreign policy tend to receive better scores from Freedom House than from other prominent indicators, such as Polity, an alternative measure of countries’ levels of democracy that uses a 21-point scale and is commonly used by academics.
At the same time, because the ratings reflect the ideology of a powerful state, weaker states must pay attention to them. Thus, the Freedom House ratings are used and have gained considerable attention in the many states targeted by U.S. democracy assistance.
Picking which ratings to use
Scholars, policymakers and citizens all want to know how democratic countries are. In the United States, an ongoing debate considers whether the country is exhibiting signs of democratic erosion.
Choosing whether and which democracy rating to use requires a judgment about the underlying concept in question. To what extent does democracy depend on the quality of elections vs. factors such as political trust?
Although country rankings give the appearance of neutrality, they involve subjective decisions about how to define key concepts. The most influential ratings often are powerful precisely because they reflect the value judgments of the powerful.