International Monetary Fund managing director Kristalina Georgieva successfully defended herself to her executive board this week, denying allegations that she was involved in efforts to manipulate data for the “Doing Business” reports during her tenure as World Bank CEO.

But the details that are emerging suggest that the alleged attempts to tilt the data were more subtle than global headlines portrayed.

What are these global business rankings — and what exactly are the accusations against World Bank staff? And what does this incident tell us about the politics of economic numbers?

Yes, data can become political

Accusations of data manipulation commonly sketch out situations in which there is a hard, correct number that accurately captures an objective truth, until an unscrupulous individual or group decides to modify it. Yet the idea that specific numbers depict reality objectively is a problematic notion when applied to the real world.

Economic statistics are not “natural.” The definitions and measurement procedures that give rise to unemployment and inflation rates — and other metrics that help us compare the economic situation in a country — are the product of political negotiations and contestations. This makes these measurements inherently ambiguous. In almost every case, there are multiple ways to quantify these theoretical economic concepts. What ends up being measured (and what isn’t measured) often depends on arbitrary choices, which can inject a range of biases into the resulting figures.
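To see how definitional choices alone can move a headline figure, here is a minimal sketch with entirely hypothetical labor-force numbers: a narrow unemployment definition that counts only active job-seekers versus a broader one that also counts discouraged and underemployed workers.

```python
# Hypothetical labor-force figures (millions) — illustrative only.
employed = 150.0
unemployed_searching = 8.0    # actively looked for work recently
discouraged = 2.0             # want work but have stopped searching
involuntary_part_time = 4.0   # part-time but want full-time hours

# Narrow definition: only active job-seekers count as unemployed.
labor_force = employed + unemployed_searching
narrow_rate = 100 * unemployed_searching / labor_force

# Broader definition: also count discouraged and underemployed workers,
# and add the discouraged back into the labor force.
broad_pool = unemployed_searching + discouraged + involuntary_part_time
broad_rate = 100 * broad_pool / (labor_force + discouraged)

print(f"narrow: {narrow_rate:.1f}%")  # about 5.1%
print(f"broad:  {broad_rate:.1f}%")   # about 8.8%
```

Neither rate is "wrong"; they simply operationalize the same theoretical concept in different ways, which is exactly the ambiguity the paragraph above describes.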

My research with Roberto Aragão finds that this inherent “softness” of numbers lies at the heart of many charges of data manipulation. Disagreements typically don’t involve crude tinkering with the headline figures themselves, but arise from more subtle strategies to alter the numbers: shaping the underlying methodology and definitions, influencing data-collection processes, or adapting the management of financial flows to indirectly affect outcome measures. This also appears to have been the case in the ongoing World Bank “Doing Business” scandal.

World Bank rankings measured how easy it was to do business

The World Bank’s now-discontinued “Doing Business” reports ranked countries on the perceived quality of the regulatory environment in facilitating private enterprise. Drawing from annual surveys of thousands of local legal experts, the overall index aggregated information on more than 40 sub-indicators, covering dimensions such as the cost and number of procedures required to start a business, along with similar measures to obtain electricity, register property, secure credit or pay taxes. The rankings were initiated in the early 2000s by academic economists studying bureaucratic red tape as a hindrance to economic development. Over the years, the report became hugely popular among financial investors, as well as governments eager to attract international capital.

Skeptics have criticized the free-market ideological underpinnings of the enterprise as a whole, but also point out biases in the constantly changing methodology behind such rankings. Some research, for instance, indicates that the coding of legal changes systematically favored common-law countries.

In 2018, Paul Romer, a former World Bank chief economist, suggested that the rankings may also suffer from partisan bias: Chile’s position was lower during the presidential tenure of leftist Michelle Bachelet and improved while Sebastián Piñera’s liberal-right government was in power. These patterns apparently did not result from any meaningful policy change but instead from modifications in how the Bank measured the ease of filing taxes.

How did World Bank staff allegedly influence the rankings?

The law firm Wilmer Hale released its findings last month after an external investigation into controversies over the Bank’s ranking of China, but also of Saudi Arabia, the United Arab Emirates and Azerbaijan. The report includes details of how World Bank staff allegedly went about rearranging rankings in return for political favors and institutional financial support. In none of the cases is there any indication that anyone simply modified a country’s final score or rank. Instead, the allegations revolved around meticulous methodological changes.

For instance, the Wilmer Hale report details a range of methodological adaptations that staff reportedly discussed to improve China’s standing in the 2018 “Doing Business” report: subsuming the scores of higher-performing Hong Kong, Taiwan or Macao into China’s index; using the data of only the better-scoring of the two cities surveyed in China (Beijing and Shanghai) instead of an average of the two; and, as a last resort, the possibility of boosting individual Chinese sub-scores by discarding less favorable assessments in cases in which expert opinions had differed.
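The arithmetic behind two of these alleged tactics is easy to illustrate. The sketch below uses entirely hypothetical sub-scores for the two surveyed cities and shows how switching the aggregation rule (averaging both cities versus keeping only the better-scoring one) or discarding a less favorable expert assessment moves the composite score without touching any final number directly.

```python
# Hypothetical sub-scores for the two surveyed cities — illustrative only,
# not actual "Doing Business" data.
beijing = {"starting_business": 72.0, "getting_credit": 60.0, "paying_taxes": 65.0}
shanghai = {"starting_business": 80.0, "getting_credit": 55.0, "paying_taxes": 70.0}

def composite(city_scores):
    """Simple unweighted mean of a city's sub-indicator scores."""
    return sum(city_scores.values()) / len(city_scores)

# Method 1: average the two cities' composites.
avg_score = (composite(beijing) + composite(shanghai)) / 2

# Method 2: keep only the better-scoring city.
best_score = max(composite(beijing), composite(shanghai))

# Method 3: within one indicator, drop the less favorable of two
# differing expert assessments before scoring (hypothetical opinions).
expert_opinions = [58.0, 66.0]
kept_assessment = max(expert_opinions)

print(avg_score, best_score, kept_assessment)
```

Each rule is defensible in isolation, which is precisely what makes this kind of adjustment hard to spot from the published rank alone.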

But the sophistication and subtlety of these alleged efforts are not unique, our research finds. They mirror the events of other recent data incidents involving Brazilian fiscal statistics (where the government sought to adapt the IMF’s public debt calculation methodology to local circumstances), or Argentine inflation rates (where a more “patriotic” methodology to track price increases was imposed during 2007-2015). All of these incidents offer a useful illustration of how institutions can manipulate statistics in practice, even in environments of relatively high statistical capacity and de jure independence.

There are no ‘right’ or ‘wrong’ numbers

Economic statistics are much more ambiguous constructs than most people realize. The actions World Bank staff allegedly took with the data are objectionable because these moves were designed to change the ranking of specific countries — not because these actions turned objectively “right” numbers into “wrong” ones.

The broader lesson that this current incident highlights is that it can be difficult to detect data manipulation as long as we only stare at the numbers in the headlines. Going forward, greater transparency about methodological changes seems paramount to reduce the potential for statistics being bent in politically convenient ways.

But this episode may also prompt a reassessment of how we think about statistics. Typical statistics courses teach students a great deal about what to do with a spreadsheet, but almost nothing about how the data get into that spreadsheet in the first place. A better understanding of the politics of statistics production seems just as important as technical skills, in the quest for a more honest and realistic engagement with the numbers through which we see the world.

Lukas Linsi is assistant professor of international political economy at the University of Groningen in the Netherlands and a member of the FickleFormulas research group. Find him on Twitter @lukaslinsi.