Why can’t policymakers deal with uncertainty?
As a rule, policymakers tend to like hard numbers and definitive forecasts. We’ll get X amount of growth next year. This budget will produce Y amount of revenue. It’s a nice, neat way of dealing with the world. And yet, all too often, the official predictions turn out to be badly misleading.
“The untold story of this recession,” writes Justin Wolfers of the Wharton School, was the “many false signals given by U.S. GDP data which have given false hope, leading [to] policy mistakes.” Overly rosy GDP estimates back in early 2009 led the White House to misjudge the severity of the downturn and fail to plan for the worst. Last month’s encouraging GDP numbers seemed to ease politicians out of crisis mode, even though it’s well-known that these numbers go through several revisions. It shouldn’t have been a total surprise when we learned on Tuesday that GDP growth was less robust than thought. Yet such revisions often do come as a shock.
That’s why some experts suggest we need to completely rethink the way we treat economic data. Charles Manski, an economist at Northwestern University, argues that forecasters and agencies need to be much more honest about the uncertainty inherent in their projections. (See his longer NBER paper on the topic here.) Too often, Manski says, relying on nice, neat estimates can lead policymakers astray.
Take the Congressional Budget Office. Whenever the CBO scores a bill, it provides a single and definitive-sounding estimate of the policy’s budget impact. Last year, CBO director Douglas Elmendorf told Congress that the Affordable Care Act would reduce deficits by $138 billion between 2010 and 2019. There was no range of estimates. No margin of error. The forecast was $138 billion, and that was that. And yet, as economist Alan Auerbach has noted about the difficulties in scoring the effects of tax changes on revenue, “in many instances, the uncertainty is so great that one honestly could report a number either twice or half the size of the estimate actually reported.”
And so, last December, Manski visited the CBO to argue that the agency should be more open about its predictions. Perhaps it should include a range of possible outcomes. Or provide probabilistic forecasts. Or offer a “low” score and a “high” score of a bill. But Manski’s ideas were met with skepticism. As he told me in a phone interview, CBO officials argued that emphasizing uncertainty would only irk members of Congress. Lawmakers want answers, not fuzziness. Manski recounts an old story about President Lyndon B. Johnson browbeating an economic adviser who was trying to cavil and hedge. “Ranges are for cattle,” Johnson exclaimed. “Give me a number.”
Curiously, Manski notes, this aversion to uncertainty doesn’t seem to be some irrevocable feature of human psychology. In a 2004 paper for the journal Econometrica, Manski found that most Americans felt perfectly comfortable making probabilistic forecasts about their own lives: what their income would be in the future, what Social Security benefits they could expect to receive.
As it happens, not every country tends to be so definitive in its economic statements. Manski notes that in the United Kingdom, the Bank of England publishes its inflation forecasts as a fan chart, encompassing a wide range of outcomes. British government agencies have to issue upper and lower bounds for the costs of their various proposals. Then again, even if U.S. agencies did more of this sort of hedging, would anyone notice? After all, the Federal Reserve’s survey of professional forecasters already publishes a wide range of quarterly economic estimates, but that uncertainty rarely makes it into media stories.
“Here in the U.S., we just seem to want one hard number,” Manski sighs. “Maybe it’s because we just want to figure out who the winners and losers are with every policy. But a lot of times you can’t figure these things out, and people don’t want to face up to that fact.”