Earlier this week, I highlighted what I believe to be “one of the most important budget graphs you’ll ever see” – a graph that shows the degree of uncertainty around predictions of future government deficits (and/or surpluses). My point: “we make a big, portentous policy mistake when we give long-range (anything over a few years) budget forecasts too much credibility.”
I was encouraged by the positive response to the post. Apparently, there’s more appreciation for forecast uncertainty than I thought, a finding that very much pleases my statistical heart.
I also heard from Charles Manski, an economics professor at Northwestern University who’s been beating this particular drum for a long time and whose work I’ve long admired. He was kind enough to answer some follow-up questions.
Q: Why do you think economic statistics and budget forecasts should be presented with explicit error bands around the central forecast?
A broad reason is that governments, firms and individuals use official statistics and forecasts when making numerous decisions. In the absence of error bands, they may misinterpret the information that the numbers provide. Naive users may incorrectly take central forecasts at face value. Sophisticated ones may be aware that the forecasts have errors but not have a sense of how large these errors may be.
As a result, the quality of decisions may suffer. For example, the Federal Reserve may mis-evaluate the status of the economy and consequently set inappropriate monetary policy. Communication of uncertainty would enable decision makers to better understand the information actually available regarding key economic variables.
Q: You’ve noted that other countries present their forecasts with explicit uncertainty. Any ideas as to why they do and we don’t?
As an economist, I should be wary of citing “political culture” as a reason, but I really think this is the heart of the matter. I often give talks in Washington and in London. I have found that British government personnel and policy analysts readily accept the importance of transparent expression of uncertainty when presenting estimates and forecasts. Moreover, they act on it. The Bank of England fan charts are a great illustration (see below).
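The mechanics behind a fan chart are simple to sketch. The following is a minimal illustration, not the Bank of England's actual method (which fits an asymmetric two-piece normal to its past forecast errors): it assumes Gaussian forecast errors whose standard deviation grows with the square root of the horizon, a textbook random-walk simplification, and computes symmetric bands at a few coverage levels.

```python
from statistics import NormalDist
from math import sqrt

def fan_chart_bands(central, sd_one_step, coverages=(0.30, 0.60, 0.90)):
    """Widening uncertainty bands around a central forecast path.

    central: point forecasts for horizons 1, 2, ... steps ahead.
    sd_one_step: std. deviation of the one-step-ahead forecast error.
    Assumes Gaussian errors with sd growing like sqrt(horizon) --
    a hypothetical simplification, not any agency's actual model.
    Returns {coverage: [(lower, upper), ...]} with one pair per horizon.
    """
    bands = {}
    for cov in coverages:
        # Half-width of a central interval with the given coverage,
        # in units of the forecast-error standard deviation.
        z = NormalDist().inv_cdf(0.5 + cov / 2)
        bands[cov] = [
            (c - z * sd_one_step * sqrt(h), c + z * sd_one_step * sqrt(h))
            for h, c in enumerate(central, start=1)
        ]
    return bands

# Example: a flat 2% growth forecast, 0.5-point one-step error sd.
bands = fan_chart_bands([2.0, 2.0, 2.0, 2.0], sd_one_step=0.5)
lo, hi = bands[0.90][3]  # 90% band at the 4-step horizon
```

The point the chart makes visually falls out of the arithmetic: the 90% band at the four-step horizon is twice as wide as at the first step, which is exactly the information a bare central forecast suppresses.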
In D.C., on the other hand, I am regularly told that I am right in principle but that expression of uncertainty is politically a nonstarter. For example, I have recommended that the CBO (Congressional Budget Office) present upper and lower bounds as well as a central forecast when it scores legislation. The consistent response has been that this is not feasible because Congress does not want to hear about uncertainty.
Q: What difference do you think it would make if our statistical agencies and CBO took your (and my) advice?
Viewing the matter as an economist, I think that explicit expression of uncertainty should affect many aspects of policy formation. One reason is that it makes clear the value of flexibility, of not committing to a long-run policy prematurely.

Another reason, which I have studied in my own research, is that it generates interest in policy diversification. Everyone is aware that diversification may be a reasonable private strategy when forming a financial portfolio. With no uncertainty, there would be no reason to diversify. I have argued that the same thinking should apply to public policy (see my book, Public Policy in an Uncertain World, Chapter 5). For example, we may want to diversify controversial aspects of educational and criminal justice policy in the presence of uncertainty. To cite just one possibility, given the present uncertainty about the effectiveness of alternative urban policing strategies in reducing crime and enhancing community approval, we may want to have strategies vary across cities. This connects with American federalism and the idea that the states are the “laboratories of democracy.”
Viewing the matter as a citizen, I hope that honest expression of uncertainty might temper the presently extreme tone of our political discourse. We regularly observe fruitless debates between persons or groups who hold what I have called “dueling certitudes.” Each side expresses certainty that it is right and the other side is dead wrong. If we were to face up to uncertainty, we might admit that neither side is really that certain and, hence, possibly find some common ground for constructive discussion.
Q: Generally speaking, do you think our estimates have a systematic bias one way or the other? If not — if errors are random — is it worth it to be explicit about our uncertainty?
At present we do not know whether the estimates have systematic biases in one direction or another. I think it is important to be explicit about uncertainty even if the estimates have no systematic bias. Uncertainty matters per se, even if estimates are unbiased. The ideas of policy flexibility and diversification, which I mentioned before, give two reasons.
Q: Is there a downside risk here? For example, suppose people realized that 5-year and (even more so) 10-year budget estimates are highly uncertain – might that undermine people’s faith in the numbers in a way that might do more harm than good?
I often hear this fear expressed when I give talks in Washington. I have heard it from staff at the Census Bureau and the Bureau of Labor Statistics when I have argued for presenting error bounds on their regular releases of GDP, income and employment statistics (see my recent article in the Journal of Economic Literature). And I have heard it from staff at the CBO when I have argued for expression of uncertainty when scoring legislation. I can’t prove that the fear is misplaced, but I worry that it ignores an opposing danger. That is, if government agencies continue to suppress uncertainty, they will lose credibility when people eventually wake up and recognize that official statistics and forecasts may have large errors.