From the oil spill to the financial crisis, why we don't plan for the worst

By Richard A. Posner
Sunday, June 6, 2010; B01

The BP oil spill in the Gulf of Mexico is the latest of several recent disastrous events for which the country, or the world, was unprepared. Setting aside terrorist attacks, where the element of surprise is part of the plan, that still leaves the Indian Ocean tsunami of 2004, Hurricane Katrina in 2005, the global economic crisis that began in 2008 (and was aggravated by Greece's recent financial collapse) and the earthquake in Haiti in January.

In all these cases, observers recognized the existence of catastrophic risk but deemed it to be small. Many other risks like this are lying in wait, whether a lethal flu epidemic, widespread extinctions, nuclear accidents, abrupt global warming that causes a sudden and catastrophic rise in sea levels, or a collision with an asteroid.

Why are we so ill prepared for these disasters? It helps to consider an almost-forgotten case in which risks were identified, planned for and averted: the Y2K threat (or "millennium bug") of 1999. As the turn of the century approached, many feared that computers throughout the world would fail when the two-digit dates in their operating systems suddenly flipped from 99 to 00. The risk of disaster probably was quite small, but the fact that it had a specific and known date made it irrational to postpone any remedies -- it was act now or not at all.

Such certainty about timing is rare; indeed, a key obstacle to taking preventive measures against unlikely disasters is precisely that they are unlikely to occur in the near future.

Of course, if the consequences of the disaster would be very grave, the fact that the risk is low is hardly a good reason to ignore it. But there is a natural tendency to postpone preventive action against dangers that are likely to occur at some uncertain point in the future ("sufficient unto the day is the evil thereof," as the Bible says), especially if prevention is expensive, and especially because there is so much else to do in the here and now.

Our tendency to procrastinate is aggravated by three additional circumstances: when fixing things after the fact seems like a feasible alternative to preventing disaster in the first place; when the people responsible have a short time horizon; and when the risk is uncertain in the sense that no objective probability can be attached to it.

All these forces came together to permit the economic crisis, despite abundant warnings from reputable sources, including economists and financial journalists. Risky financial practices were highly profitable, and giving them up would have been costly to financial firms and their executives and shareholders. The Federal Reserve and most academic economists believed incorrectly that in the event of a crash, remedial measures -- such as cutting interest rates -- would be enough to jump-start the economy. Meanwhile, depending on how they were compensated, many financial executives had a limited horizon; they were not worried about a collapse years down the road because they expected to be securely wealthy by then. Similarly, elected officials have short time horizons; with the risk of a financial collapse believed to be low, and therefore a meltdown unlikely in the immediate future, they had little incentive to push for costly preventive measures.

This in turn discouraged the appointed officials of the Federal Reserve and other regulatory agencies from taking such measures. "We've never had a decline in housing prices on a nationwide basis," Ben Bernanke, then chairman of President George W. Bush's Council of Economic Advisers, said in 2005. It happened the next year.

Finally, with no reliable probability estimate of a financial collapse available, it seemed natural and perhaps even sensible to wait and see, hoping that with the passage of time, at least some of the uncertainty about risks to the economy would dissipate.

The BP oil leak reveals a similar pattern, though not an identical one. One difference is that the companies involved must have known that in the event of an accident on a deepwater rig, prompt and effective remedies for an oil leak would be unlikely -- meaning that there was no reliable alternative to preventing an accident. But the risk of such an accident could not be quantified, and it was believed to be low because there had been few serious accidents in deepwater drilling. (No one knew how low; the claim by BP chief executive Tony Hayward that the chance of such an accident was "one in a million" was simply a shorthand way of saying that the company assumed the risk was very small.)

But other forces were similar in the leak and the financial crisis. If deepwater oil drilling had been forbidden or greatly curtailed, the sacrifice of corporate profits and of consumer welfare (which is dependent on low gasoline prices) would have been great. The regulators who could have insisted on greater preventive efforts were afflicted with the usual short horizons of government officials. Elected representatives did not want to shut down deepwater drilling over an uncertain risk of a disastrous spill, and this reluctance doubtless influenced the response (or lack of it) of the civil servants who do the regulating.

The horizon of the private actors was foreshortened as well. Stockholders often don't worry about the risks taken by the firms in which they invest, because diversified stock holdings can help insulate them. Managers worry more, but they are not personally liable for the debts of the firms they oversee and, more important, the danger to their own livelihood posed by seemingly small threats is not enough to discourage risk-taking. It seems that no one has much incentive to adopt or even call for safeguards against low-probability, but potentially catastrophic, disasters.

Two final problems illuminate our vulnerability to such risks. First, it is very hard for anyone to be rewarded for preventing a low-probability disaster. Had the Federal Reserve raised interest rates in the early 2000s rather than lowering them, it might have averted the financial collapse in 2008 and the ensuing global economic crisis. But we wouldn't have known that. All that people would have seen was a recession brought on by high interest rates. Officials bear the political costs of preventive measures but do not receive the rewards.

The second problem is that there are so many risks of disaster that they can't all be addressed without bankrupting the world many times over. In fact, they can't even be anticipated. In my 2004 book "Catastrophe: Risk and Response," I discussed a number of disaster possibilities. Yet I did not consider volcanic eruptions, earthquakes or financial bubbles, simply because none of those seemed likely to precipitate catastrophes.

In principle, all disaster possibilities should be ranked by their "expected cost" -- roughly speaking, by multiplying the dollar consequences of the disaster if it occurs by the probability that it will occur. If Disaster A would cause a loss of $1 trillion, and the annual probability of it occurring is 1 percent, then its expected annual cost is $10 billion. (That means we wouldn't want to spend more than that each year to prevent it.) And suppose Disaster B would exact $100 billion in damage, and its annual probability of occurring is 5 percent. That is a higher probability, but the expected cost -- $5 billion -- is only half as great, so we should spend less trying to prevent it.
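To make the comparison concrete, here is a minimal sketch of that arithmetic in Python, using the hypothetical figures for Disasters A and B above (the numbers are illustrative assumptions, not real risk estimates):

```python
# Expected-cost ranking of hypothetical disasters, as described above.
# All figures are illustrative, not real risk estimates.

disasters = {
    "A": {"loss_dollars": 1_000_000_000_000, "annual_probability": 0.01},  # $1 trillion, 1%
    "B": {"loss_dollars": 100_000_000_000, "annual_probability": 0.05},    # $100 billion, 5%
}

def expected_annual_cost(loss_dollars: float, annual_probability: float) -> float:
    """Expected annual cost = probability of occurrence x loss if it occurs."""
    return loss_dollars * annual_probability

# Rank from highest to lowest expected annual cost -- the order in which,
# in principle, prevention dollars should be allocated.
ranked = sorted(disasters.items(),
                key=lambda item: expected_annual_cost(**item[1]),
                reverse=True)

for name, params in ranked:
    cost = expected_annual_cost(**params)
    print(f"Disaster {name}: expected annual cost ${cost / 1e9:.0f} billion")
# Disaster A: expected annual cost $10 billion
# Disaster B: expected annual cost $5 billion
```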

It would be nice to be able to draw up a complete list of disaster possibilities, rank them by their expected cost, decide how much we want to spend on preventing each one and proceed down the list until the total cost of prevention equals the total expected cost averted. But that isn't feasible. Many of the probabilities are unknown. The consequences are unknown. The costs of prevention and remediation are unknown. And anyway, governments won't focus on remote possibilities, however ominous in expected-cost terms.

A politician who proposed a campaign to prevent asteroid collisions with Earth, for example, would be ridiculed and probably voted out of office. Yet planetary scientist John S. Lewis has estimated that there is a 1 percent chance per millennium that an asteroid one kilometer or more in diameter will hit the Earth, and that such a hit would probably kill on the order of 1 billion people. That works out to 10,000 expected deaths per year, far exceeding the annual death toll from airplane crashes.
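Lewis's figures make the expected-cost logic easy to check; here is a quick sketch of the arithmetic, using only the numbers quoted above (assumptions from the estimate, not a risk model):

```python
# Checking the asteroid arithmetic from the figures attributed to John S. Lewis.

probability_per_millennium = 0.01   # 1 percent chance of a 1 km+ impact
deaths_if_it_hits = 1_000_000_000   # on the order of 1 billion deaths
years_per_millennium = 1_000

expected_deaths_per_year = (
    probability_per_millennium * deaths_if_it_hits / years_per_millennium
)
print(expected_deaths_per_year)  # 10000.0 -- about 10,000 deaths per year
```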

There are many stubborn obstacles to effective disaster prevention, and I do not expect them to be overcome. We must brace for further crises, magnified by increases in world population (meaning more potential victims) and by the relentless march of technology, whether in oil extraction or financial speculation.

After all, it's only in the movies that we send deep-sea oil drillers to blow up asteroids -- and watch them succeed.

Richard A. Posner is a judge on the U.S. Court of Appeals for the 7th Circuit in Chicago and a senior lecturer at the University of Chicago Law School.
