
What Percent Is 'Slam Dunk'?

Give Us Odds on Those Estimates

By Michael Schrage
Sunday, February 20, 2005; Page B01

The controversial decision to reorganize America's sprawling intelligence establishment has set in motion the most sweeping bureaucratic change for sensors, spies and satellites since the end of World War II. Unfortunately, the odds are excellent that this multibillion-dollar structural shuffle -- capped last week by the appointment of veteran diplomat John Negroponte as the new national intelligence director -- will do little to improve the quality of intelligence analysis for this country.

Why? Because America's intelligence community doesn't like odds. Yet the simplest and most cost-effective innovation that community could adopt would be to embrace them. It's time to require national security analysts to assign numerical probabilities to their professional estimates and assessments as both a matter of rigor and of record. Policymakers can't weigh the risks associated with their decisions if they can't see how confident analysts are in the evidence and conclusions used to justify those decisions. The notion of imposing intelligence accountability without intelligent counting -- without numbers -- is a fool's errand.

[Odds table: CIA analyst Sherman Kent's system for assigning numerical odds to the specific phrases analysts used in their intelligence estimates.]


World-class investment banks, insurance companies and public health practitioners are increasingly bringing greater quantitative sophistication to their risk analyses. For reasons having chiefly to do with custom, culture and practice -- not competence or cost -- the CIA, Defense Intelligence Agency, FBI and the federal government's other analytic agencies have shied away from simple mathematical tools that would let them better weigh conflicting evidence and data. That bureaucratic shortsightedness undermines our ability to even see the dots, let alone connect them.

Consider the National Intelligence Estimates, the Presidential Daily Briefings or many of the critical classified and unclassified analyses flowing through Washington's national security establishment. Key estimates and analytic insights rarely come with explicit probabilities attached. The nation's most knowledgeable experts on the Middle East, counterterrorism, nuclear proliferation, etc., are seldom asked to quantify, in writing, precisely how much confidence they have in their evidence or their conclusions. Your personal financial planner does a better job, on average, of quantitative risk assessment for your investments than the typical intelligence analyst does for our national security.

For example, when the State Department's Bureau of Intelligence and Research "predicted" terrorism and insurgency in the wake of the invasion of Iraq, its forecasts avoided explicit probabilities. But precisely how confident were the bureau's experts in their assessments of the breadth and intensity of the projected opposition? Did they believe that there was a 60 percent or a 40 percent chance that Sunni Triangle violence would spread north? Did they foresee a 20 percent or a 75 percent chance that car bombings of Shiite mosques would provoke widespread retaliation against Sunnis?

The cultural bias against numbers deprives congressional and White House decision-makers of essential metrics to weigh the analytical community's own credibility. Naming a national intelligence director doesn't change that.

More than 40 years ago, Sherman Kent -- the godfather of the vital National Intelligence Estimates and the man for whom the CIA's analyst school is named -- penned a classified memo attempting to describe how vague words like "probable" and "serious probability" could be translated into meaningful numbers. His "Words of Estimative Probability" proved a rhetorically awkward and ultimately futile exercise in encouraging more disciplined discussions of probability in the analytic community.
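
Kent's table assigned ranges of odds to phrases like these. Rendered in a few lines of Python (my illustration, not his; the ranges only approximate those in his published essay), the idea is almost embarrassingly simple:

    # An illustrative rendering of Kent's word-to-odds table. The
    # ranges approximate those in "Words of Estimative Probability"
    # and should be read as a sketch, not an official standard.
    KENT_ODDS = {
        "certain":              (1.00, 1.00),
        "almost certain":       (0.87, 0.99),
        "probable":             (0.63, 0.87),
        "chances about even":   (0.40, 0.60),
        "probably not":         (0.20, 0.40),
        "almost certainly not": (0.02, 0.12),
        "impossible":           (0.00, 0.00),
    }

    def odds_for(phrase):
        # Translate an estimative phrase into a numeric range, or
        # None when the phrase never had an agreed-upon meaning.
        return KENT_ODDS.get(phrase.lower())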

Passive-aggressive organizational resistance to quantitative rigor continues to this day. Former acting CIA director and longtime analyst John McLaughlin tried to promote greater internal efforts at assigning probabilities to intelligence assessments during the 1990s, but they never took. Intelligence analysts "would rather use words than numbers to describe how confident we are in our analysis," a senior CIA officer who's served for more than 20 years told me. Moreover, "most consumers of intelligence aren't particularly sophisticated when it comes to probabilistic analysis. They like words and pictures, too. My experience is that [they] prefer briefings that don't center on numerical calculation. That's not to say we can't do it, but there's really not that much demand for it."

That doesn't mean it shouldn't happen. Fortunately, there's no need for a dramatic revolution; subtler measures will do. Here's a suggestion: The simplest, easiest, cheapest and most powerful way to transform the quality of intelligence would be to insist that analysts attach two little numbers to every report they file.

The first number would state their confidence in the quality of the evidence they've used for their analysis: 0.1 would be the lowest level of personal/professional confidence; 1.0 would be -- former CIA director George Tenet should pardon the expression -- a "slam dunk," an absolute certainty.

The second number would represent the analyst's own confidence in his or her conclusions. Is the analyst 0.5 -- the "courage of a coin toss" confident -- or a bolder 0.75 confident in his or her analysis? Or is the evidence and environment so befogged with uncertainty that the best analysts can offer the National Security Council is a 0.3 level of confidence?
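
In software terms, the whole proposal fits in a dozen lines. Here is a minimal sketch in Python (the field names and validation are mine; nothing like this is mandated anywhere) of a report annotated with both numbers:

    from dataclasses import dataclass

    # A hypothetical annotated assessment: one number for the quality
    # of the evidence, one for confidence in the conclusion, both on
    # the 0.1 (lowest) to 1.0 ("slam dunk") scale described above.
    @dataclass
    class Assessment:
        conclusion: str
        evidence_confidence: float    # 0.1 = weakest sourcing, 1.0 = certain
        conclusion_confidence: float  # 0.5 = "courage of a coin toss"

        def __post_init__(self):
            for value in (self.evidence_confidence, self.conclusion_confidence):
                if not 0.1 <= value <= 1.0:
                    raise ValueError("confidence must be between 0.1 and 1.0")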

These two little numbers would provoke intelligence analysts and intelligence consumers alike to think extra hard about analytical quality, creativity and accountability. Policymakers could swiftly determine where their analysts had both the greatest -- and the least -- confidence in their data and conclusions. Decision-makers could quickly assess where "high confidence" interpretations were based on "low-confidence" evidence and vice versa. That's important information for decision-makers to have. Then their ability to push, prod and poke the intelligence community would be firmly grounded in their own perception of the strength and weakness of the work coming out of it.
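
Spotting those mismatches could even be automated. A trivial filter would do it (the 0.7 and 0.5 cutoffs here are invented for illustration):

    # Hypothetical screen: surface reports whose strong conclusions
    # rest on weak evidence. Each report is a (title, evidence
    # confidence, conclusion confidence) triple.
    def flag_mismatches(reports, strong=0.7, weak=0.5):
        return [(title, ev, concl) for title, ev, concl in reports
                if concl >= strong and ev <= weak]

    reports = [
        ("Report A", 0.3, 0.9),   # bold conclusion, thin sourcing
        ("Report B", 0.8, 0.6),   # solid sourcing, modest conclusion
    ]
    print(flag_mismatches(reports))  # [('Report A', 0.3, 0.9)]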

Seeing the areas in which top analysts consistently rate their confidence in evidence below a 0.5 might evoke new thinking from the covert operations and "sigint" crowds in Langley and Fort Meade as to what data they should be procuring. More significantly, these two numbers would build a record -- an ongoing audit trail of probabilities and odds -- to revisit and review.
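
Once that audit trail exists, reviewing it is textbook statistics. One standard yardstick, offered here as my illustration rather than the community's practice, is the Brier score: the average squared gap between stated probabilities and actual outcomes, where 0 is perfect foresight and 1 is perfect wrongness.

    # Score a track record of (stated probability, outcome) pairs,
    # where outcome is 1 if the assessed event occurred and 0 if it
    # did not. Lower scores mean the numbers tracked reality.
    def brier_score(track_record):
        return sum((p - outcome) ** 2
                   for p, outcome in track_record) / len(track_record)

    analyst_a = [(0.9, 1), (0.8, 1), (0.3, 0)]   # well calibrated
    analyst_b = [(0.9, 0), (0.8, 0), (0.3, 1)]   # badly calibrated
    print(round(brier_score(analyst_a), 3))      # 0.047
    print(round(brier_score(analyst_b), 3))      # 0.647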

Pushing analysts to attach weights to their intelligence assessments creates a less ambiguous standard of accountability, which may explain why the analytical community's consensus is to avoid disclosing its odds. But House and Senate intelligence committees seeking greater accountability and better quality from the newly reorganized intelligence bureaucracies should insist that analyses brought in for congressional review -- classified or not, publicly disclosed or not -- include confidence rankings.

Yes, analysts and their agencies will attempt to "game" the numbers. Yes, policymakers will apply political pressure on analysts and agencies to alter their declared odds. All risk-assessment methods are corruptible. But these mechanisms can self-correct. A track record filled with analyses that hover uselessly below 0.5, or one offering too few assessments of 0.7 and higher that later prove accurate, tends to create its own pressure for fundamental change. Better accountability promotes better analysis. And better analysis comes from the explicit explanations and conversations around probability and risk.

But even greater analytical accountability isn't good enough. A growing number of fields, from medical diagnostics to Internet spam filtering, increasingly rely upon Bayesian analysis -- a branch of probability theory that updates the likelihood of a hypothesis as new evidence arrives -- as a powerful tool to weigh that evidence. Bruce Blair, director of the non-partisan Center for Defense Information, argues convincingly on the CDI Web site that Bayesian analysis goes a long way toward explaining the seemingly flawed risk assessments made by the intelligence community during the run-up to 9/11 and the Iraq war. Nonetheless, although the CIA is familiar with Bayesian analysis and its computational cousins, these techniques haven't seeped into the national security community's analytical mainstream.
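
The arithmetic of a Bayesian update is short enough to show in full. In this invented example, an analyst starts at 10 percent belief in a hypothesis, then learns of evidence four times likelier to appear if the hypothesis is true (60 percent) than if it is false (15 percent):

    # A minimal Bayesian update with made-up numbers:
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1 - prior))

    posterior = bayes_update(prior=0.10, p_e_given_h=0.60,
                             p_e_given_not_h=0.15)
    print(round(posterior, 2))  # 0.31: belief triples, but is hardly settled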

Medical doctors and Wall Street traders today do a better job of challenging themselves to explore the growing diversity of analytic options. Even Major League Baseball teams, as Michael Lewis documents in his best-selling book "Moneyball," are grasping that data-driven analyses can lead to better talent acquisition and management decisions. Why should professional baseball executives be doing more innovative statistical analyses than professional intelligence analysts?

Mathematics is not a substitute for judgment, nor do equations define analysis. But analyzing risk without probabilities is akin to discussing art without colors. You can do it, but don't be surprised at the sterility of the results. Unfortunately, with evidence I'd weight at 0.8 and a conclusion to which I'd assign a confidence rating of 1.0, I predict that until the intelligence community overcomes its reluctance to go the probability route, it will continue to compromise its ability to adequately assess national security risks and threats.

Author's e-mail: schrage@media.mit.edu

Michael Schrage is a senior adviser to MIT's Security Studies program. He has participated in non-classified CIA workshops on intelligence analysis.

