Suppose you are at a playoff game at RFK. Every seat is filled, and Joe Theismann is leading his team on a touchdown drive. Two minutes left, 'Skins trail 21-16, third and eight. There's the snap, Theismann's back . . . back -- and a fully loaded jumbo jet crashes into the stadium roof.
Okay, maybe you're not a football fan. Then suppose, one night when you're sleeping, a freight train carrying explosives, nerve gas or nuclear fuel derails near your house, spreading death and destruction over a 20-mile radius.
Maybe you don't live near the tracks. Okay. Suppose you are attacked by a moose.
But seriously, folks. William D. Rowe had thought long and hard about the first two possibilities when he unexpectedly came face to face with the third.
Rowe, who among other things is a six-mile-a-day jogger, was working out in Stockholm's Haga Park one morning last year when -- you guessed it. He survived the encounter and afterwards was intrigued enough to get the figures on collisions with moose. These showed that car-moose collisions were the single largest cause of auto accident deaths in Sweden. Another hazard identified. That satisfied Rowe, who is director of the Institute of Risk Analysis at American University. Danger, you might say, is his business.
Rowe, a former official of the Environmental Protection Agency, and his institute, which was born last January, are visible symbols of what may be the newest profession spawned by the post-industrial age. Technological civilization has made death bets with a number of things -- liquefied natural gas, nuclear power, reentering space vehicles, carcinogenic and toxic chemicals, low-level radiation, commercial aviation and artificial sweeteners, to name only a few. Risk assessors tell us what our chances are; in the race between innovation and catastrophe, risk assessors are death's bookies.
Insurance companies have been doing something similar, of course, since the dawn of the capitalist era. What makes Rowe and his colleagues in the burgeoning "risk community" different is that they are laying odds on events like reactor meltdowns, Skylab disasters and epidemics of environmentally caused cancer or manmade disease. None of these things has ever happened -- so there are no statistics to study. Risk assessors handicap the horses of the apocalypse with no statistical track record. And many are potentially so awful that the general public, when it thinks about them at all, considers even one occurrence unacceptable.
Risk assessors are beginning to play an important role in the technological decisions that are shaping our society. Their services are much in demand by regulatory agencies here in Washington -- and by industry groups that are trying to influence those agencies.
A good death bookie must know a dizzying blend of academic disciplines -- actuarial science, probability and statistics, geography, epidemiology, reliability engineering and abstruse mathematical techniques like game theory and decision analysis.
Using these techniques, the risk assessor looks at a specific disaster and -- just like a bookie at the track -- assigns numerical odds against it. These numbers are the core of risk assessment. "It's easy to talk about risk in a quaint way, okay?" says Rowe. "But as far as I'm concerned, that's philosophy. Until you can put numbers to it, it's speculation."
Sometimes a risk assessor is content to announce the odds and leave it at that. But often, he or she goes on to balance the odds against the possible benefits of a risky activity. At this point, the art becomes a crude pseudopod of mathematical economics, and is called "risk-benefit analysis." One of the key questions in this new area is how to assign a dollar value to a human life lost; it's a problem some assessors claim to have solved.
Rowe, a balding 49-year-old, has by turns been a solid-state physicist, a systems engineer, an environmental systems analyst and a government regulator. In one early job he studied the reliability of missile launching controls. "An inadvertent launch will spoil your day," says a sign on his wall at AU's McCabe Hall.
As deputy director of radiation programs for EPA, Rowe stumbled across his rendezvous with risk analysis. Shortly after going to the agency, he remembers meeting with MIT engineer Norman Rasmussen to discuss a project Rasmussen had taken on for the Atomic Energy Commission. The two men had once been neighbors in Sudbury, Mass. Now Rasmussen was beginning a mammoth study of reactor safety.
Rowe says he was struck by the lack of a generally accepted definition of the word "risk," and he decided to work up a one- or two- page document he could mimeograph for use at the office. "Fifty pages later I still haven't cracked it," he says. Eventually, the work turned into a book, An Anatomy of Risk, which was published in 1977. After it came out, Rowe says, "I found myself spending more time at risk conferences and risk meetings than at EPA." So he left, and the institute -- which now has nine full-time members from the AU faculty -- was born.
Rowe estimates there are now 50 to 75 academics who have moved from other fields into full-time study of risks. And given the number who use risk-assessment techniques in government, he says, there may be between 2,000 and 3,000 disaster bookies at work full- or part-time. But Rowe denies that he and others like him are the first generation of a new profession. "We're not trying to define a profession as much as to sell products," he says.
And his products, thus far, have sold. Among his clients are the American Industrial Health Council, the Paris-based Organization for Economic Cooperation and Development, the National Science Foundation and the Rockwell-Hanford Corporation.
Much of his work for industry does not involve discovering new risks, but advising his clients how to explain risks that have already been identified. Like many others in the field, Rowe uses a "bench-mark" explanation to put risks in what he considers proper perspective.
The first two examples in this article -- jet hitting stadium, and railroad catastrophe -- are examples of "bench-marks." These are things that could happen, but are unlikely, and that people by and large don't worry about.
Rowe believes we should compare new risks with these benchmarks; if the odds on a new disaster are less than on one of the benchmarks, then it too is probably something we don't need to worry about.
Building on this idea, Rowe has developed his own definition of that ominous term, "acceptable risk."
"A risk is acceptable," he says, "when those involved are no longer apprehensive about it."
Though it seems idiotic at first, Rowe's definition is actually quite sophisticated. It allows for human psychology, which accepts some dangers -- the very real hazards of driving to the beach, for example -- quite blithely, while stubbornly worrying about more distant or less likely risks, such as nuclear meltdown or cancer.
Risk assessors study the psychology of risk -- but their own psychology is at least as interesting. Nietzsche warned against gazing into the abyss too long, lest the abyss gaze back. But most risk assessors seem to be a fairly relaxed bunch -- as if, having studied the ways in which we can be hustled out of this life, they have finally come to terms with the idea that none of us will make it out alive.
Rowe himself remembers one of his first experiences with risk psychology. His father, he said, was a hasty, impatient driver, who habitually roared down the road with one foot on the accelerator and one hand on the horn. But one day he decided that his way of driving was too dangerous and told the family he would slow down. The next day he was driving behind a slow, cautious driver -- the very sort he had decided to become himself. As they neared an underpass, a truck jackknifed on the road above and fell on the mellow motorist, crushing him to death. Rowe's father decided to go back to driving like a madman.
Another risk analyst, Roger Kasperson of Clark University, said that studying risk had made him very worried about where he lets his children bicycle. But he also recalled that a visiting Swedish risk analyst indignantly rejected a serving of charcoal-broiled steak which Kasperson offered him, and cited the cancer risk of charcoal-broiling.
Like much else in this young but complex and quarrelsome field, the definition of "acceptable risk" is the subject of intense debate. The spectrum runs from the "number-crunchers," who believe that probability figures should be the primary index of what risks are acceptable, to "common-law risk" advocates, who think society will have to make decisions on a case-by-case basis and then find the theory afterwards by looking at what it has done, to social psychologists who are using experiments and questionnaires to try to determine what people really think of risk -- and then build a definition on that.
But whatever definition is used, risks are being accepted or rejected every day in Washington. And some parts of the federal regulatory establishment have taken to mathematical risk assessment the way tourists take to slot machines. They are increasingly using the techniques to decide what you and I can safely eat, wear, ride in, spray on our hollyhocks and use to run our electric toothbrushes. If it is reliable, risk assessment may mean better health and a stronger economy for us all; if not, there are measurable odds that risk assessment may affect the way you die. One example is a risk assessment prepared by McDonnell-Douglas Aircraft Corp. on the likelihood of a DC-10 accident like the one that killed 273 people in Chicago last May. Less than one chance in a billion, the company concluded.
"It's not a fad," Rowe says loyally of his field. "But it has some elements of faddism about it."
Risk assessment techniques have been used to tackle the most mundane problems -- and the most farfetched. The Defense Department uses them to decide what the danger is that nuclear weapons will go off in their silos -- or that they won't go off if they are launched against the Soviet Union. The State Department has used them to look at the dangers of killing off whales, and of building a new canal across the Panamanian isthmus. An adviser to the Canadian Atomic Energy Board computed figures for what he said were the risks of solar energy and wind-generated power -- nuclear power is safer, he decided. And one enterprising researcher in Massachusetts has begun a study into the total risks of television -- not only dangers from low-level radiation, but the risks of what he calls "toxic information" floating through our mental environment, leading us to sit more, read less, buy needless products and commit random acts of violence.
Closer to home, the Environmental Protection Agency has no fewer than three offices preparing mathematical risk assessments -- one to assess all carcinogens, another to look at pesticides and a third looking at the effects of low-level radiation. The Food and Drug Administration has moved more slowly in the area, but at the urging of recently departed Commissioner Donald Kennedy, staffers began using risk assessments two years ago to help regulate such things as toxic chemicals in fish and animals sold for food.
In fact, EPA, FDA, the Consumer Product Safety Commission and the Occupational Safety and Health Administration produced a joint report on the use of risk assessment to evaluate cancer-causing chemicals. The report endorsed the techniques; but it also said that "current methodologies, which permit only crude estimates of human risk, are designed to avoid understatement of the risk. It must be recognized, however, that there may be circumstances in which this cannot be guaranteed . . . Because of this, risk assessment should be used with caution in the regulatory process."
At the same time this caveat was being issued, the White House Office of Science and Technology Policy was urging the agencies to step up their risk studies. That memorandum proposed that a central office of cancer risk assessment be set up as part of the National Toxicology Program in the Department of Health, Education and Welfare.
The growing vogue for risk assessment alarmed Arthur C. Upton, director of the National Cancer Institute. In April he wrote a memorandum to the FDA's Kennedy warning that risk assessment is still too primitive to be used as a "primary basis" for regulatory decisions.
As an example, he cited the various mathematical techniques used to assess the cancer risk of saccharin. There is no question that saccharin causes cancer to some extent. The problem lies in translating results of animal studies -- which involve high doses of the sweetener -- into figures that apply to the ordinary person who drinks one diet drink a day. The analyst can choose one of 10 mathematical techniques to project the risk, Upton wrote. He might, for example, assume that the entire present U.S. population would drink one diet soft drink a day for the rest of their lives. Depending on which mathematical technique he chose, he might project that the result would be a mere one-fifth of one case of cancer over the next 70 years -- or a terrifying 1,144,000 cases in the same period of time.
All risk assessment based on animal studies has the same potential for error. And "each such error could be a catastrophe," Upton concluded.
Upton is not the only skeptic about the worth of mathematical risk assessment in regulating chemical dangers. Although it was a party to the joint cancer assessment report, OSHA has strenuously resisted making the techniques a central part of its regulation of chemical hazards in industrial workplaces. In February last year the agency issued a stringent new rule limiting the handling of benzene in industry. The colorless chemical -- used for making plastics, disinfectants and pharmaceuticals -- has long been recognized as a cause of leukemia. EPA's carcinogen assessment group, for example, concluded that use of benzene at current levels would produce about 90 leukemia deaths per year in the general population.
OSHA's new standard was based on what it called the "lowest feasible level" of worker exposure, a measure based on available technology rather than on a numerical study of risk.
But the American Petroleum Institute recruited some of the biggest names in the risk field -- physicist Richard Wilson and economists Richard Zeckhauser of Harvard and Lester Lave of the Brookings Institution, among others -- to convince a federal appeals court that the new rules should be blocked until the agency completed a formal analysis of the risks and the costs involved in the change. The case is pending before the Supreme Court, and the fate of many past and future OSHA standards is riding on whether the court chooses to write risk analysis into the law books. If it does, risk assessors will gain even more influence over the way you and I live. Even though some people insist the discipline is not yet developed enough to take so much responsibility, gun-shy regulators will be reluctant to make any move without a formal risk assessment.
"It's more of a philosophical issue than a substantive issue," said Grover Wrenn, director of health standards at OSHA. The philosophical issue is "to what extent do you base your decision on the relationship of costs and benefits, and we say that we don't . . . We think we can't in a publicly responsible way. It requires the Secretary of Labor to put an explicit dollar value on human life."
API's lawyer declined to comment on the case, but Harvard's Wilson says the issue is whether regulatory standards should be assessed by the best means possible, or left to the whims of regulators. "They [OSHA] wouldn't define what they mean by 'as safe as possible,'" he said. "As safe as possible means what the secretary said a week ago."
Interestingly, the National Aeronautics and Space Administration (NASA), the king of high-technology agencies, is also not convinced that risk assessment is useful. During the Apollo moon program, NASA ran a full-scale risk assessment to find the odds on putting a crew on the moon and bringing them back alive. According to Haggai Cohen, director of reliability and safety for space transportation systems, the odds were between five and 10 to one against it. The rest is history: Neil Armstrong walked on the moon, and NASA gave up risk assessment.
"We have really abandoned completely numerical risk assessment and are instead working on engineering methodology to identify and eliminate risk," Cohen said recently. "An acceptable risk is one that we have looked at and determined that to attempt to eliminate it would cost so much in weight or dollars that we are determined to live with it."
But NASA did use risk assessment in making a decision that could have had major impact, in the literal sense, on the public -- the fall of Skylab. Before the massive space station was launched in 1973, the agency calculated the odds that pieces of it would hit people on earth when it fell back into the atmosphere. The risk was "acceptable," according to NASA's figures, so the agency decided not to install a retrorocket system that could have brought Skylab down safely, but would have increased the weight and the risk to those aboard.
In the last few years, when new leadership at the agency was making a frantic attempt to prevent the fall of Skylab, another risk study concluded the chance of injury to humans from Skylab debris was 151 to 1. Looking at the odds of trying to rescue it, NASA decided to stop trying to keep Skylab up.
That wager paid off in July, when the 78.5-ton spacecraft broke up into flaming chunks over the remote Australian desert, giving the people of Perth a chilling aerial fireworks display but hurting nobody. A NASA post-mortem has tentatively concluded that because Skylab broke up at a lower altitude than expected, the risk assessment, if anything, overstated the risk -- the Skylab death bet was a sound wager rather than a long-shot that happened to come home.
The Skylab decision, in risk assessment parlance, raises questions of "risk equity." The people who were subjected to the "acceptable risk" were not a few trained volunteers, consciously accepting danger for a chance at high adventure and a spot in the history books, but 90 percent of the world's population, many of whom had never heard of the giant satellite until it almost fell on their heads. The decision to let it fall was made by a few technicians, with no public review or participation. An "inequitable risk" is one in which those who bear the danger do not reap the benefits, and Skylab was very possibly such a case.
Skylab was also the most spectacular example to date of a risk accepted and escaped -- writing its message in fiery letters across the southern constellations. But the most prolonged and bitter controversy in the field has been about the Nuclear Regulatory Commission's Reactor Safety Study -- a 1975 document known as WASH-1400, or the "Rasmussen Report."
Massachusetts Institute of Technology nuclear engineer Norman Rasmussen worked for two years with the NRC research staff on the study. They looked at two working reactors and constructed computer models which estimated the risk of a core meltdown and release of radioactivity, and the likely results if one happened.
There has never been a reactor meltdown; so the probability figures given in the report represent Rasmussen's assessment of how things might go wrong in a reactor, what might be done to put them right and how likely each sequence of events is. With 100 reactors in operation (there are now 71), the study concluded, the odds against a meltdown were 200 to 1 per year. It also reported that even if a meltdown occurred, it would probably cause one death or fewer, and the odds against any individual dying in a reactor accident were 5 billion to 1.
In a recent interview, Rasmussen insisted that the study was intended as a learning device, to show the NRC where it should concentrate its safety research and prevention efforts. But industry groups and pro-nuclear politicians seized on the risk figures to provide the final justification of America's commitment to nuclear power. However it was intended, WASH-1400 has a bad track record as a predictor of how real reactors and crews -- as opposed to computer models -- will react. An NRC review panel concluded that the report paid too little attention to fires of the type which led to a serious accident at Brown's Ferry, Ala., in 1975. And WASH-1400 concluded that what later happened at Three Mile Island -- the most serious accident in the history of American commercial nuclear power -- was so unlikely that its probability did not even contribute to the comforting risk figures given.
How did WASH-1400 underestimate Three Mile Island? Rasmussen did not have the time or money to design computer models of all commercial reactors, so he studied only one of each type. The pressurized water reactor he studied was built by Westinghouse. The Three Mile Island reactor was a different design, built by Babcock and Wilcox.
The study had looked at a similar malfunction in its computer-modeled reactor and concluded that the odds against it were 100,000,000 to 1. But postmortem NRC calculations show the odds in a real Babcock and Wilcox reactor were only 16 to 1.
"You have got to differentiate between risk assessment and reality," explains Saul Levine, chief of nuclear regulatory research at the NRC, who served as staff director for WASH-1400.
Even before Three Mile Island, the NRC had withdrawn its endorsement of the Rasmussen report. Persistent questions by nuclear critics and congressional staffers had led the agency to convene a scientific panel to review the study. Last October the panel concluded that WASH-1400 had "an inadequate data base, a poor statistical treatment and inconsistent propagation of uncertainties throughout the calculation, etc."
However, the panel praised the simulation method, and Levine said it is now being used at NRC to tackle other safety problems. The attention given the Rasmussen report had left the public confused and skeptical of the whole notion of risk assessment. But Levine is not overly concerned about public confidence in risk assessment.
"I don't give a damn about the public in that regard," he said evenly. "I regard WASH-1400 as a technical study of landmark nature . . . I think this is just too complicated for the public to understand the question of these methodologies. In 10 years this is going to be a commonly accepted method, and the public's going to accept it. . . They can't get involved in every technological aspect of our society."
But as we have seen, not everyone shares Levine's serene confidence that mathematical risk assessment is a useful tool for predicting events that have not happened before. NASA, in fact, advised NRC not to do the Rasmussen study before it was begun.
Even those who do believe in the future of risk assessment often disagree with the idea that these matters can be left to experts. William Rowe, for example, served on the review panel which criticized WASH-1400.
Rowe thinks the "uncertainty bounds" on nuclear power -- the limits on how precisely we can predict its safety -- cross over into the area which society considers an unacceptable risk. "What's needed is a political decision," he says.
The troubled career of WASH-1400, he adds, is "not a condemnation of the method -- it's a condemnation of the fact that people expect the method always to work."
Rowe is confident that mathematical techniques will be used more and more in making technological decisions. But once the numbers have been assigned, he says, the public must be involved in the decision of what is acceptable risk and what is not. "The public process," he says, "is more important than the result."
And that's the core of the question risk assessment poses for a society which -- like it or not -- grows more dependent on technology in one form or other every day. Risk assessment is a growth industry -- a field that any bright academic or bureaucrat can ride upward throughout the 1980s. But as death's bookies set up their betting shops, they may be a means of allowing the public to take a more knowledgeable part in decisions looming on energy, biology and pollution. In the past, risk assessments have been done by scientists with a career commitment to the hazards they studied. But Bill Rowe is part of a wave of pure risk freaks -- death bookies who do not necessarily work for the track. They may be so fascinated by catastrophe and hazard that they will not downplay a good risk when they find one.
The danger, of course, is that we may not be willing to listen. Many people do not like to think about the dangers they face, from smoking or driving or drinking diet pop. Saul Levine has a sign on his wall at NRC that speaks for them: "Sometimes it is better to curse the darkness."
Many people may look to risk assessors for words of comfort and assurance, for a promise that new technologies are nothing to worry about. And industry lobbies will be glad to fund studies that provide the reassurance.
But if the public buys false comfort, and something goes wrong -- if, one day, a reactor or a chemical or an LNG tanker produces a major catastrophe -- the reaction is likely to be outrage, a feeling that we have been deceived.
Rowe and others like him want to head off that danger by making sure that the public has a voice in making risk decisions, so that the dangers become, in a way, voluntary risks. They are like the soldiers in Tim O'Brien's Going After Cacciato, who decided to frag their lieutenant -- and who insist that every member of the platoon touch the grenade, and share the guilt, before it is thrown.
In this case, the aim is a worthy one. But there is a great deal of difference between letting people touch the grenade after a full discussion and a free choice, and tricking them into touching it by numbing them with meaningless or confusing comparisons. Harvard's Richard Wilson, known in the risk community as a premier "number cruncher," takes great glee, for example, in telling people that peanut butter is "20,000 times more dangerous than saccharin." The peril in the peaceful peanut, he says, is aflatoxin B-1, a carcinogenic mold which grows naturally on dried peanuts and other grains and then passes into the butter when it is ground.
Maybe so; but as Wilson himself admits, "peanut butter is not the issue." The issue is whether technicians will present their findings in comprehensible form.
For example, the comparison between aflatoxin and saccharin is not a very useful one. Aflatoxin is a natural hazard, one which human beings have been facing since the beginning of agriculture. Better techniques of storing and handling grain can reduce the risk, though so far we cannot eliminate it altogether. Saccharin is a brand-new risk, created by industrial society -- and one which society can choose to take or not. And the marvelous precision of Wilson's comparison does not let us in on what technique he is using to estimate the risk of saccharin -- whether he is talking about 12,000 excess cancers or 22.8 billion.
The ordinary person, placing his first bet at the catastrophe window, needs to remember some common-sense principles. For one thing, it seems absurd to call a risk "acceptable" if a fairly small expense can reduce it; it is only when risks can't be reduced easily that we should begin to think of accepting them. Voluntary risk, such as smoking, driving a car, or eating peanut butter, is fundamentally different from involuntary risk, such as transporting hazardous chemicals through a populated area. Risk-benefit analysis makes no sense unless the "equity issue" -- who takes the risk and who gets the benefits -- is fully explained. Risk assessment has large uncertainties. To the ordinary person, the difference between one-fifth of a cancer and 1.1 million is the difference between normal life and disaster. And most important, risk assessment as a tool is supposed to open doors -- to offer new chances to increase safety -- not to close down discussion.
If risk assessors can be held to these principles, their art may be a useful tool; if not, it may degenerate into a pseudoscience, as useless as the Roman custom of cutting open a chicken to decide whether to fight a battle.
Certainly there are some risk assessors who feel that the public, far from being a gaggle of technological ninnyhammers, may have more gumption than the specialists. One such is Roger Kasperson, the 41-year-old geographer at the Institute for Hazard Analysis at Clark University in Worcester, Mass.
"We think that the public perception of risk is not irrational and erratic," he says. "People tend to think of classes of risk in the same way. And people have a right to bear what kind of risks they want whether it makes sense to scientists or not. If people want to kill themselves smoking cigarettes and driving cars, that may not make sense to a scientist. But if people are worried and fearful about nuclear power, nuclear power is going to have to be safer."