New Jersey had a problem. A cancer-causing, radioactive substance was widespread in homes throughout the state, threatening the health of thousands of residents. State officials worried that publicity about the hazard could cause a panic.

But just the opposite has happened.

Two years after announcing the health problem, officials are confronting a surprisingly different problem -- apathy. Many New Jersey residents have ignored a major risk that is literally in their own back yards: radon gas that occurs naturally in uranium-containing rocks and soils and seeps into basements. Scientists estimate that one in three houses in northern New Jersey is contaminated with this radioactive material.

Yet public hysteria did erupt when state officials tried to dispose of dirt containing low levels of radon-emitting industrial waste from an old luminescent paint factory. Government officials saw little danger from this radon and planned to mix the contaminated soil with regular dirt and dump it in an abandoned quarry near rural Vernon, N.J. But this time, there were threats of civil disobedience, and angry citizens successfully blocked the action.

Why did the public ignore government warnings about the danger of high levels of natural radon while panicking about what state experts saw as the insignificant threat of low-level radioactive waste?

Understanding why experts and the public fail so frequently to agree on the nature of health risks is of growing concern not only to government officials and industry executives but to academic researchers and environmental activists, as well.

Efforts to bridge the gap -- studying how the public perceives risk and how best to explain it -- have created a burgeoning new field called "risk communication."

"The core of the problem is that the risks that kill people are often not the same as the risks that frighten and anger people," said Peter Sandman, head of the Rutgers University environmental communication research program here. "Risk for the experts means how many people will die, but risk for the public means that plus a great deal more. Is it fair or unfair? Is it voluntary or coerced? Is it familiar or high-tech and exotic?"

When it comes to the perception of risk, radon in rocks is not the same as radon in industrial wastes.

"Geological radon has no villain. It's God's radon. It strikes people in their homes, traditionally safe turf," said Sandman. "The landfill radon problem was very different. There was a readily identified villain, which the community felt was unfairly imposing a risk without even telling it, much less asking its permission." ::

On July 1, new public right-to-know amendments to the federal Superfund law go into effect. The legislation orders that detailed information be made available on routine chemical emissions from plants around the country.

"During the next two years, a tidal wave of new information on hazardous chemicals will wash over thousands of communities across the United States," warned Charles L. Elkins, director of EPA's Office of Toxic Substances, in a recent EPA Journal. The "wave" will consist of reports to the public on the amount of hazardous chemicals released to water, soil and air of those communities.

"How prepared are America's communities to receive, understand and act on this unprecedented deluge of information about hazardous chemicals?" asked Elkins. ". . . The answer, unfortunately, seems to be: Not very."

A key question is how the chemical industry will handle it. In many ways, the slowdown of the U.S. nuclear industry is an example of poor communication about risk. Nuclear plant officials sought to reassure the public by stating that a nuclear accident was impossible. Then came the 1979 accident at Pennsylvania's Three Mile Island, which in the end turned out to be more of a public relations disaster than a health problem.

With the experiences of the nuclear industry in mind, many chemical industry leaders "understand the consequences if they do not take [risk communication] seriously," said Vincent Covello, who recently left the National Science Foundation's risk assessment program to head a new Columbia University risk-communications center.

The Chemical Manufacturers Association, a trade group, is conducting seminars for plant managers in preparation for the new right-to-know rules, known formally as the Emergency Planning and Community Right-to-Know Act, Title III of the Superfund Amendments and Reauthorization Act of 1986. Included is a new manual by outside experts that cautions: "Risk communication, when properly done, is always better than stonewalling."

Several states also have their own right-to-know laws. California is attempting to implement Proposition 65, a comprehensive new law requiring warning labels on all products containing chemicals that pose significant risks of cancer or reproductive problems.

As these laws become more common, there are worries that some companies will dump an abundance of information on communities in hopes that the public will throw up its hands and ignore most of the material -- and the potential risk.

Danger Plus Outrage

There's no question different groups of people view the same risks differently. Baruch Fischhoff, who recently joined the risk center at Carnegie-Mellon University in Pittsburgh, is one of a small group of researchers who over the past decade have explored what factors contribute to the public's perception of risk. Formerly with a private firm, Decision Research in Eugene, Ore., Fischhoff and colleagues Paul Slovic and Sarah Lichtenstein compared perceived risk for 30 activities and technologies among four groups.

Nuclear power, for example, was ranked riskiest by the League of Women Voters and college student participants, while it ranked eighth among business executives and 20th among technical experts. At the same time, all four groups ranked motor vehicles, handguns and smoking in their top five.

A later study on more than 80 hazards found that the more dreaded or unknown the risk, the more the public wanted to see the risk reduced and subjected to strict regulation. The most dreaded risks -- nuclear weapons and nuclear power scored highest -- were characterized by perceived lack of control, catastrophic potential, fatal consequences and inequitable risks and benefits. Unknown hazards -- chemical technologies scored highest here -- were generally unobservable and new, with the potential harmful effects delayed in time.

More-recent research has identified about two dozen factors that are key to how the public evaluates a potential hazard. The list includes:

Magnitude. People are more concerned about major accidents involving fatalities and injuries at one time, such as airline crashes, than about the same number scattered over a longer period, such as car accidents. They are more concerned about irreversible hazards, such as nuclear war, than reversible ones, such as smoking. Risk to future generations, such as genetic damage, increases concern.

Evidence. Concern increases if a risk is poorly understood and scientifically unknown, or if the effects are uncertain or delayed, as in the development of cancer following exposure to low doses of chemicals. Human evidence is more persuasive than animal studies.

Personal choice. Voluntary risks are far more acceptable than imposed ones. Smoking began to move from acceptable to unacceptable in public places once there was evidence that sidestream smoke could be harmful to others. Risks under an individual's direct control are less threatening. Drivers may recognize the general risk of an accident but believe that they can avoid one.

Not in my back yard. A risk that is close to home is more upsetting than a widespread risk shared by the general public. People are also more likely to question a potential hazard if they derive no benefit from it. Those living next to a nuclear waste site, for example, may not be the ones who originally benefited from the nuclear power. When those at risk also benefit, the risk becomes more acceptable.

Publicity. Media attention heightens concern, regardless of the numbers involved. News coverage often focuses on new health risks rather than old but important ones. A single major, well-publicized accident -- like the nuclear reactor accident at Chernobyl in the Soviet Union or the toxic chemical leak in Bhopal, India -- has a permanent psychological impact, and television makes the risk even more real. Concern rises if children are at risk and if media coverage makes the victims identifiable, such as workers trapped in a mine.

Source. Manmade hazards are less acceptable than natural ones. There has not been a mass exodus from California because of the threat of earthquakes. People don't hold rallies to fight floods.

As Rutgers' Sandman puts it, risk is defined as "the sum of hazard and outrage. The public pays too little attention to hazard; the experts pay absolutely no attention to outrage. Not surprisingly, they rank risks differently."

Chicken Little Syndrome

The interest in risk communication is propelled, in part, by highly publicized incidents over the past two decades that have fueled public fears about the risks posed by modern technology, from pesticides in ground water to toxic waste dumps.

During this same time, scientific instruments made it possible to detect ever smaller amounts of chemicals in the water or air. Yet there was no consensus within the research community on how to interpret what risk these levels posed. Meanwhile, the number of lawsuits has increased, raising the question of who bears responsibility for environmental risks.

All this has led many risk assessment experts to believe that while the world today is safer than ever before, American society is being changed by a Chicken Little-type mentality. The public, they say, has grown more concerned about risk, less willing to assume it and less trusting of public and private institutions.

"How extraordinary! The richest, longest-lived, best protected, most resourceful civilization, with the highest degree of insight into its own technology, is on its way to becoming the most frightened," lamented political scientist Aaron Wildavsky in a 1979 article on public perception of risk.

More recently, Milton Russell, a former EPA assistant administrator, called for better efforts in risk communication, saying, "Real people are suffering and dying because they don't know when to worry and when to calm down. They don't know when to demand action to reduce risk and when to relax because the health risks are trivial or simply not there."

Yet some environmentalists are highly skeptical of this view. Ellen Silbergeld, a scientist with the Environmental Defense Fund, considers all the talk about improving risk communication merely a "shield for inaction" by regulators and polluters.

"One person's risk communication may be another person's propaganda. It's a tough call," said William W. Lowrance, director of the life science and public policy program at Rockefeller University and author of one of the first books on risk.

"It certainly is an effective legal device. Industry may put money into risk communication to dissolve community fury," said Marcel LaFollette, a Massachusetts Institute of Technology journalism and science policy professor. But, she added, the pressure for better risk communication "came from the grassroots. My sense is that this has bubbled up from below."

Ideally, says Rutgers' Sandman, risk communication includes the "effort to alert people to risks they are not taking seriously enough, the effort to reassure people over risks they are overreacting to" and the effort to open up discussions between different parties in risk controversies.

In practice, he and other academic proponents admit, there is a "bigger market for expertise in how to calm down the public. Some of that market is driven by the sense the public is inappropriately alarmed. Some is driven by the desire to get the public inappropriately reassured. Both are happening."

Meanwhile, there is no question that risk communication has prompted a wide range of measures. A National Academy of Sciences panel on risk perception and communication is expected to issue a report later this year. There are a number of federal studies, including three dozen projects at the Environmental Protection Agency and research at the National Science Foundation. The interagency task force on cancer, heart and lung disease, a 14-agency panel chaired by EPA, has a working group on the role of government in risk communication.

Several new publications have come on the scene, including numerous books and special issues of the Journal of Communication, as well as a quarterly called Science, Technology and Human Values, published jointly by Harvard and the Massachusetts Institute of Technology.

A new center for risk communication has been established at Columbia University with a network of 35 public and private groups. Carnegie-Mellon University received a $1.3 million federal grant in 1987 to set up a risk perception and communication research program. Rutgers' environmental communication program began in 1986. A group at Tufts University is conducting case studies on environmental risk communication. Georgetown University's Institute for Health Policy Analysis has sponsored a half-dozen workshops for scientists and the media on subjects from radon to AIDS to scrutinize the media's role in communicating risk.

Washington-based environmental think-tanks have also gotten into the act. The Conservation Foundation has sponsored symposia and publications on risk communication. Resources for the Future has a new Center for Risk Management.

Overseas, disasters like Chernobyl led the Organization for Economic Cooperation and Development to hold a symposium in Paris on radiation risk communication. The Canadian government held a risk communication meeting in Ottawa last December.

Traditionally, the scientific and technical community has defined risk for the public, and these experts tended to view risk in concrete terms: How many people will die or become ill because of a given activity?

Some frustrated technical experts have taken the view that if the public and policy makers could just be made to understand risk numbers the way they do, more rational management of risk would result. They have sought to make risk more understandable to the public by comparing rare and often little-understood risks of manmade activities like radiation and industrial chemicals to more common risks of everyday life.

"Comparisons can be useful. We are not born with an instinctive feeling for what a risk of 1 in a million per lifetime means," argued Richard Wilson and E.A.C. Crouch of the Energy and Environmental Policy Center of Harvard University.

A controversial study of reactor safety published by the Nuclear Regulatory Commission in 1975 estimated, for example, that an individual's chance per year of dying in a motor vehicle accident was 1 in 4,000, while that of being hit by lightning was 1 in 2 million and of being killed by a nuclear reactor accident, 1 in 5 billion.

Last year, Wilson and Crouch published a new listing of commonplace risks in Science magazine that, roughly translated, estimated that the annual risk of death from smoking one pack of cigarettes a day was 1 in 300; from a motor vehicle accident, 1 in 4,100; from exposure to air pollution in the eastern U.S., 1 in 5,000; and from drinking water containing the EPA limit of trichloroethylene, a controversial cancer-causing chemical, 1 in 500 million.

Sometimes, comparisons have focused on specific areas, such as food. A widely publicized 1987 risk comparison by Dr. Bruce Ames, a University of California biochemist, suggested that chemicals naturally found in foods (such as aflatoxin in peanut butter) may be far more hazardous as cancer-causing agents than industrial chemicals, such as the trace amounts found in drinking water.

Risk comparisons tend to make similar points: that the common risks such as driving a car or sitting in the sun often are more dangerous than feared technologies, that natural hazards are often worse than manmade ones and that some of the most controversial new technologies are safer than others already in place.

But critics have attacked some risk comparisons as simplistic and sometimes unscientific. They charge that such comparisons often set apples against oranges. Sometimes the information is selective -- the NRC study came under criticism for calculating only the risk from immediate fatalities, not delayed ones. Comparisons also fail to convey the degree of uncertainty involved and generally ignore other factors important to the public in weighing risk, such as the degree of choice involved.

The Environmental Defense Fund's Silbergeld sees risk comparisons as an attempt to "market unacceptable risk."

Columbia's Covello said that "many of these attempts to compare a risk in question with the risk of daily life are a form of attempting to manipulate public opinion."

In contrast to the numerical or comparative approach, social scientists look for alternate ways to assess risk, focusing on how society and individuals perceive it. "If you want to communicate with the public, you have to take the public's concerns seriously," said Carnegie-Mellon's Fischhoff, a member of the National Academy panel on risk communication.

Reality Gap

National leadership on environmental risk communication is generally credited to William Ruckelshaus upon his return to Washington in 1983 for a second tour of duty as head of the EPA. The agency was in a shambles, suffering from serious problems brought on by early Reagan administration appointees.

Ruckelshaus separated EPA's mandate into three aspects of risk: defining what is hazardous (risk assessment), deciding how to deal with it (risk management) and explaining the process to the concerned public (risk communication).

Ruckelshaus' successor, Lee Thomas, commissioned an internal study by about 75 career EPA officials that ranked 31 environmental problems according to their cancer risk, non-cancer health risks, ecological effects and welfare effects, such as economic damage.

The study, released last year, showed major discrepancies between what the task force experts rated as major risks and the major program priorities at EPA. Instead, EPA's efforts, largely dictated by Congress, seem more closely linked to public opinion.

A recent comparison of the study's results with public polls done by the Roper Organization suggests that the greatest differences in risk perception involve hazardous waste and chemical plant accidents, which were ranked medium to low in the EPA task force's view but high in public opinion.

And while experts ranked acid rain, radon and indoor air pollution, consumer product exposure and global warming (ozone depletion) relatively high in risk, these appeared to be of medium to low public concern.

Hazardous waste is the most dramatic example of how widely risk perception can differ, noted Frederick Allen, associate director of EPA's Office of Policy Analysis. The task force said that in certain areas hazardous waste does pose a very serious risk, but relatively few people live near enough to be directly affected. Nevertheless, there has been a groundswell of national concern about hazardous waste.

"It's a real challenge to try and sort out whether we ought to do what the experts say is the most important or what people say is most important," said Allen. "What's the balance?"

Radon Redux

The real testing ground for risk communication is at the local level. EPA has attempted a variety of risk communication experiments, starting under Ruckelshaus with an arsenic-emitting copper smelter in Tacoma, Wash., in 1983 and more recently in evaluating an array of different risks in California's Silicon Valley, Philadelphia, Baltimore and Denver.

One of the most significant environmental issues is the extent of the newly recognized indoor radon hazard. The agency is funding risk communication research to alert the public to the serious health risks associated with the radioactive gas. In this instance, EPA has no regulatory authority over the naturally occurring substance, so solutions are largely dependent on individual action rather than regulatory efforts.

In New Jersey, a state still suffering from its reputation as a chemical cancer alley, government officials are struggling to put all environmental risk -- both natural and manmade -- in perspective. They have set up a risk communication unit and asked for outside advice.

Rutgers' Billie Jo Hance, Caron Chess and Sandman have just completed for the state's Department of Environmental Protection an 80-page manual called "Improving Dialogue with Communities."

The Rutgers team's advice: "Pay as much attention to outrage factors, and to the community's concerns, as to scientific variables. At the same time, don't underestimate the public's ability to understand the science."

In the case of radon, EPA's Ann Fisher said that a variety of federal and state radon risk communication projects had already been undertaken in New York, Maryland, Maine, Florida, Pennsylvania and Colorado, as well as New Jersey.

Radon is considered a major environmental health threat. Federal government estimates suggest that up to 20,000 lung cancer deaths may result each year in the United States from long-term exposure to geological radon.

Yet it requires personal initiative to get one's home tested to see if there is a problem. And then it requires further action -- and money -- to seek professional help in lowering the radon levels in the affected home.

Two 1986 surveys by Rutgers psychologist Neil Weinstein and Sandman for the state found, said Sandman, "that the majority of the people, including those in high-risk areas, were aware of radon, knew what the risk was and were not inclined to do anything about it. We did not find ignorance. We found apathy."

The surveys found unwarranted optimism among individuals, who downplayed their own risk from radon. And even among those who did have their homes tested, there was not a strong relationship between how much radon they found and how likely the homeowners were to take remedial action.

"We're looking for ways to explain radon levels more effectively so that those with high levels decide to take action and those with low numbers relax," said Sandman.

The Rutgers researchers also examined different reactions to the discoveries of natural and manmade radon in three communities: Boyertown, Pa., which in December 1984 became the first highly publicized geological radon hot spot in the country; Clinton, N.J., the first major geological radon hot spot in that state in March 1986; and Vernon, N.J., where residents fought the state in mid-1986 to block the dumping of industrial waste containing low levels of radon-emitting material.

"The real irony in Vernon is that residents protesting the disposal of manmade radon were potentially at higher risk from naturally occurring radon in their homes. It's extremely likely the majority of protesters had not taken action to test for natural radon. So the obvious question is why," said Chess.

Her study suggests, once again, that the public viewed manmade risks differently from natural ones and that the state officials handled the situations very differently.

In Clinton, state officials worked closely, and successfully, with town officials and citizens in dealing with natural radon, and all agreed there was a potentially serious health risk.

But in Vernon, because the state viewed the risk of the radon-contaminated landfill as negligible, it did not take the community's concerns seriously or involve residents in the decision, said Chess. Predictably, "the result was a government nightmare, precipitating a public meeting of about 3,000 people, a rally of 10,000, a demonstration at the governor's mansion and civil disobedience training," she said.

New Jersey is "taking risk communication seriously," Sandman said, but "I also think it's dragging its heels."

All that means the field of risk communication has a long way to go to close the perception gap between the experts and the public. As Sandman said, to listen to the public is "a very difficult transition for a government agency to make."

Cristine Russell is on leave from The Washington Post and is a fellow of the Alicia Patterson Foundation, which supported the research leading to this article.