Americans got their first look last week at death rates in all their hospitals -- and found out that, without question, the chance of dying is greater in some hospitals than in others.

The look was a massive report by the federal Medicare agency on the 735,000 deaths of Medicare patients in 5,971 hospitals during 1986, either in the hospital or within 30 days after discharge.

It is a revolutionary first step in giving consumers an objective measure of the quality of care in individual hospitals. "We've finally cracked the reticence, the guild protective mentality" that has kept such information from the public until now, said Jack Christy, an analyst at the American Association of Retired Persons' Public Policy Institute.

The report showed that 146 hospitals, just under 2.5 percent, had overall death rates that were above their expected range. Another 10 percent had death rates at the high end of the spectrum. Altogether, this means that 12 percent -- or more than one hospital in 10 -- have death rates that raise questions for both consumers and health officials about the quality of care.

Both the authors and critics of the report agreed that the survey has serious defects. Most crucially, it fails to take enough account of the severity of illness of each hospital's patients, which can make vast differences in deaths between two otherwise similar hospitals. The statisticians managed to measure this major element only in part.

This means one cannot say that a particular hospital is "good" or "bad" or "better" or "worse" than others on the basis of these numbers alone, even though it is obvious from the statistics that there are differences.

Of what use, then, are such numbers in telling us, the patients, where we'll be safest?

Just this: to ask questions, to press our doctors to tell us more about any hospital they recommend and, if a specific hospital had unfavorable-sounding death rates, to tell us why.

That's the first step for consumers, says Dr. Robert Brook of the Rand Corp. and the University of California at Los Angeles, one of the federal project's main advisers. But "there's more to it," he adds. "There ought to be something substantial that comes back in response."

For example, if I ask my doctor, "Why did this hospital have such high death rates?" and he or she answers -- "I've read our quality assessment committee's minutes, and they did find a problem that we've now worked on" or "There's a good reason for the high rates -- this hospital has the area's busiest emergency room and it just gets more patients who are going to die" -- I may at least feel that the doctor has given the subject some serious thought.

But if the doctor "just totally stonewalls," in Brook's words, or just says, "Oh, everything's fine -- stop worrying," then I might worry.

In that case, adds Brook, "if I knew nothing else, if I just had to go by these numbers, I would probably be better off" going to a hospital with low death rates -- say, rates in the lowest quarter of its expected range -- than one with deaths in the highest quarter. And "maybe half, maybe 40 percent" of hospitals with suspiciously high death rates, he estimates, indeed have something wrong.

Perhaps some poor doctors. Or harried nurses. Or a poor plant with outdated diagnostic equipment. Or an inferior laboratory or intensive care unit or emergency room. That is, "some flaws" that mean "it is likely to show up badly," says Dr. Henry Krakauer, medical officer at the Health Care Financing Administration (HCFA), the Medicare agency.

Of all hospitals, how many have death rates -- either overall death rates or rates in one of several categories studied -- that should at least prompt questions or scrutiny?

Those within 10 to 20 percent of the top limit of what's expected of them, according to some examples Krakauer cites. Those in the top 20 percent of their expected range, says Dr. Sidney Wolfe, head of the Public Citizen Health Research Group, which began pressing for release of such information 15 years ago. Those in the top quarter, Brook suggests.

On these pages we list the Washington area hospitals whose mortality rates as reported by HCFA were either "high" (many deaths) or "low" (few deaths), that is, either outside their expected range or in its top (or bottom) 20 percent.

Florence Nightingale wrote in 1858 that "accurate hospital statistics are rare." Only in the past few years have they begun to become less so.

The process has been spurred by the government's creation of a nationwide network of state or area PROs -- peer review organizations -- to monitor the care of Medicare patients. In 1973, Wolfe's Health Research Group began a series of petitions and legal actions for public release of such data. They were largely unsuccessful. "We met every kind of resistance you can imagine to the public's right to know," Wolfe says. But his efforts helped create the atmosphere in which a 1981 study group of the prestigious Institute of Medicine, a branch of the National Academy of Sciences, endorsed public disclosure of hospital data to "enhance consumer choice" and medical institutions' "public accountability."

A preliminary federal hospital mortality report, which listed only 142 hospitals, came out in March 1986. HCFA had not intended to release these figures but hastily did when its legal counsel said it must, in the face of an expected Freedom of Information request from The New York Times.

"Our legal counsel told us, 'It's an open and shut case. You will have to release the information,' " says one HCFA official. "So we decided to put the best face on it and release it."

The release caused screams of anguish from hospitals and doctors, who claimed -- correctly -- that the information had been hastily assembled, with no checkbacks with hospitals for accuracy.

In May 1986, Dr. William Roper, a public health officer with a strong belief in medical accountability, became HCFA administrator. "In one of my first conversations," he reports, "some of the staff said, 'If you want to, you can avoid collecting {this kind of information} so it won't be aggregated. It won't exist.' Within a couple of weeks, I decided this was something I wanted to do, not because it's legislatively required but because it's right."

He was backed by his boss, Secretary of Health and Human Services Otis Bowen, a physician. Both are conservatives who viewed public knowledge as essential to the competitive health care system they believed medicine must become.

This time, each hospital's mortality statistics were carefully analyzed. The final figures were adjusted for age, sex, medical diagnoses, other illnesses that might affect a patient, previous hospitalizations and whether the patient had to be moved from another hospital.

In this way, statisticians tried to take severity of illness into account. Still, "I think they pick up very little of the severity," says Brook, one of the study's architects.

It is largely on this basis that the release of the hospital report last week was opposed as "invalid," "meaningless," "misleading" and even "dangerous" to patients by a solid phalanx of medical and hospital groups.

Dr. Marvin Schneider of Wheaton tells of one case classified as "low-risk heart disease" in the federal report on Holy Cross Hospital in Silver Spring. It was the case of a man, 76, with cardiovascular hypertension.

"That was his initial diagnosis," Schneider says. "But he also had a pulmonary embolism {a blood clot in his lungs} and congestive heart failure. He died of a cardiac arrest. It was called low-risk heart disease, but that's not really what he had. Every one of the hospital's cases {in the federal report on low-risk heart disease deaths} goes the same way. The coding doesn't really reflect what's going on with these patients."

"We do not have a perfect measure of {medical} quality," Roper concedes. "But the perfect should not be the enemy of the good."

The nation's PROs have already created growing pressure on hospitals to improve care. HCFA will ask them to pay special attention to the hospitals with high death rates.

"What I hope" took place in such hospitals, Roper says, is that the morning after the report, there was "a medical staff meeting to say, 'Let's find out what happened.' "

He says he believes this is happening, and "I've already heard about one hospital bearing down on a surgeon who they discovered had extraordinarily bad results."

All this is part of the government's effort to put pressure on both doctors and hospitals to crack down on substandard medical care. There is a long way to go in disciplining, reeducating or dismissing incompetent physicians, Wolfe says, but "the more disclosure, the greater the pressure to do what is necessary."

He reports that in fact doctors -- or some doctors -- have been among the main groups seeking data from some PROs about their own hospitals. "No doctor," he says, "would like to keep admitting or referring patients to a hospital that has a much worse record than another hospital he or she could use."

Many doctors have staff privileges at only one hospital, however, so they may be reluctant to discourage their patients from using it. A patient who wants all the facts about competing hospitals may have to ask more than one doctor or the area PRO or look up the federal data. {See box, page 7}.

Roper promises more facts on medical care to help consumers make choices: an annual release of hospital data, including, he hopes, better measures of severity of illness, reports on treatment results in patients who remain alive -- most patients do live -- and results in specific kinds of surgery, not just broad illness categories.

Within a few years, he adds, HCFA hopes to create still another medical information revolution: reports on the performances of individual doctors, a subject on which there has been an even greater cloak of secrecy.