A READING of history, at least as prepared by Washington's newspapers, suggests that any D.C. snow a bit heavier than a dusting has an effect similar to the fabled neutron bomb: something that removes all people but leaves the buildings intact.

Consequently, media wags were somewhat mystified when, on January 8, nine inches of snow showed up and the city didn't break out in hives. Even the plows arrived on time. In fact, the only thing that shut down was the Government, which naturally shares its historical perspective with the papers. According to at least one widely circulated press account, the reason that everyone was so prepared was that this particular snowstorm was "easily forecast" -- as opposed, I suppose, to those of last year.

I love it. According to the oxymoronic Popular Wisdom, when the Weather Bureau, Gordo or Bob screw up, it's because they're bozos. When they get it right, it must be "easy." This time, it must have been especially so because everyone was crowing about the possibility of Thursday or Friday snow as far as five days ahead of time.

Big Snowy Deal. The fact is that both of January 1987's so-called "surprise" blasts were also seen that far ahead. As Casey Stengel would say, you could look it up. Check back copies of the National Weather Service's Weather Wire. What was different this time was that people finally decided to believe in the long-range forecast, a product which has quietly and steadily been improving in the last decade.

But back to the "easy" designation. Sure: The computer burped out an Atlantic Coast cyclone in the right position for snow five days ahead of time. It does that regularly and with a number of false alarms. The trick lies in determining exactly when El Computo is correctly crying fire in the crowded theater of Washington.

That requires the human element, and brings out the ugly truth: An "easy" snow forecast is about as reliable as a "sure thing" at the track. The fact is that racetrack favorites tend to win about one-third of the time. Heavy snow forecast between 24 and 48 hours ahead tends to show up about half of the time, but that figure must be adjusted downwards for the number of times snow shows up when it's not supposed to, like last Veterans Day.

In fact, making the right forecast, particularly in a snow situation, is a combination of science and an IQ test.

Consider the magnitude of the problem: Somewhere between 24 and 48 hours ahead of time, the forecaster is expected to pinpoint within 100 miles the position where a low-pressure system will find itself along the Atlantic Coast -- when it's only just barged ashore at Malibu! That's because 100 miles is the usual width of a storm's heavy snowprint. To the south and east will be rain, while to the north and west it's flurries.

Five Days And Counting

Here's what really happened last time: On Sunday and Monday, Jan. 3 and 4, the National Meteorological Center at Suitland interpreted the U.S. and European 3-to-5-day-range forecast models and concluded that a moderately strong low-pressure system would track south of a very cold airmass already settling on the East Coast.

Just forecasting where the low is going to be isn't sufficient, though. Often a storm will be in exactly the right position, but it might be a little too warm (Christmas Eve, 1986) or a little too dry (January 3, 1988). In fact, for heavy snow to fall over Washington, lots of mathematical and physical ducks have to line up right in a row. If even one of them is out of line, the entire storm quacks up.

In order to snow heavily, the 5,000-foot temperature should be near (but not above) freezing. Warmer temperatures bring rain or may produce the atmosphere's least aesthetic product: sleet. But if they're colder than, say, 25, the whole shebang is probably too dry to produce much of anything.

The depth of the bottom half of the atmosphere should be between 17,350 and 17,800 feet. This depth reflects the mean temperature of the layer in which prospective snow clouds will develop and precipitate. Because warm air expands, the warmer it is, the deeper this layer. If the layer's too shallow it may be the proverbial "too cold to snow". All this is about as intuitive as that old equation from college chemistry known as the Ideal Gas Law, namely, that PV = nRT. Translation: Hold the pressure constant by always measuring the bottom half of the atmosphere (by weight), and then volume varies directly with temperature.
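
For the curious, that depth figure is the textbook hypsometric equation in disguise. Here's a back-of-the-envelope sketch in Python that converts those two depths back into mean layer temperatures; it assumes "the bottom half of the atmosphere (by weight)" means the standard 1,000-to-500-millibar layer, and uses the usual dry-air constants:

```python
import math

R_D = 287.05   # specific gas constant for dry air, J/(kg K)
G   = 9.80665  # gravitational acceleration, m/s^2

def mean_layer_temp_k(thickness_ft):
    """Invert the hypsometric equation for the 1000-500 millibar layer:
    thickness = (R_d * T_mean / g) * ln(p_bottom / p_top)."""
    thickness_m = thickness_ft * 0.3048
    return thickness_m * G / (R_D * math.log(1000.0 / 500.0))

for ft in (17350, 17800):
    celsius = mean_layer_temp_k(ft) - 273.15
    print(f"{ft} ft -> mean layer temperature {celsius:.1f} C")
```

Run it and the band works out to a mean layer temperature of roughly minus 13 to minus 6 Celsius: comfortably below freezing, but not so cold that the air can't hold enough moisture. Which is the whole point.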

There's more. Impress the crowd at your next power lunch with these babies: The vertical velocity of the bottom layers of the atmosphere should be greater than +.00015 inches per second and the vorticity (spin) at 18,000 feet should be greater than .0000002 radians per second in the positive sense. These numbers are measures of whether or not there's going to be enough upward motion to produce respectable snow clouds.

Don't forget the position of the low-pressure center: Its track shouldn't be further inland than Virginia Beach and it better not be more than 200 miles out to sea.

Finally, the computer had better indicate that the amount of liquid precipitation will be enough to mound up half a foot of snow or so. And it better be right: With a normal snow-to-liquid water ratio of 12:1, missing by only a quarter-inch of rainfall is the equivalent of a three-inch snow forecast error -- or enough to bring out the brickbats from Virginia gentlemen. Nobody gripes about such an error in the summer, even if the quiche gets wet.
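
For the arithmetic-minded, the whole duck lineup can be sketched as a checklist. The thresholds below are the ones quoted above; the function name and the encoding of the storm track (miles offshore, with inland as negative) are my inventions, not the Weather Service's:

```python
SNOW_TO_LIQUID = 12.0  # typical snow-to-liquid-water ratio

def ducks_in_a_row(temp_5000ft_f, depth_ft, vert_vel_in_s,
                   vorticity_rad_s, track_miles_offshore, liquid_in):
    """True only if every heavy-snow criterion quoted above is met."""
    return (25 <= temp_5000ft_f <= 32 and         # near, but not above, freezing
            17350 <= depth_ft <= 17800 and        # depth of the bottom half
            vert_vel_in_s > 0.00015 and           # enough upward motion
            vorticity_rad_s > 0.0000002 and       # enough positive spin
            0 <= track_miles_offshore <= 200 and  # not inland, not too far out
            liquid_in * SNOW_TO_LIQUID >= 6.0)    # half a foot of snow or so

# A quarter-inch miss in the liquid forecast is a three-inch snow error:
print(0.25 * SNOW_TO_LIQUID)  # 3.0
```

Knock any one argument out of its range and the function, like the storm, quacks up.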

Snow Watch: T Minus Three

Anyway, by Tuesday, the three National Weather Service computer programs used by virtually all public forecasters began to kick in. These are state-of-the-art products that are at the core of the daily forecast. Unlike the 3-to-5 day versions, they explicitly calculate all of the arcane variables, such as vorticity and vertical velocity. If the guy on your TV doesn't use them, he probably runs a dowsing business on the side. I'll bet his forecasts aren't "easy."

Programs? Why the plural? After all, it's only one atmosphere; and assuming nothing particularly spiritual is going on, we should be able to predict its state, say, two days ahead of time simply by being good physicists. Or so it would seem if we had perfectly good input data and perfectly good physics.

We have neither. So approximations have to be made. And the various forecast models essentially differ by which corners they choose to cut. These range from problems with the sparse nature of the input data, to computational constraints and approximate parameterization of incompletely understood physical processes.

Until recently, the most popular model was the LFM (for Limited-Area Fine Mesh), a beast that chewed up massive amounts of computer time to calculate how all those arcane variables should change each time the clock ticks its way through 48 hours. In an attempt to partially compensate for data limitations and the fact that many weather phenomena -- like tornado-producing thunderstorms -- are smaller than the LFM's resolution, its replacement "nests" its calculations at different geographic scales (grids) and is therefore acronymed NGM, for Nested Grid Model.

When it first came out, the NGM overforecast a handful of Virginia snowstorms, and students immediately translated its acronym as the "No-Good Model." It's since been adjusted to not precipitate so enthusiastically over the Nation's Capital. That's one of the reasons that the stuff that comes out of the facsimile machine is called computer guidance and not Compu-Pravda.

Another version, the Spectral, expresses the various parameters in a mathematical fashion that is a bit more applicable to the forecast problem.

(Whoa! Lest we alienate the Weather Service, we better tell the bright side right now: Weather forecasts are better than they have ever been, they are constantly but oh-so-slowly improving and being improved, and the garden-variety 12-48-hour forecast is a damned good product that is simply unappreciated by some Philistines.)

Unfortunately, the snow problem is much more difficult than predicting "partly cloudy with a high around 50." And when the snowcast busts, everybody notices. But do you really give a darn if the 48-hour temperature forecast is five degrees off?

And no matter how you jimmy the computer, you'll never get by the fact that there aren't enough evenly distributed input data. Example: Ground-based upper atmospheric measurements are further apart, on the average, than the normal width of a heavy snow band. And for over 70 percent of the earth -- the oceans -- data are just about nonexistent.

Nor is there enough computing power currently available to model mountains as anything much better than uniform slabs. Want to double the resolution of a forecast model? Make it more realistic? No problem! It just takes exponentially more computer time. Never mind the fact that some of the world's biggest machines are already dedicated to running the forecast.

Here's a shocker: Let a good forecast model, like the one out at the National Center for Atmospheric Research, run for, say, 40 days, and it doesn't simply blow the forecast by calling for a little snow when the temperature's 50 degrees, nosiree! In fact, if you leave it to run that long, no matter what day's input data you use, it will settle on an earth that's about seven degrees colder than today's. That happens to be the difference between now and the last ice age. As we say in the vernacular, "there appears to be a problem in the physical parameterization of some important processes." (Obviously, if we knew exactly which ones and why, this problem wouldn't be there.)

Parenthetically, we should note that this type of failure at ultra-long range isn't unique to weather-forecast models. Climate models -- the kind used to predict conditions somewhere around 2030 -- are trimmed-down versions that "time-step" a lot longer and might not worry about daily weather. But they suffer from the same problems of computational limitations and incomplete physics. Example: A November 1986 article in Science pointed out that some climate models require that solar output be raised around 8 percent over its real value (equivalent to moving the earth a couple of million miles closer to the sun) in order not to produce ice-age temperatures.

The Calm Before the Storm

At any rate, while the LFM, the NGM and the Spectral are more reliable than the 3-to-5 day versions, their useful life is somewhere between 48 and 72 hours. And sometimes they're not all the same.

Surprise: For this storm, each one said something different. The NGM built up a decent low in central Georgia and tracked it over Virginia. That's a prescription for snow changing to freezing rain, sleet, and a mess running down the gutter that really isn't too disruptive. The LFM took a low pressure system and ran it out to sea before it could clobber Washington. And the Spectral produced only the weeniest of cyclones but ran it on a track that could produce a lot of snow, given the dome of cold air entrenched east of the mountains.

Making a decision based upon such conflicting results is far from "easy". In fact, it requires a degree of expertise, human expertise, that the computer simply does not have the imagination to calculate. Anyway, by Wednesday afternoon -- 24 hours before the snow would begin -- the human interpretation was that it was going to be cold enough near the earth's surface to squeeze out somewhere between 12 and 16 inches from that pint-sized cyclone, with the max over central Virginia.

That whopping amount never made it to public forecasts for a number of reasons, the most "easy" being that "something always happens to screw these things up".

Indeed it did. Thursday morning dawned with a streak of heavy snow -- big-time amounts of over a foot and accumulation rates of over an inch an hour -- thudding down in, of all places, northern South Carolina and central North Carolina. Extrapolation of that track took most of the precip out to sea off of Hatteras. Even worse, much of the northern half of Virginia awoke to hazy sunshine and a barometer standing at less than a quarter of an inch under 31. That's somewhere to the right of "fair" on the dial.

Imagine the scramble over the "easy" forecast. With the Thursday noonday sun still casting shadows on the Mall, the NGM now predicted the vorticity duck at only .00000016 radians/second, which is too little to gin up a blizzard; and the LFM was even worse. Weather weenies spilled coffee over each other's nerd-packs while the vertical velocity fell. And snow continued to spread towards Hatteras, away from Washington.

We'll never be able to calculate the lost calories, the amount of sweat, or the elevations of blood pressure that occurred as forecasters tried to wrestle with all of this on Thursday. A number of forecasters almost bailed out completely, and I know of at least one computer terminal room that had all the atmosphere of a funeral without flowers.

So why did the forecasters stay with it? Because conflicting computer programs notwithstanding, Nature bats last. First, a pocket of moderate snow began to spread northward from Roanoke around noon. Second, the atmosphere gave the weakest of signals that yes, indeed, a low pressure system was going to form off South Carolina after all. At this point, the human factor intervened: Don't give up the ship. Damn the vorticity! Full speed ahead!

Indeed the favorite horse -- six to nine inches of snow -- paid off on Friday. Call it "easy?" Next time the favorite in the morning line shows up in bandages at post time, bet him. You'll probably win. No problem. It's easy.