One of the headlines on Richard Morin's Nov. 21 Outlook article about exit polling might have left a misimpression. It should have read, "Exit Polls Can't Always Predict Winners" rather than "Exit Polls Can't Predict Winners." (Published 11/22/04)

A portion of a television interview with Warren Mitofsky, co-director of exit polling for the National Election Pool, was incorrectly transcribed by the Public Broadcasting Service and subsequently quoted in a Nov. 21 Outlook article on exit polling. PBS reviewed the audiotape after the article appeared. The review found that Mitofsky said he had cautioned the TV networks and other members of the pool about suspected problems with early exit-poll results and told them "which states to ignore." He did not say the networks "chose to ignore" his caution. (Published 11/27/04)

It will be a few more weeks before we know exactly what went wrong with the 2004 exit polls. But this much we know right now: The resulting furor was the best thing that could have happened to journalism, to polling and to the bloggers who made this year's Election Day such a cheap thrill.

That's because the 2004 election may have finally stripped exit polling of its reputation as the crown jewel of political surveys, somehow immune from the myriad problems that affect telephone polls and other types of public opinion surveys. Instead, this face-to-face, catch-the-voters-on-the-way-out poll has been revealed for what it is: just another poll, with all the problems and imperfections endemic to the craft.

It's also time to make our peace with those self-important bloggers who took it upon themselves to release the first rounds of leaked exit poll results. Those numbers showed Democrat John F. Kerry with a narrow lead, which ignited premature celebrations in one camp and needless commiseration in the other -- until the actual votes showed President Bush had won.

If a few hours on the roller coaster of ecstasy and agony were all that anyone had to endure, only the political junkies would be interested in the whys and wherefores of the exit poll confusion. But the false picture had real impact: The stock market plummeted nearly 100 points in the last two hours of trading, and the evening news was replete with veiled hints of good news to come for the Kerry campaign. Since then, some disappointed and angry Bush-bashers have seized upon the early numbers as evidence of something amiss in the outcome. You can read it on the Internet -- the election was stolen, the early exit poll numbers were right.

But rather than flog the bloggers for rushing to publish the raw exit poll data on their Web sites, we may owe them a debt of gratitude. A few more presidential elections like this one and the public will learn to do the right thing and simply ignore news of early exit poll data. Then perhaps people will start ignoring the bloggers, who proved once more that their spectacular lack of judgment is matched only by their abundant arrogance.

It seems clear now that the 2004 exit polls were rife with problems, most of them small but none trivial. Skewed samples, technical glitches and a woefully inept question that included the undefined term "moral values" in a list of concrete issues all combined to give exit polling its third black eye in as many elections.

The sampling errors gave a boost to Kerry, who led in all six releases of national exit poll results issued on Election Day by the National Election Pool (NEP), the consortium of the major TV networks and the Associated Press that sponsored the massive survey project. (The Post received exit poll data as an NEP subscriber.)

In the first release, at 12:59 p.m. on Election Day, Kerry led Bush 50 percent to 49 percent, which startled partisans on both sides. That statistically insignificant advantage grew to three percentage points in a late-afternoon release, where it remained for hours, even as the actual count began to suggest the opposite outcome. It was only at 1:33 a.m. Wednesday that updated exit poll results showed Bush ahead by a point.

Even more curious numbers were emerging from individual states. The final Virginia figures showed Bush with a narrow lead. Exit poll data from Pennsylvania, which was held back for more than an hour, showed Kerry ahead by nine percentage points. The actual results: Bush crushed Kerry in Virginia by nine points, while Kerry took Pennsylvania by just a two-point margin.

In a review of 1,400 sample precincts, researchers found Kerry's share of the vote overstated by 1.9 percentage points -- which, unhappily for exit pollsters, was just enough to create an entirely wrong impression about the direction of the race in a number of key states and nationally.

It's hardly unexpected news that the exit polls were modestly off; exit polls are never exactly right. The networks' 1992 national exit poll overstated Democrat Bill Clinton's advantage by 2.5 percentage points, about the same as the Kerry skew. But Clinton won, so it didn't create a stir. In 1996 and 2000, the errors were considerably smaller, perhaps just a whiff more Democratic than the actual results. That suggests to some that exit polls are more likely to misbehave when their insights are valued most -- in high-turnout, high-interest elections such as 1992 and this year.

I learned early in my Washington Post career that exit polls were useful but imperfect mirrors of the electorate. On election night in 1988, we relied on the ABC News exit poll to characterize how demographic subgroups and political constituencies had voted. One problem: The exit poll found the race to be a dead heat, even though Democrat Michael Dukakis lost the popular vote by seven percentage points to Dubya's father. (The dirty little secret, known to pollsters, is that discrepancies in the overall horse race don't affect the subgroup analyses. Whether Dukakis got 46 percent or 50 percent didn't change the fact that nine of 10 blacks voted for him, while a majority of all men didn't. The exit poll may have under- or over-sampled either group, producing an incorrect national total, but the within-group voting patterns remain accurate.)
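
For readers who want to see why, here is a minimal sketch in Python, using numbers invented purely for illustration rather than the 1988 data, of how over-sampling one group shifts the overall total while leaving the within-group splits untouched.

    # Invented figures for illustration; not the actual 1988 exit poll.
    support_within_group = {"black": 0.90, "all_other": 0.40}  # share voting for Dukakis

    def overall_share(group_weights):
        """Overall Dukakis share implied by a given mix of groups."""
        return sum(group_weights[g] * support_within_group[g] for g in group_weights)

    skewed_sample = {"black": 0.14, "all_other": 0.86}    # poll over-samples one group
    true_electorate = {"black": 0.10, "all_other": 0.90}  # actual turnout mix

    print(f"Overall share from the skewed sample:   {overall_share(skewed_sample):.1%}")
    print(f"Overall share from the true electorate: {overall_share(true_electorate):.1%}")
    # The horse-race total moves by a couple of points, but the within-group
    # figures (90 percent and 40 percent) are the same either way.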

In practice, there are many separate exit polls, not just one. This year, there was a national one based on interviews at 250 randomly selected polling places around the country by Joseph Lenski and Warren Mitofsky under contract with NEP. Then there were separate exit polls in each state. The number of precincts sampled in these states ranged from 14 in Alabama to 52 in Florida.

In theory, the voting pattern in these precincts should reflect the national and statewide votes. If the exit poll results differ from the actual vote -- say, the sample precincts nationally showed Kerry ahead by three points while he ended up losing by three -- then something was wrong with the sample.

Perhaps the Democratic skew this year was the result of picking the wrong precincts to sample? An easy explanation, but not true. A post-election review of these precincts showed that they matched the overall returns. Whatever produced the pro-Kerry tilt was a consequence of something happening within these precincts. This year, it seems that Bush voters were underrepresented in the samples. The question is, why were they missed?
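
To make that diagnosis concrete, here is a minimal sketch, with wholly invented precinct numbers, of the two comparisons a post-election review rests on: the official returns inside the sampled precincts against the overall official result, and the exit-poll interviews against the official returns in those same precincts.

    # Illustration only: invented numbers, not the actual NEP precinct data.
    # Each sampled precinct: (official Bush, official Kerry, poll Bush, poll Kerry)
    precincts = [
        (620, 580, 52, 58),
        (410, 390, 38, 44),
        (300, 345, 25, 33),
        (510, 470, 41, 49),
        (280, 310, 22, 30),
    ]

    def bush_share(bush, kerry):
        return bush / (bush + kerry)

    # Check 1: did we draw the wrong precincts? Compare the official returns
    # inside the sampled precincts with the overall official result.
    official_in_sample = bush_share(sum(p[0] for p in precincts), sum(p[1] for p in precincts))
    official_overall = 0.512  # hypothetical statewide two-party share for Bush

    # Check 2: is something going wrong inside the precincts? Compare the
    # exit-poll interviews with the official returns in those same precincts.
    poll_in_sample = bush_share(sum(p[2] for p in precincts), sum(p[3] for p in precincts))

    print(f"Official Bush share in sampled precincts: {official_in_sample:.1%} vs. overall {official_overall:.1%}")
    print(f"Exit-poll Bush share in same precincts:   {poll_in_sample:.1%}")

When the first comparison checks out and the second does not, the precinct draw is vindicated, and the skew has to come from who was, or wasn't, interviewed inside those precincts.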

Mitofsky, the veteran pollster who co-directed this year's exit surveys, fears that Republican voters refused to be interviewed at disproportionately high rates, thus skewing the results. Perhaps they were busier than Democrats and didn't have time to be interviewed. Perhaps they disliked the media's coverage of Bush, and showed it by snubbing poll interviewers. Whatever the reason, Mitofsky cautioned the networks and the other members of the pool by mid-afternoon on Election Day about suspected problems with the early results, telling them "which states to ignore," as he recounted to Terence Smith on PBS.

If the snubbing theory is confirmed, it would not be the first time that Republicans are believed to have just said no to exit pollsters. Historically, exit polls have been more likely to err on the side of Democratic candidates, though this bias is usually small. In 2000, for example, the exit polls overstated Democrat Al Gore's share of the vote by more than one percentage point in about 20 states, while inflating Bush's share in just 10 states.

The relatively small number of precincts sampled nationally and in each state creates other, subtler problems. While 50 precincts may be sufficient to characterize the overall vote in a large state accurately, so small a sample increases the odds of missing or underrepresenting the views of smaller subgroups. For example, the Florida exit poll in 2000 found that Bush and Gore equally divided the Latino vote statewide -- a finding doubted by many academics. They noted that the sample of precincts in that state did not include heavily Cuban American neighborhoods in Dade County -- and thus missed precincts that went heavily for Bush. This year, the national exit poll finding that Bush captured 44 percent of the Hispanic vote, up nine points from 2000, also has been challenged over sampling issues.

There are questions that are more difficult to answer. How do we know the demographic splits are right? We assume they are because one key feature of exit polls is that the results of the completed survey are weighted to reflect the final actual vote. This adjustment has the effect of fixing a number of other, smaller problems created by under- or over-sampling support for one candidate or the other.
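
As a rough sketch of what that adjustment does, with invented respondents and official shares standing in for the real NEP data, the weights on individual interviews are simply scaled until the poll's candidate totals match the final actual vote:

    # Minimal sketch of weighting a completed exit poll to the official returns.
    # Respondents and official shares are invented stand-ins, not the NEP data.
    respondents = [("kerry", 1.0)] * 51 + [("bush", 1.0)] * 48 + [("other", 1.0)]
    official_share = {"kerry": 0.483, "bush": 0.507, "other": 0.010}

    total = sum(w for _, w in respondents)
    raw_share = {c: sum(w for cand, w in respondents if cand == c) / total
                 for c in official_share}

    # Scale each respondent's weight so the weighted totals equal the official vote.
    factor = {c: official_share[c] / raw_share[c] for c in official_share}
    weighted = [(c, w * factor[c]) for c, w in respondents]

    weighted_total = sum(w for _, w in weighted)
    for c in official_share:
        new_share = sum(w for cand, w in weighted if cand == c) / weighted_total
        print(f"{c}: raw {raw_share[c]:.1%} -> weighted {new_share:.1%}")

The actual NEP procedure is considerably more elaborate, but the basic effect is the same: every subgroup breakdown is recomputed with the adjusted weights, and in a race this close the rescaling alone is enough to move the apparent leader.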

But weighting may not fix all the problems. For example, one question in the 2004 exit poll asked people to rate their feelings toward the candidates. What if enthusiastic and angry voters disproportionately agreed to participate in the poll while those less emotionally engaged said no? The result would incorrectly suggest an emotionally charged electorate; weighting the data does nothing to fix this problem.

That final weighting also is central to the controversy over real or imagined electoral irregularities. It's true that exit poll results available on CNN and other network Web sites late into election night showed Kerry with that now-infamous three-percentage-point lead, an advantage based exclusively on exit polling and a pre-election survey of absentee voters. When those survey results were statistically adjusted in the wee hours of Wednesday to reflect the actual vote, Bush suddenly -- and seemingly mysteriously -- jumped into the lead nationally and in several key, closely contested states.

But this sort of final adjustment is done on every exit poll. Most of the time, it doesn't matter because there's a clear winner, and the numbers move up or down slightly while the order of finish remains the same. But because this election was so close, the weighting had the effect of flipping the winner and igniting the fevered imaginations of the Michael Moore crowd.

Compounding the exit poll woes this year was the fact that the first wave of results, available on Web sites everywhere moments after 1 p.m., shaped the way journalists were thinking, at least through much of the afternoon and early evening. The first rounds exert a particularly strong influence on broadcast journalists because they use them to develop story lines ("Kerry won a majority of female voters, but Bush did better among women than he did four years ago . . .") before the evening news.

Last Thursday, the National Election Pool board took steps to limit this problem next time. It voted to delay release of the first wave of exit poll results until after 4 p.m. That may or may not minimize the damage done by bloggers, because those numbers will still leak out and cause mischief. Ironically, the first release of data, shortly before 1 p.m., which showed Kerry leading by one point, was closer to the final result than the 3:50 p.m. release, which showed him ahead 51 percent to 48 percent. That doesn't mean the early release was more "accurate." Early data are not necessarily a reliable predictor of the final outcome because different types of voters tend to cast ballots at different times of the day.

In a perfect world, early exit poll results would be treated just like early vote returns or the score at the end of the first quarter of a Redskins game. In a gubernatorial contest, the news media have learned not to get too excited about early returns from, say, Northern Virginia; we know from experience and common sense that partial returns from a fraction of the electorate are an unreliable guide to the outcome.

Sometime soon, I suspect that the electorate will come to see these early exit poll results the same way. The view of exit polls also will change, from blind awe and acceptance to respect tempered by a healthy skepticism. Thanks to the 2004 election and my new best friends the bloggers, we're closer to that day.

Author's e-mail: morinr@washpost.com

Richard Morin is The Post's director of polling and writes Outlook's Unconventional Wisdom column. His experience with national exit polling goes back to the 1988 presidential campaign.