IF PRESIDENTIAL elections are the heart of the political process in this country, political polls have become the chief instrument through which that heartbeat is measured. In a short span of time, roughly since the end of World War II, scientific polling has brought about sharp changes in the nature of presidential campaigns.

It would be disconcerting, to say the least, if the polls, and especially some flawed polls among them, affected the outcome of elections instead of merely tracking them. That problem has been talked about by politicians and academics for years. But with the great burst of public polls in the 1976 campaign, it now looms much larger.

Decades ago, candidates recognized the efficacy of polls as a tool to shape their strategy. Through private polls conducted for them they could quickly spot sources of their own and their opponents' strengths and weaknesses, recognize trends and determine the saliency of issues. On all levels in the political arena, from the presidency on down, polls became vital.

More recently, and with a rush in 1976, the news media seized on public polls as a scientific, accurate and impartial device offering the possibility for sweeping and unbiased assessments that could not otherwise be achieved during a political campaign. In 1976, in addition to the regular George Gallup and Louis Harris polls, there was a proliferation of other public polls. The three commercial television networks and public television were in the political polling business; so were the Associated Press, the New York Times and many other leading newspapers, including, on occasion, this one.

By 1976, then, the application of scientific procedure to determine voting behavior had become dominant in the political process. Or so it appeared. The fact is that in political polling, as in other aspects of social science, human judgment, which is prone to error, sometimes rules over scientific procedure, and sheerly mechanical problems may obstruct the entire process. Consequently, during the election campaign, some of the best known public pollsters drew conclusions that were on occasion not very accurate, hampered by practices that were not always very scientific.

The peril in distribution of faulty poll findings, of course, is that voters or political activists may find their actions directed by incorrect or misleading information. Since the extent of such a response is difficult, if not impossible, to measure, the effect of flawed polls on an election campaign presents an exasperating problem. No one can really determine how serious it is.

Furthermore, even polls that are not intrinsically flawed may lead some people to draw improper conclusions. According to Seymour Martin Lipset, a political scientist at Stanford University, a round of polls conducted in August, after the Democratic convention but before the Republicans chose Gerald Ford as their nominee, altered the course of the election campaign.

These polls, including ones by Gallup and Harris, showed Jimmy Carter with a huge lead of more than 30 percentage points over either Ford or the other prospective nominee, Ronald Reagan. "These polls were unrealistic," Lipset said in a telephone interview. "Some have argued that the need for them never existed. There is no question that they had an effect on the campaign."

Lipset, one of the nation's most renowned experts on survey research, said these polls measured an artificial situation that was certain to disappear once a Republican nominee was chosen. Although the pollsters pointed that out, the findings, nevertheless, made some in the Carter camp overconfident. In addition, and perhaps more importantly, Lipset said the findings may have kept some Democrats from supporting the Georgian while drawing others to him.

Carter, running as a political outsider, never did receive support from many Democrats. Some leaders supported "stop Carter" movements in the primaries; in most areas party workers failed to respond to Carter's request for a massive voter registration drive. Lipset reasons that a number of them disliked or distrusted Carter but would have worked for him for the sake of the party, had they felt their help was needed.

"Other people supported him because he seemed certain to win. The image of a sure, overwhelming Carter victory brought people seeking jobs or influence to Carter for opportunistic reasons, particularly young 'new politics' supporters who subsequently managed to secure considerable influence in the transition force."

Lipset says Republicans were equally affected. "Going into the Kansas City convention many were sure they were going to lose, including Robert Dole," the vice presidential candidate, Lipset said. The conviction that they would not win undoubtedly hurt the Republicans, Lipset said.

These early polls, then, by measuring an unrealistic situation, set the tone of a distorted campaign in which a newcomer to national politics, with little support from his own party, was seen as the unbeatable opponent of an incumbent President.

Louis Harris himself has no problem with these polls, saying he doubts that they led to any overconfidence in the Carter campaign and that "what Carter did [in the campaign] was more important than what the polls showed."

Lipset and others complain about the timing of these polls, not their accuracy at the moment they were taken. Since pollsters point out that their findings refer only to sentiment at the time a poll was taken, there may have been nothing technically wrong with them despite their possible effect on the entire campaign. The problem, if there is one, lies in how these polls were received.

But other polls throughout the campaign were clearly flawed in all sorts of ways. Some of the most common errors were these:

Failure to report the real findings of a poll. On at least three occasions, the Gallup firm, for one, reported partial poll results or analysis that, on later examination, proved to be at variance with the polls' complete findings. This problem, in fact, seriously damaged the quadrennial centerpiece of the Gallup Poll, its final pre-election survey.

Drawing improper, sometimes wrong conclusions from apparently reliable data, or departing from data altogether.

Interpretation is probably the most difficult aspect of polling and analysts frequently disagree about the significance of the same set of findings. But at least twice during the campaign the Louis Harris firm clearly either stretched data too far or drew demonstrably wrong conclusions. Gallup on one occasion made a statement that apparently never came from data at all.

Poor poll design. Some widely publicized polls simply were guilty of faulty research design. One repeated example has to do with national telephone polls that failed to take into account the fact that poorer and younger people, who are most often Democratic voters, are harder to reach on the phone than are Republicans. Such polls, unless properly adjusted, are thought to automatically have a slight Republican bias built into them.
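
To make the adjustment concrete: a minimal sketch of such a demographic weighting follows, with entirely hypothetical group names, sample shares and candidate figures; it is not a description of any particular firm's procedure. Each respondent is counted in proportion to his group's share of the population divided by its share of the telephone sample, so that groups the phone under-represents are not undercounted.

    # A minimal post-stratification sketch; all shares below are hypothetical.
    # weight for a group = (share of population) / (share of the phone sample)
    population_share = {"under_30": 0.25, "30_and_over": 0.75}
    sample_share = {"under_30": 0.15, "30_and_over": 0.85}  # younger voters harder to reach by phone
    weights = {g: population_share[g] / sample_share[g] for g in population_share}

    # Hypothetical Carter share within each group of the phone sample.
    carter_share = {"under_30": 0.60, "30_and_over": 0.46}

    unweighted = sum(sample_share[g] * carter_share[g] for g in sample_share)
    weighted = sum(sample_share[g] * weights[g] * carter_share[g] for g in sample_share)
    print(round(unweighted, 3), round(weighted, 3))  # 0.481 vs. 0.495: the raw figure leans Republican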

Through sometimes close, sometimes casual poll watching, this writer has noted perhaps a dozen instances in which public polling agencies made errors of these kinds and consequently put out either wrong or questionable information that could have affected voters' perceptions of the campaign. In all probability, there were other instances as well.

All the flawed polls noted seemed to share one peculiar characteristic: For some reason, or combination of reasons, the conclusions drawn in them seemed more favorable to the candidacy of Gerald Ford than the evidence warranted.

The Turnout Question

ONE SUCH incident occurred on Oct. 27. By then, the race was generally considered so close that the size of the turnout was thought by some political observers to be the controlling factor in the outcome. The conventional wisdom was that a low turnout would help Ford, a high turnout would elect Carter.

All year, Carter had emphasized the need for a massive voter drive in the hopes that the least likely voters, the poor and the less well educated, part of the natural Democratic constituency, would flock to the polls. On the Wednesday before the election, George Gallup spoke along with Louis Harris before a crowd of more than 400 at the National Press Club in Washington. Gallup noted that, in the past, high turnouts had often helped Democratic presidential candidates. However, he said, his data suggested that in 1976 a high turnout "at this point . . . is helping President Ford."

Gallup did not elaborate on the remark, but it was widely disseminated by the leading news services; it was broadcast live on the public radio network and it was included in at least one network TV news report that evening. There is no way of determining what effect the statement had. But common sense dictates that, coming as it did from the dean of popular scientific polling, it was not the kind of statement that would drive Democratic Party workers to pursue recalcitrant voters.

Later on, Gallup reportedly backed away from his assertion, saying he could not remember having made it. Gallup was out of the country as this article was being written and could not be reached for comment. However, the president of the Gallup firm in Princeton, N.J., Paul Perry, says no Gallup data ever suggested that a high turnout would help Ford. Perry, who is highly respected in the polling profession, said that Gallup "may have misspoken."

Told that Gallup's remark did not appear to be a slip of the tongue, that he had made it a point to contrast the possible effect of a high turnout this year with its effect in past years, Perry said, "I can't imagine what he was thinking about."

Part of Gallup's troubles seemed to stem from problems with the telephone. In recent years, as more and more organizations have taken to telephone polling as a speedy, relatively inexpensive polling method, Gallup has largely stayed with what he regards as superior, more reliable door-to-door interviewing.

In 1976, with competition from so many in the news media, the Gallup firm began to use the telephone more than in the past. While most of the actual polling continued to be done by personal visits, Gallup field workers phoned in the results of their interviews instead of putting them in the mail.

In the second half of August, the Gallup firm conducted two polls on the presidential race. When the great bulk, but not all, of the results had been phoned in, the findings were released. Both times the failure to wait for all the findings to come in resulted in the Gallup firm's premature release of polls in which Carter's actual lead was understated by three percentage points.

Perry is baffled by these discrepancies. "We had a fairly complete return," he said. "It was close enough to what we expected so that we felt there was no problem. I'm not exactly sure why there was as much difference as there was."

Instead of issuing a correction after making the two errors, the Gallup firm simply printed the proper figures later on in tables showing the trend in the campaign. Philip Meyer, who conducts polls and writes about them for Knight-Ridder newspapers, and who has been in the forefront of a drive to make the news media more active in polling, noted the substitution of the new figures and wrote an article critical of Gallup. After that, the Gallup organization publicly acknowledged the mistakes.

Perry said that the Gallup firm, burned by its experience, went back to its normal procedures in what Gallup calls his "major mid-campaign survey," conducted last year just before Oct. 6, the date of the second presidential debate. The second debate seemed at the time, and still does seem to some, to have been among the most crucial single events in the 1976 campaign.

Carter had gone into the general election period with his huge, if artificial, lead over Ford. Steadily and quite sharply, that lead eroded. Then, in the second debate, which dealt with foreign policy, Ford created a sensation by saying that Eastern Europe was not under Soviet domination. There were those who said that debate had stopped Ford cold in the midst of his remarkable comeback. Some observers said Ford's blunder had crippled him once and for all.

For days, news coverage centered around those remarks. On Oct. 13, a week after the debate, Ford was still trying to recover from criticism. That day the pre-debate Gallup results were distributed. Ford, at a campaign rally in New York, was able to cite the Gallup poll and exclaim, "We're even."

His statement was no exaggeration of the Gallup report, which credited Ford with having "staged the greatest comeback to date in the history of public opinion polling." The apparently hastily written Gallup release, distributed to newspapers with many words crossed out and hand-written notes in the margins, was headed, "Race Draws Even As Carter Slips Badly in South."

The release noted that "Carter currently edges Ford, 47 to 45 per cent," a decline from an earlier 50-37 per cent lead after the Republican convention. The slim 2 percentage point margin was statistically insignificant. It meant that the candidates were so close that no one could really say who was ahead.

The timing of the poll suggested that Ford's conduct in the second debate had not hurt him at all; that, however inexplicable it may have been, the second debate actually helped the Ford campaign.

The poll, of course, was not really "current." On the third page of the release, it was noted that it had been conducted before the second debate and that "early returns from a nationwide survey conducted last weekend, following the second debate, indicates that Carter may be recouping his losses."

A Gallup executive told this reporter that day that, while figures from the newer poll were "not hard," it appeared that Carter had regained a 10-point lead, and that final results would be collated in a day or two. When they were released, they showed Carter with a 6-point, 48-42 advantage.

Had Gallup waited the brief period to gather his later findings, he might still have written about Ford's great comeback - but he would have had to state that it may have been stopped short by the second debate. As it was, the Gallup firm equipped Ford with a political weapon, at least for a moment, by releasing dated information and stating that it was current.

Since then Gallup has expressed outrage at criticism of the release of this poll, repeatedly stating that the newer data had not been completely analyzed. One can sympathize with Gallup here. Criticized for having given out incomplete results earlier in the campaign, he was now criticized again, in part for not giving them out.

Alex Gallup, one of George Gallup's sons and a vice president of the Gallup Organization, said last week that "perhaps with hindsight one might do it differently, putting greater emphasis on the interviewing dates, or withholding release of the earlier of the two surveys. However, such a course could be subject to criticism also, to the effect that earlier survey results were being withheld."

The Final Survey

PROBLEMS CONTINUED through the final Gallup poll, which was issued to newspaper clients for use on the Monday before the election. Interviews had been conducted until noon Saturday and, with what must have been a mad rush, Gallup interviewers all over the nation immediately began calling in their results - by telephone.

The Gallup firm is highly regarded for a sophisticated screening system it has developed over the years on which its final pre-election poll conclusions are based. Some nine questions are asked that Gallup analysts use to determine whether individuals actually will vote. That screen in the past has helped account for Gallup's increasing ability to come up with a final poll that virtually matches the outcome of the election.
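
The nine questions themselves are not spelled out in the Gallup releases, so the following is only an illustrative sketch, with made-up questions, scoring and cutoff, of how any likely-voter screen operates: each respondent's answers are scored, and only those above a threshold are counted in the final horse-race figures.

    # Illustrative likely-voter screen; the items, scoring and cutoff are hypothetical,
    # not Gallup's actual nine-question procedure.
    respondents = [
        {"choice": "Carter", "registered": True, "voted_in_1972": True, "interest": 3},
        {"choice": "Ford", "registered": True, "voted_in_1972": False, "interest": 2},
        {"choice": "Carter", "registered": False, "voted_in_1972": False, "interest": 1},
    ]

    def turnout_score(r):
        # One point per sign of likely turnout; a real screen uses many more items.
        return int(r["registered"]) + int(r["voted_in_1972"]) + int(r["interest"] >= 2)

    likely = [r for r in respondents if turnout_score(r) >= 2]
    carter = sum(r["choice"] == "Carter" for r in likely) / len(likely)
    print(f"Carter share among likely voters: {carter:.0%}")  # 50% in this toy sample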

But because of the problems in getting results by telephone earlier in the year, interviewers were instructed to phone in answers to only two questions dealing with an individual's likelihood of voting. "If we hadn't had these previously bad experiences, we would have had them all phoned in," said Paul Perry, "and we would have been better off."

Perry said that, in past elections, the two-question screen had been almost as accurate as the full screen. This time it wasn't. Gallup's final poll gave Ford 49 per cent of the vote, Carter 48 per cent, independent candidate Eugene McCarthy 2 per cent and all other candidates the remaining 1 per cent. The Gallup release pointed out that these figures should not be taken as a prediction. But at the same time, the release noted how close the Gallup figures have been to the actual vote over the years.

The final popular vote tally was 50.1 per cent for Carter, 48 per cent for Ford, about 1 per cent for McCarthy and the rest for other candidates.

Consequently, the Gallup firm's last report was more off the mark in 1976 than it had been in any close presidential election since 1948, when Gallup, along with most others, stopped interviewing too long before Election Day and released polls showing Thomas E. Dewey ahead by a large margin over Harry Truman.

According to Perry, had the Gallup organization been able to get its full nine-question screen to the computer in time, "We would have been better off. We would have been very close." "How close?" he was asked. "Very, very close." "Right on the money?" "Yes, except for being slightly high on McCarthy."

Hubert Humphrey has complained that poor poll showings seriously damaged his 1968 campaign against Richard Nixon; aides to Humphrey point to one Gallup poll in particular, taken in August of that year. They note that it showed Humphrey in substantially worse shape than did other polls at the time and point out that the Gallup poll is so powerful that, after the poll's release, campaign workers became dispirited and campaign contributions dried up.

Television stations were demanding cash in advance for political advertising, a routine policy, and without funds Humphrey was unable to schedule TV time to bring his message before the voters.

Humphrey, nevertheless, finished strong, only one percentage point behind Nixon. The Minnesota Democrat is quoted in a recent, controversial book on polling, "Lies, Damn Lies and Statistics," by Michael Wheeler, as making this complaint about polls (and not only Gallup's):

"The polls were like water in the gas tank; we just didn't have that forward thrust. If I could have come out of the convention a few points, maybe even a little further, behind I think I could have won."

Traditionally, only losers complain about damaging polls. Pat Caddell, who conducted polls for Carter in the 1976 campaign, said there was some concern throughout that, "with public polls playing a greater role in the campaign, everyone would suggest that the whole thing needs to be looked at."

Caddell said those directing the Carter campaign were upset with the timing of the Gallup Oct. 13 poll but that he personally doesn't "believe that there was any malicious intent at all."

Robert Teeter, who polled for Nixon in 1972 and Ford in 1976, agreed, saying, "I don't think that Gallup, or Louis Harris either, has a conscious plan" to see that polls work for or against a particular candidate. Teeter said, however, that he felt that Harris, the nation's second best known pollster, "drew some fairly extreme conclusions from the data he got." Harris's general response to such criticism is, "I call them like I see them."

Harris said that Carter criticized him personally during the campaign, charging that he was unobjective. "But that always happens," Harris said. "The Ford people complained also, asking why I went back to the Nixon pardon all the time."

Surge of Support

HARRIS PRODUCED a syndicated column and appeared regularly on ABC-TV last year. On Oct. 18, the lead on a Harris news release read this way: "Although Jimmy Carter won the second debate by a thumping 54-30 per cent margin, President Ford continues to gain in this election and now has narrowed the presidential race to a mere four-point lead."

The Harris poll these comments came from was conducted Oct. 7 to 11, immediately following the second debate. Again, that was a time when many observers felt Ford's performance in the debate had blunted his comeback. The Harris news release, appearing only five days after the dated Gallup poll, reinforced the suggestion that the second debate had not hurt Ford at all, that the President was continuing to gain on Carter.

In actuality, the data available to Harris did not suggest that Ford was continuing to gain. That was true only under the standards imposed by Harris, who compared Ford's position on Oct. 7 through 11 with his position just after the first presidential debate in September. That early on, Ford was still far behind in most polls, nowhere near the acme of his comeback drive.

Ford was perceived as continuing to gain on Carter only because Harris had not conducted a poll at a crucial time, the moment before the second debate when Gallup had shown the race to be virtually dead even. Had Harris taken cognizance of that Gallup poll, he would have been forced to state something to the effect that "Jimmy Carter has won the second debate by a thumping 54-30 per cent margin and has abruptly stopped President Ford's sharp gains." That, of course, was opposite from the conclusion Harris drew.

Harris readily conceded to this reporter that by not polling just before the second debate he may have missed a strong surge of support for Ford. He agreed that, if Gallup's findings were correct - and he did not question them - then his own conclusions, released at a period of peak voter interest, were wrong. "We were not in the field at the time," Harris said. "I go by what I've got."

Later on, in the final days of the campaign, Harris stated that, on the basis of interviews with 3,000 people nationwide, he saw Ford as the "favorite to carry a majority" of the six largest Northern states: California, New York, Pennsylvania, Illinois, Ohio and Michigan. As it happened, Ford carried half those states, not a majority. But had he carried them all, it would have been only accidentally in conformance with Harris's prediction. For the nature of a poll is such that, while statements may be made comfortably about the entire population under study, virtually nothing at all may be determined about small parts within that population.

The rationale for this is simple: the fewer the persons interviewed, the less accurate the poll. With 3,000 people interviewed nationwide, Harris had to have interviewed about 300 in New York, a like number in California and something under 200 in each of the other four states. In all, Harris said, he interviewed 1,221 people in the six states.

A sample of 300 yields a margin of error of some 5.7 percentage points in either direction, meaning that a candidate must be ahead by more than 11.4 points before he can be said with any confidence to be the likely winner in that state. A sample of 200 yields a margin of error of about 7 points in either direction, meaning a candidate needs a 14-point lead before he can be said to have that state. Ford would have had to have huge leads in four of the states, then, before the data could be said to show him the favorite to win a majority of them.
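
The arithmetic behind those figures is the standard sampling-error formula. As a rough sketch, assuming simple random sampling and the customary 95 per cent confidence level (real survey samples only approximate this), the margin of error near an even split is about 98 divided by the square root of the sample size, in percentage points:

    import math

    # 95 per cent margin of error for a proportion near 50 per cent,
    # assuming simple random sampling; real samples are only roughly comparable.
    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    for n in (3000, 300, 200):
        print(n, round(100 * margin_of_error(n), 1))
    # 3000 -> about 1.8 points, 300 -> 5.7 points, 200 -> 6.9 points,
    # which is why a national figure is far firmer than any state-by-state breakdown.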

Harris stated that Ford had a 45-41 lead in those six states overall. The actual outcome in them as a group was 49 per cent for Carter, 49 per cent for Ford and 2 per cent for third party candidates. Those figures are within the margin of error for a poll the size of Harris's. The poll, that is, was not "wrong." Harris had simply pushed his material too far, demanding that it do things it was not capable of doing.

When asked, Harris agreed with that assessment, saying he had misstated his findings, that he should have said Ford was the favorite to win the popular vote in the six-state cluster.

Any conjecture that this error had a bearing on people's votes seems far-fetched. It came at the very end of the campaign and was just one among dozens of eleventh-hour assertions. But the error is, nevertheless, indicative of the kind of mistake polling firms may make by not staying close enough to their data.

A Margin of Victory?

SEVERAL OPINION analysts, including pollster Burns Roper and Philip Meyer of Knight-Ridder newspapers, feel that just that type of error may have been largely responsible for the possible phenomenon of flawed polls seeming to favor Ford. They maintain that, as Carter's lead dwindled, pollsters may have been caught up in the dynamic of the campaign to the point that their reading of the data suffered. "Everything showed the gap closing; there may have been an assumption that it was continuing to close," Roper said.

In Meyer's view, "If there were errors, they were on the side of conventional wisdom, which was that Carter was slipping. It is the herd instinct."

The occasionally flawed work by Gallup and Harris and other lesser known but quite active polling agencies represented only a small portion of poll findings released during the 1976 campaign. If some ended up being more favorable to Ford than they should have been, and if they had some effect on voting behavior, it may be that they made the election somewhat closer - but it seems highly unlikely that they changed the outcome.

But what about the chances of bad polls affecting the outcome of future close elections? Polls seem bound to proliferate even more. Can polls really swing an election? The answer, taken from cautious statements of experts, seems to be yes.

One of the most quoted studies of the problem was conducted by Joseph T. Klapper of CBS, a former president of the American Association for Public Opinion Research. Testifying before a congressional hearing in 1972, Klapper said:

"The electorate is large and heterogeneous, and I would suppose that some members of the electorate do occasionally succumb to both bandwagon and underdog effects. I would suppose these effects had to be very small and largely to cancel each other out, but I cannot prove this."

The "bandwagon effect" is the rallying of voters behind the candidate who is perceived to be the leader or to have momentum working for him. It is the most commonly suspected possible effect of polls on voters. Klapper noted that most politicans seem to believe there is a bandwagon effect and, consequently, they strive to emphasize poll results that put them in the most favorable light and play down unfavorable findings. The "underdog effect" is the opposite of the bandwagon effect.

Albert H. Cantril, president of the National Council on Public Polls, states that "the bandwagon theory has not been proven, but, on the other hand, it has not been disproven. The burden of proof is on the pollsters. They cannot disregard the possibility."

If polls have the kind of small effects that Klapper cites, then flawed poll findings released at crucial times in the course of a political campaign may have at least some bearing on the vote. And small effects, of course, can account for margins of victory in close elections.