I've been going through a flood of mail triggered by two columns on academic testing, and I'm satisfied there is wide agreement on two points:

First, Americans would like to see both better schools and better educational opportunities for children. Second, we want to be fair -- to pass out goodies such as seats at prestigious universities on the basis of individual deservedness.

The trouble is that we keep thinking tests can take us to these worthy goals. They can't -- and not merely for reasons of "cultural bias" or unfamiliar language or the other unfairnesses we spend so much time arguing about.

Consider the controversy about Florida's program of awarding vouchers to parents of children at failing public schools. Leaving aside, for the moment, the debate about vouchers themselves: How do you determine which are the failing schools? Why, by testing, of course. But because they haven't figured out a way to test schools directly, they resort to a proxy: They test the children. If too many children at a particular school fail to meet some state-approved minimum, the school is on notice. If the failure rate is unacceptably high for a second year within a four-year span, the school gets an F, and the kids get vouchers that allow them to go elsewhere.

It sounds fair enough, and it would be if the tests were capable of measuring not merely what the children know but what the school has taught them. Here's what I mean: If you measured the physical health of children at a well-baby clinic and then, using the identical test, measured the health of patients at a hospital for terminally ill youngsters, obviously the first group would score higher.

But would that tell you that the doctors and nurses at the well-baby clinic were doing a better job than their counterparts at the hospital? In fact, it would tell you nothing about the quality of medical care at either facility. What the Florida experiment overlooks is that some children are academically sick -- and others academically robust -- for reasons that may have little to do with what happens at school.

A variation of this problem applies to another controversy -- the Educational Testing Service's reported search for a way to identify "strivers" -- minority students who score better than their socioeconomic backgrounds would have predicted. The debate has focused on whether special consideration for such applicants is fair.

Do "strivers" deserve special credit, even if their test scores are lower than those of their standard white middle-class counterparts? Isn't the point of the SAT and similar tests to apply a common yardstick so as to eliminate subjective considerations based on things such as race, sex and geography?

Gerald Bracey, a research psychologist who has been following the testing wars for many years, hits this one dead center:

"The myth of the common yardstick is silly on its face," he says. "The test might be standardized, but the kids who take it definitely are not. . . . A verbal SAT of 600 from Philips Exeter Academy is not the same as an SAT of 600 from South Succotash High. Still, people will go on yearning for something standardized across context and hoping the SAT meets this criterion."

Sometimes, of course, we don't care whether the tests are a "common yardstick" or not. If you want the fastest marathoner, the top-producing salesperson or the best-selling author, questions of background and gender go out the window.

But why should anybody care (at least during the admissions process) who the very top SAT scorers are? As a matter of fact, no one does. There's not a university in America that would fill a freshman class by having its computer spit out the top SAT scores among the applicants.

Here's what the debate is really about: One side believes that mechanical processes cannot tell us what we need to know about schools, students or applicants. The other side believes that admitting subjective judgment into the process opens the door to unfairness.

Both sides are right. We'll never make sense of the matter until we embrace that simple truth and go on from there.

A correction: I said in a recent column that tests such as the SAT "frequently under-predicted the performance of black applicants." In fact, the SAT is more likely to over-predict black performance.