The simmering controversy over standardized testing -- most especially the Scholastic Aptitude Test taken by nearly 1.5 million college aspirants every year -- came to a boil a few weeks ago with the publication of a long-awaited study by Ralph Nader and Allan Nairn. The controversy could affect not only the future of SAT tests, but that of all standardized testing, beginning in elementary school.

Do the SAT tests -- one for verbal and one for mathematical aptitude -- in fact help college admissions boards make intelligent decisions about which students are best suited to their college? Or do they merely measure a collection of more or less irrelevant skills and social factors, perpetuate class biases and prevent large numbers of capable students from getting a higher education, all as Nader claims?

First it must be said that much of the controversy over the SAT stems from confusion over what it is measuring -- aptitude. The SAT fact sheet now states clearly that it is "a test of developed ability, not of innate intelligence." But in the past this has not been made clear, and too many students have been allowed to believe that the aptitude being measured is a natural and immutable, rather than an acquired, talent.

The Nader/Nairn report calls SAT "a three-hour gamble which can determine a life's pathway," implying that the results are largely up to Lady Luck. The Educational Testing Service, which creates and scores the tests, claims, on the other hand, that the tests are a valid tool for predicting how well a student will perform in college, as measured by the freshman-year grade-point average. Nader calls this claim "false and unsubstantiated." Who is right?

The answer is that SAT scores do help predict college performance. The extent to which they do is documented by the very evidence presented in the Nader-Nairn report, but concealed in the text. What the text says is that "inclusion of SAT scores in the prediction process improves the prediction of college grades by an average of only 5 percent or less." The truth -- of which the report's authors are well aware -- is that SAT scores raise the accuracy of prediction by five percentage points, which amounts to improving it by 20 percent. Specifically, high school grades alone provide an accuracy of prediction of 25 percent; grades and scores together raise the accuracy to 30 percent -- a 20 percent improvement. This kind of misleading use of statistics is rampant throughout the report.
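For readers who want the percentage-point versus percent-improvement distinction spelled out, here is a short illustrative calculation (a sketch in Python; the only inputs are the 25 and 30 percent accuracy figures cited above, and the variable names are my own):

    # Illustrative arithmetic only, restating the figures quoted from the report's data.
    grades_alone = 0.25        # prediction accuracy from high school grades alone
    grades_plus_sat = 0.30     # prediction accuracy when SAT scores are added

    point_gain = grades_plus_sat - grades_alone    # 0.05, i.e. five percentage points
    relative_gain = point_gain / grades_alone      # 0.20, i.e. a 20 percent improvement

    print(f"{point_gain:.0%} points higher, a {relative_gain:.0%} relative improvement")

The same gain can thus be described as "5 percent" or "20 percent" depending on whether one reports the raw difference or the improvement relative to the baseline; the report chose the smaller-sounding figure.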

Having attempted to minimize the valid contribution SAT scores can make to admissions decisions, the Nader-Nairn study exaggerates the role they do play. It argues that because of SAT scores alone, promising students, especially minority students, are frequently barred from getting a higher education. A great deal is also made of the use of minimum cutoff scores, of how "a single point can be the difference between acceptance and rejection." However, a 1979 nationwide survey of 1,600 colleges and universities reveals that fewer than 2 percent of them consider the SAT score to be "the most important factor" in an admissions decision. Only 4 percent of open-door colleges use a minimum cutoff score at all and, when they do, the cutoffs are set extremely low and often waived for older students, veterans, minorities and others deserving of special consideration.

Potentially the most unsettling claim of the Nader-Nairn report is this: "Although test scores do not correlate well with future performance, they are systematically related to the family income of the test-taker." This claim also misrepresents the evidence presented in the study, which actually shows that test scores correlate with future performance (freshman grades) and with family income to about the same degree. A table showing the relationship between students' average scores and their parents' mean income in 1973-74 is prominently featured in the report. At first glance it is frightening, for it seems to show that all one would have to do to determine an individual student's potential for college work would be to discover his parents' income.

However, if the careful reader makes his way to page 203, he will find, discreetly buried in the text, that the connection is far from what the table implies, and that the statistical correlation between score and income is just about the same as the correlation between score and college grade-point average (.4 as compared with .37). Nevertheless, the report concludes that if the tests are valid, "then merit in the United States is distributed according to parental income." (No one, of course, has ever claimed that SAT scores measure "merit.")

What is the point of these attacks on standardized tests? Abandoning them could only force colleges to place heavier reliance on measures that are more subject to social and racial bias, such as the use of "feeder schools" that the admissions board knows well (generally private schools or public schools in wealthy communities), the personal interview or the old boy/old girl alumni network. Nader's personal bias is evident: he would like to see more reliance on "extracurricular activities and community organizing."

The real target seems to be not so much the tests themselves as the system of which they are a minor part: "Social class is viewed as a sad fact of life, but not as an issue," says Nairn. "The controversy over testing makes class an issue." Where has he been? Did he miss the War on Poverty altogether? Somehow, it has come as a surprise to Nader and Nairn that being poor means being disadvantaged in more ways than having a low income.

The aptitude that can be measured after 12 or 13 years of schooling is not the native intelligence a student inherits in his genes. It is the product of 17 years of continual interaction between those inherited capacities and the student's environment -- including everything from prenatal nutrition to conditions in the home to the quality of the school he attends. If the SATs are sending bad news, it is as much about the system that determines that total environment as it is about the individual student. The news is bad -- it does not show the progress that was hoped for. But that is all the more reason to keep hearing it.