FIVE YEARS ago the College Entrance Examination Board announced that scores had been falling on its Scholastic Aptitude Test. Since then we have had seemingly endless pronouncements about declines on other exams, ranging from those of the National Assessment of Educational Progress to the American College Testing battery.

A series of misinterpretations has even encouraged many people to believe that schools are turning out huge numbers of "functional illiterates." Alarmed traditionalists have been attacking educators for being preoccupied with "frills" and are insisting that schools get "back to basics." In Washington, proposals have been advanced to create new national education standards and otherwise involve the federal government more deeply in local classrooms.

But today's students are not doing worse on what educators and parents traditionally have thought of as the "basics" - the three Rs. Where problems appear, they are with more complex skills, with the students' desire or ability to reason, with lack of interest in ideas, with a shortage of information about the world around them. If schools need to do anything today - and it is doubtful that schools alone can solve the problem - it is to get back to complexity, not to basics.

Elementary school children, for example, have shown continued progress in their ability to read and recall what they've read, to punctuate and spell when they write, and to do basic arithmetic. The National Assessment reports that the 9-year-olds it examines about every four years improved in both reading and writing between 1970 and 1974. McGraw-Hill's widely used Comprehensive Test of Basic Skills shows similar gains in reading and math for 2nd, 3rd and 4th graders. Iowa's state-wide testing program, the most comprehensive of any state's, also reveals continued progress in the primary grades.

While there are still plenty of pupils who do badly and need special attention, test results suggest that there are proportionately fewer of them in today's primary schools than at any time in the past.

Where the Problem Is

THE TROUBLE is at the secondary school level - and, again, not with "basics." The federally funded National Assessment shows, for example, that 17-year-olds did better in 1974 than in 1971 on its test of "basic literacy," and they did no better or worse on "literal comprehension" of more complicated reading matter. While several "functional literacy" surveys claim to show that high school graduates cannot read such simple material as classified ads and product labels, these conclusions are extremely questionable. A recent analysis by University of Michigan psychologist Donald L. Fraser shows, for example, that a large fraction of these so-called "illiterates" hold professional and managerial jobs. This suggests that the tests are not really measuring "functional illiteracy," as that term is normally understood. Some of these "illiterates" probably are just people who don't pay much attention when testers ask them boring questions.

The problem lies elsewhere. Where today's 17-year-olds run into difficulty, the National Assessment shows, is in making inferences from what they read. They know what a passage says - but they do not understand the author's point as well as their recent predecessors did. When they write they make no more punctuation or spelling mistakes - but they write less coherent paragraphs than 17-year-olds did a few years back.

They also have less information. High school graduates, various tests indicate, have not read as widely as their recent counterparts, or at least they do not seem to have retained as many facts about literature, history, contemporary society or scientific subjects. For those who commonly contend that "memorizing facts" is not the prime aim of the schools, it should be added that these students also appear less adept at using reference works to look up what they do not know. Nor do they seem to think as carefully about problems testers set for them, even when solving the problems requires no special information.

The magnitude of these declines shouldn't be exaggerated. Test scores have risen substantially since the 1920s, when standardized testing first became widespread. Achievement test declines among high school seniors did not begin until the mid-to-late 1960s, and they certainly have not eliminated the earlier gains. It is safe to say that today's high school students still know more on the average than their parents did at the same age.

But the drops on tests of more complex thought and information are not trivial. They could begin to have serious consequences if they continue, and it is important to try to understand why they are occurring.

Innovators Versus Traditionalists

TRADITIONALISTS, of course, blame the declines on school innovations introduced in the 1960s, while those responsible for the innovations, or sympathetic with them, mostly blame the tests themselves. The exams, they frequently argue, do not reflect much of what the schools are teaching. That is precisely what worries the traditionalists. They accept the tests as measuring important things that the schools should be teaching. In short, the clash is as much over what is in the test as it is over scores.

There are some things to be said for the innovators' view. It cannot explain or justify the test-score declines, and it should not lessen concern about them. But there is much misunderstanding about what tests measure.

A great deal of public attention, for example, has fastened on the drop in median levels on the Scholastic Aptitude Test given to college applicants. In 1976-77, only 44 per cent of those taking the SAT reached the median level of the 1969-70 group on the math questions, and only 39 per cent reached it on the verbal portion. But, contrary to popular impressions, SAT scores are not very useful in judging whether high schools are doing their job.

This is simply because the SAT measures many things that schools have never tried to teach. The SAT contains four kinds of verbal questions - antonyms, analogies, sentence completion and reading comprehension - as shown in the sample questions on Page C4. Most of the math questions presuppose a knowledge of simple arithmetic, elementary algebra or elementary geometry. But schools, for example, do not spend much time teaching students to do verbal analogies (Question 2) or to solve puzzles (Question 6), and they did not do so when SAT scores were rising either.

The SAT, as its name states, is designed to measure "aptitude," not achievement, and schools do not teach aptitude.

Those who stress the SAT cite the relationship between SAT scores and later success in college and life. College freshmen who scored above average for their incoming class on the SAT, for example, are two to three times more likely to earn above-average freshman grades than freshmen whose SAT scores were below average. Aptitude test scores also show a close relationship with the amount of schooling students ultimately get, and at least a moderate relationship with eventual earnings.

But statistics of this kind do not prove that the ability to answer SAT questions causes students to do well in college or earn a lot of money. It could easily turn out, for example, that the best way to measure aptitude is to see which students know the most about whatever subjects interest them. A generation ago, testers might have found that one good way to measure boys' aptitude was to discover how much a student had learned and could remember about professional baseball. Students who knew the most about baseball in high school might well earn above-average college grades and above-average incomes, simply because those who learned and remembered a lot about baseball also learned and remembered a lot about other things. But if the average student's knowledge of major league baseball began to decline, it would be ridiculous to conclude that students were going to have more trouble earning a living. It would be far more plausible to suppose that interest in baseball had declined, perhaps because more students were watching football or even reading poetry.

A decline in the average student's score on such an aptitude test would not necessarily foreshadow a decline in either college performance or earnings. Nor does a decline in scores on the SAT or similar tests necessarily mean that students are less capable of doing college work or earning a living. It may simply mean they are not spending as much time and energy on the things the SAT measures. That is nothing to cheer about. But neither is it a very useful way to judge the effectiveness of high schools.

Problems of Achievement Tests

THE ONLY sensible way to judge the significance of test-score declines is to look at the tests, item by item, and decide whether ability to answer that kind of item is really important. If the aim is to judge high schools, the tests to examine are those measuring achievement, since they are supposed to ask questions about what schools do teach.

Unfortunately, though, matters are not that simple either. Many achievement tests may be leaving out things which educators consider important to teach their students. Test makers, for example, usually weed out items which have what they term undesirable "statistical properties." This simply means that if the "wrong" students do well on an item, the test makers drop it.

Test makers almost always assume that those who know more about one aspect of a subject know more about other aspects as well. If students who answer a question correctly do not score above average on most other questions, the test maker assumes that the offending question does not "really" measure knowledge of the subject, and it disappears. If a student's chances of knowing the answer to a given question do not improve with age, the test makers also drop it.

These "rules" make sense for aptitude tests, which are supposed to predict future success. They make no sense for achievement tests, which are supposed to measure current accomplishments.

Until most achievement tests are made public, there is no way of knowing how well they measure the things we think important. At present, few school systems seriously analyze the tests they use. Teachers who have seen the tests complain that they do not measure things they want students to know, but this is widely attributed to teachers' defensiveness. This situation will persist so long as schools continue to use tests whose questions are not available for public scrutiny.

To assume, as many school systems do, that testers know what they are doing is naive. Most test makers are profit-making companies controlled by book publishers. They will sell virtually anything the public is foolish enough to buy. They will also keep their tests secret as long as they can. It saves them the trouble and expense of writing new questions every year and discourages substantive criticisms of their products. It also discourages serious public debate about what schools should teach.

Despite all these shortcomings, however, tests cannot simply be dismissed, and particularly not the National Assessment, which is an honorable exception to this pernicious pattern. It consults with a wide array of professional and public groups in designing its questions, and it makes the questions public once they serve their purpose. While National Assessment tests are not perfect, they are the closest thing we have to a consensus about what young people should know.

But the National Assessment yields essentially the same picture of declining ability to handle complex thought and information among 13-year-olds and 17-year-olds as do other standardized achievement tests. It reports that 17-year-olds as a group knew less about the natural sciences, wrote less coherent essays, made less accurate inferences from what they read, and were less adept at using reference works in 1973-74 than in 1969-70.

If this picture were merely spotty, with gains in some areas and losses in others, there would be room for considerable debate about whether the gains were more important than the losses. But when declines occur in every area requiring complex thought and information, and when they recur on tests designed by many organizations for diverse purposes, it is hard to avoid the conclusion that secondary school students are learning less than their recent predecessors in most areas that the schools themselves have emphasized.

Criticisms From the NEA

NOT EVERYONE accepts this conclusion. The National Education Association, the largest teachers' organization and a powerful critic of standardized tests, recently issued a report suggesting that the tests do not adequately measure even the skills schools traditionally have taught. For one thing, the NEA argued, tests don't measure students' ability to "analyze, synthesize, draw generalizations, and make applications to new phenomena."

But a look at the SAT questions below suggests that these charges are somewhat exaggerated. A student could not, for example, pick the right words to complete Question 3 without analyzing the logic of the sentence to see which words "make sense." Most students probably do this unconsciously, but that is true of almost all analytic tasks. Question 6 also invites logical analysis, and it asks students to apply their logic to a problem quite unlike those they ordinarily encounter in school. (To answer Question 6 quickly, the student must see that the number of cars leaving via Exit Y is equal to the total number of cars minus the number leaving via Exit X.)

The NEA is right in charging that neither the SAT nor other multiple-choice tests measure students' ability to synthesize diverse information or to generalize from it. Schools and colleges ordinarily try to develop and measure these abilities by having students write essays. If today's high schools were putting more emphasis on writing, and if students were producing more coherent essays as a result, many people would feel this gain more than offset a decline on multiple-choice tests. But neither the NEA nor anyone else claims this is happening. If secondary school students are writing more, it is a well-kept secret.

The NEA, like many other groups, has also attacked standardized tests for being "biased in favor of white middle-class culture and values." Certainly the verbal portions of these tests stress vocabulary drawn largely from books, not from colloquial black or lower-class speech. This reflects America's "official" culture, which is the culture of the middle and upper classes.

If educators no longer believed this, if they were stressing black or lower-class speech patterns in high school, that might explain why scores on traditional verbal questions have declined. But it is hard to accept such an explanation. Iowa, for example, is 98 per cent white and largely middle class in values and outlook. Yet its state-wide testing program shows steady declines at the secondary school level in vocabulary, as well as in ability to express oneself and in knowledge of literature, social studies and the natural sciences, between 1966 and 1974. This obviously has nothing to do with the argot of Harlem or of poverty cultures.

What has almost certainly happened to students' vocabularies over the past decade, in Iowa and elsewhere, is that they have contracted. It is hard to see how this is a good thing.

The NEA does not stop there. It suggests that standardized tests are biased not just in content but in "values." There are a few tests in which the testers' values affect decisions about what constitutes a "correct" answer. This is true, for example, of some items on IQ tests. But during the 1960s, social critics sometimes argued that all tests were biased simply because the middle classes valued reasoning skills and information more than the lower classes did.

Taken to its logical conclusion, this view suggests that if lower-class students are less interested in algebra than middle-class students are, schools should stop teaching algebra. Likewise, if white students enjoy reading more than black students, the schools should abandon books. This is absurd. Adults create schools to do more than just provide whatever skills students think they need. Schools are supposed to teach students skills they never thought about needing. This means schools must help shape students' values. There is, of course, plenty of room for controversy about what values schools ought to promote. But it is naive to suppose that they can or should be neutral, or that the values of one social or economic class are as good as those of another. Questions of values cannot be resolved by mindless egalitarianism.

Finally, the NEA complains that standardized tests stress "complex language and obscure vocabulary." It would be nice if everything worth saying could be expressed in the vocabulary of the average 9th grader. But anyone who has tried to write lucid prose about a complex subject knows this is not easy. Moreover, anyone who has tried to read his income tax forms or a report evaluating his local school system knows that many authors turn out documents that are extremely hard to follow. Those who cannot decipher such communications are at the mercy of those who can, and the situation is getting worse, not better.

One can argue, of course, that official agencies should eliminate unnecessary complexity and obscurity. But some prose is intrinsically complex or ambiguous. Shakespeare, for example, is regarded as having had a larger active vocabulary than any English dramatist before or since; his range of language is an essential part of his power. The Bible is also filled with "complex language." This is no accident. Prose is often influential because it is complex. If the NEA no longer thinks it worthwhile to teach students how to understand such prose, this may help explain why students are less capable of doing so.

Changes in the Schools

IF THE TESTS are not to blame, what is? Many explanations have been offered.

When the College Board first announced that SAT scores were dropping, many people assumed this chiefly reflected the fact that more low-scoring students were completing high school and applying to college. Since this had been a major aim of social policy, the news was neither surprising nor alarming. But this turned out not to be the whole story.

Last summer a panel appointed by the College Board concluded that changes in college attendance patterns had indeed been responsible for most of the SAT decline between 1963 and 1969. But the proportion of low-scoring students finishing high school and entering college had reached a plateau around 1969, while SAT scores had continued to decline. The panel concluded that there had been two declines, the first caused by changes in college recruitment patterns, the second by a general decline in all high school students' skills at what the SAT measures. This second decline seems to have begun in the late 1960s, though it did not become large enough to arouse much interest until the early 1970s. It is this second decline that requires explanation.

Although America spends hundreds of millions of dollars a year testing students, we still have no comprehensive system for monitoring the results from year to year. This makes it difficult to say when the high school classes of 1970-77 first began to lag behind their predecessors. They do not seem to have had any special trouble mastering the basic skills taught in elementary school. But the class of '70 evidently was not doing as well on conventional achievement tests when it entered high school in 1966. More recent classes seem to have started falling behind at lower grades, and have been further behind when they graduated. This suggests that either the schools, the society, or both changed in some way in the mid-1960s.

We do not know much about how schools have changed since the 1960s. They certainly have more financial and human resources than ever before. Even after allowing for inflation, public elementary and secondary schools were spending 50 per cent more per pupil in 1976 than in 1966. There are more teachers per pupil, and the teachers have had more formal education than ever before. Unfortunately, we do not know how, if at all, these changes affect what actually happens in the schools.

DESEGREGATION.

One popular explanation for declining test scores is that desegregation forced many previously all-white schools to lower their academic standards to accommodate nonwhite students with fewer academic skills. But desegregation has been confined largely to the South and to a few big Northern cities. Test scores, in contrast, have dropped throughout the nation. In many cases the decline is even greater in the North than in the South. Both Iowa and Minnesota are more than 98 per cent white, for example, yet they report marked declines in high school scores. The trouble cannot be blamed on blacks.

ELECTIVES.

Schools have changed in other ways. High schools, for example, offer more elective courses today than in the past. This could lead either to increases or decreases in skills, depending on what the electives cover. Some deal with such "non-academic" subjects as drama, film and science fiction. But other new electives provide college-level academic work in literature, history and calculus. A 1975 investigation by Annegret Harnischfeger and David Wiley, sponsored by the government's National Institute of Education, found that secondary school students took more academic courses in 1970-71 than they had in 1960-61 or 1948-49. They found a small decline in academic courses between 1970-71 and 1972-73, but it is not clear whether this trend has continued.

Even if students were taking more academic subjects, some argue that the content has been watered down. But a recent study of textbooks by Harvard's Jeanne Chall found no clear evidence that high school texts had become easier to read. (Elementary school texts were clearly more difficult than they had been a decade ago.) The main conclusion to be drawn from Chall's work is that high school texts have always been written well below the level most students could understand. This "lowest common denominator" approach may make sense in terms of communicating subject matter. But it means that textbooks probably have never done much to enhance most students' abilities in reading or writing.

HOMEWORK.

Nobody seems to have investigated how much homework students are doing today compared to a decade ago. More important, nobody seems to have compared the kinds of homework today's students are doing to what their predecessors did. We do not know whether high school students are reading more or fewer pages per week, writing more or fewer essays, doing more or fewer algebra problems. Traditionalists, of course, are convinced that students are doing less in every area. They may be right, but it would be nice to have some evidence.

LOWER EXPECTATIONS.

Traditionalists are also convinced that teachers today expect less of their students, often citing grade inflation as evidence. There is not much question that the average high school gives higher grades today than it did a decade ago. Since high school test scores are falling, it is fair to assume that it is easier to earn any given grade today than it was in the past.

It is not clear, though, how raising the average grade affects students' motivation to learn. It probably takes some time for grades to lose their traditional connotations. A student who gets a B-, for example, may feel he is doing pretty well, even though he knows that most of his classmates are doing even better. But it is not obvious that students work less when grades convey more praise. They may well do more.

In practical terms, grades are mainly important for getting into college. Most colleges either admit all applicants or select those they judge most promising. Selective colleges are seldom concerned with applicants' actual grades. They look at an applicant's rank relative to others in the class. Most students who want to attend a selective college realize this, so for them grade inflation changes little.

More important than general grade inflation may be changes in the traditional criteria for awarding top grades. If, for example, schools were placing less emphasis on writing mechanics and more emphasis on creativity, this might affect students' motivation to master the mechanics of writing. Unfortunately, again, we know very little about these matters. The College Board panel found that the relationship between high school grades and SAT scores had actually risen in recent years. At first glance this may seem surprising, but on closer scrutiny it is precisely what we would expect if high schools were placing less emphasis on routine academic work.

Grades in traditional subjects usually depend on two factors: ability and effort. One standard complaint about high schools during the 1960s was that they demanded too much memorization and assigned too much "make-work." But under those circumstances, slow learners could do relatively well simply by working hard. Such students did not, however, do very well on the SAT, which rewards mental gymnastics. In a school that demands a lot of routine work, bright students who goof off most of the time, or think the teachers stupid, seldom end up at the top of their class.

Many of the reforms of the 1960s were aimed at eliminating drudgery from the curriculum. In practice, the curriculum was often redesigned for the student who talked a good game, wrote a flashy paper, and could "see" the solution to a math problem rather than having to learn it step by step. In those circumstances, "smarts" count more and diligence less. The rising correlation between SAT scores and high school grades may therefore mean that schools are requiring less work of their students and thus providing fewer opportunities for the "plodders" to shine. Since scores on almost all standardized tests depend partly on how much miscellaneous academic information the student has been exposed to before the test, a decline in the workload could conceivably contribute to a decline in the average student's SAT score, as well as to declines on other tests.

Even if students were doing less academic work in secondary school today, we would have to ask why. Educational conservatives discuss the curriculum as if administrators and teachers exerted total control over what happened in "their" schools. But students are not infinitely plastic. If schools try to impose requirements that students regard as illegitimate or silly, students have innumerable ways of defeating the effort. Adolescents outnumber adults by about 20 to 1 in most high schools. If adults want the school to run smoothly - indeed, if they want it to run at all - they must make accommodations to students' prejudices.

Conservative critics seem to assume that schools introduced courses on film or science fiction because teachers personally thought these subjects more interesting than Shakespeare or Dickens. But it is equally likely that schools introduced them because they found that students were less and less interested in Shakespeare and Dickens. When the natives become restless, a colonial administration often tries bread and circuses. It is easy for newspaper pundits to deplore these ploys. It is much harder to devise workable alternatives.

Changes in the Society

OTHERS LOOK TO changes in the society to explain the test-score drop. The currently fashionable explanations in this area fall into three general classes: demographic, economic and cultural. The demographic and economic explanations do not, however, fit the known facts.

WORKING MOTHERS.

Today's students are, of course, more likely to have working mothers, and some people assume this means that they get less stimulation at home. But this theory has a fatal flaw: Children of working mothers do not score lower on standardized tests than children of non-working mothers if their mothers are similar in other respects.

ABSENT FATHERS.

Today's students are also less likely to have fathers at home than their immediate predecessors were. Other things being equal, children living in single-parent families seem to score slightly lower on standardized tests than children in two-parent families. But the difference is small, and the proportion of all students living in single-parent families is also small, though rising. While changes in family structure may have played a slight role in declining test performance, most of the decline must be due to other factors.

BABY BOOM.

Another theory holds that test scores declined because of the baby boom that began after World War II. Children from large families generally have lower test scores than those from small families, even when their parents are alike in other respects. Children born close together also do slightly worse than children born farther apart. Since families got larger and children were born closer together after World War II, high school seniors' scores should have begun to fall after about 1962. Furthermore, many families had their first child right after World War II. Since eldest children tend to score slightly higher than younger children in the same family, one might expect a further decline in test scores after 1963 or 1964, as the proportion of younger children began to rise.

But if this theory were correct, elementary school scores should have begun falling in the 1950s. Actually, they rose. Furthermore, changes in family size, birth order and child spacing were not nearly large enough to explain the decline in high school performance, especially after 1970.

ECONOMICS.

The economic benefits of higher education have declined slightly since 1970. In 1969, male college graduates between the ages of 25 and 64 earned 50 per cent more than high school graduates of the same age. The earnings differential had narrowed to 40 per cent by 1975, and the decline for men between 25 and 34 was even sharper.

Some economists believe that high school students have decided that higher education is no longer important and are thus doing less academic work. This could easily lower scores on many tests. But it would be astonishing if appreciable numbers of high school students were aware of the change before about 1973. Since a high school senior's test performance is the cumulative result of his previous efforts and experience, a change in motivation that took place in 1973 would not have an appreciable effect on SAT scores until several years later. The decline in scores between 1969 and 1975 must therefore have had other sources.

TELEVISION.

A large fraction of the public attributes test-score drops to cultural factors such as television. Today's high school seniors usually have spent more hours in front of a TV set than in classrooms or at homework. TV writers seem to assume that the average American has a mental age of about 12, so one might expect the introduction of TV to broaden the horizons of children under 12. Since TV competes with homework and non-academic reading, it could narrow the horizons and limit the academic development of older children. The result would be a rise in test scores among elementary school students and a decline among secondary school students. This is exactly what has happened since the mid-1960s.

But television was almost universally available by the early 1950s. If it really rots high school students' minds, why did the symptoms not appear until the late 1960s? It can be argued, of course, that high school students watched far less TV in the 1950s than in more recent years, but they still watched quite a lot, and if TV lowers scores, it should have had an effect even then.

The best way to salvage the TV hypothesis may be to argue that television itself changed during the 1960s. One could argue, for example, that in the 1950s TV offered programs clearly labeled as "entertainment." This entertainment was certainly mindless, but it was not designed to persuade viewers that life as a whole was mindless. Television today offers the viewer a more complete vision of the world. It is not all as bland or myopic as what appeared 20 years ago, but it still has no room for consecutive thought. Programming directors are too worried about losing restless viewers to include ideas that require more than a few seconds' sustained attention. One would not expect children raised on such a diet to be interested in complex, sustained thought. Nor would one expect them to have much patience with school work, which is seldom entertaining.

All this, however, is conjectural. Nobody has done systematic research on how different sorts of television affect students' development at different ages. Nobody has compared the test-score trend in the United States to that in other countries where TV has a very different character. Until something of this kind has been done, the TV hypothesis should be treated cautiously.

PERMISSIVENESS.

Another popular explanation of declining scores is "permissiveness." This is not just a matter of students thumbing their noses at grownups and getting away with it. Adults are less certain that they have a right to make decisions for or about the young. Teachers are more sensitive to student opinion and more worried if students are bored. Doubts about the legitimacy of traditional standards have made adults less willing to coerce students into doing what adults think they ought to do. Schools are less eager to punish students who miss school or class, fail to do homework, or do it late.

Declining faith in adult authority need not, of course, imply declining interest in ideas or in doing academic work. The life of the mind requires that one have a certain disrespect for those in authority, that one remain loyal to one's own notion of rationality even when authorities do not share it. Trouble arises, however, when people lose respect for external authority without developing any internalized standards of their own. This seems to have happened in many schools in the 1960s.

Students discovered that the vision of the world in their textbooks was seriously misleading. It did not, for example, square with the American role in Southeast Asia or, later, with Richard Nixon's way of managing American politics. For those already committed to the hegemony of reason, these discoveries were merely an incentive to work out better explanations of events. But to students with no prior commitments, these same discoveries often implied that rationality itself was just another illusion. There seemed to be so many competing interpretations of reality that it was hard to defend one to the exclusion of others. This led not only students but many teachers into the kind of spongy cultural relativism that treats all ideas as equally defensible.

But if all ideas are equally defensible, none is worth bothering with. For students who would rather be surfing or watching TV crime dramas, the life of the mind may well appear pointless. If you can "prove anything with statistics," why learn to use them? Why, indeed, even learn to add?

Today's schools ought to be trying to restore respect for the value of reason, in all its complexity. There is, however, no sign that this is happening. The NEA is busy complaining about "complex language and obscure vocabulary" and trying to prove that everything is really fine. While television could make an enormous contribution to changing people's attitudes about these matters, there is little chance that a profit-hungry industry will do so. Programs which try to "improve" viewers, either overtly or covertly, cannot possibly draw audiences as large as "Bionic Woman," which panders to the child in us all.

The "back to basics" movement seems mainly concerned with restoring respect for persons in authority, like parents and teachers, and not for ideas. It wants to put more emphasis on basic skills, which are not deteriorating, rather than on the complex skills which are. Indeed, the drill work of the back-to-basics movement seems to be a prescription for discouraging interest in complex ideas.