District public school students will take the same standardized reading and mathematics tests this May that the school system has given since 1975, after officials delayed picking a new exam out of concern that it would yield lower scores.

On the old tests, most students have scored above the national norms, and the school system has shown substantial improvement for the past seven years. But those norms are based on nationwide sample testing conducted in 1972-73, with questions that are still being used.

"The fact is there will be a dip in test scores if we use a test that has been normed more recently than the ones we currently use," said David L. Huie, the school system's director of quality assurance. "We've decided to give the old test again this year . . . . We have to be very careful that there is a full program of awareness before we introduce a new test."

Huie added that the school system would need time to change its curriculum to match a new test "so it is a fair examination of our children and their skills." He said he hoped that a new test would be selected this spring and used in May 1987.

"We will have a drop, but there is no reason to have an unnecessary drop," Associate Superintendent James T. Guines told a school board committee on Tuesday. "There are some new things that are stressed in the new tests that were not stressed in the old tests . . . . There's more on comprehension and applications, the higher level reasoning skills where we have not done as well as we would like. So we'll have to make sure the curriculum has the same emphases that the new tests have."

Around the country, standardized tests usually are changed every six to eight years, several test experts said, although two big cities -- Cleveland and San Francisco -- are still using the same 1974 version of the Comprehensive Test of Basic Skills (CTBS) as the District.

Last spring, several thousand D.C. students were given three newer standardized tests as part of a research project designed to help choose one for systemwide use this May. Officials said most of the scores were substantially lower than those reported for the CTBS. But Huie said the results would not be released until Superintendent Floretta McKenzie recommends to the school board which test should be selected.

The recommendation was originally scheduled for last October so that a new test could be used this spring. Last week, Edna Frazier-Cromwell, chairman of the school board's research and evaluation committee, asked McKenzie to present her recommendation by Feb. 27.

A report prepared by administrators for the committee said five new tests were given to groups of students throughout the city in May 1983. But Janis Cromer, a spokesman for McKenzie, said the superintendent had also decided not to make the results public until she presents her recommendation. Officials said these scores, too, were generally low.

"It's hard for a lot of places to change their test. So much is riding on it," said Gerald Bracey, a testing expert for the Virginia Department of Education, which last changed its statewide test in 1981 to the Science Research Associates (SRA) exam. "After a while people learn what's on it, and they teach to the test. If you keep a test long enough almost everybody is going to get above grade level. You have to make a change."

Bracey said that when Virginia last changed its standardized tests, reported scores went down in the fourth grade because of higher average performance nationwide, while scores rose in the eighth and 11th grades because the national averages in those grades had dropped during the 1970s. Maryland reported a similar pattern when it switched to the California Achievement Test (CAT) in 1981.

More recently, however, publishers have reported slight nationwide gains in the high school grades after a long slide, a trend also reflected in Scholastic Aptitude Test (SAT) scores for college-bound high school seniors. As a result, a higher level of performance is needed to reach the national norms on the newer tests. Unlike the SAT, which is scored on a fixed scale, the standardized tests compare students with their peers across the country at the time the norms were set rather than providing a fixed measure of achievement.

"If the scores go down, it doesn't mean there's been any drop," said Eva L. Baker, director of the Center for Student Testing, Evaluation and Standards at the University of California at Los Angeles, which is funded by the U.S. Department of Education. "It just means the yardstick has changed. The problem is that people are so suspicious that they think there must be a problem."

However, Baker and several other experts said major difficulties in interpreting standardized test results have been caused by the practice of many school districts, including D.C., of focusing large parts of their curriculum on the skills featured in the exams.

In recent years, according to an article published by the National School Boards Association, nearly all big-city school systems -- including New York, Los Angeles, Baltimore and Atlanta as well as Washington -- have raised their test scores above the national norms by using similarly focused curricula.

When publishers conduct sample testing to set norms, by contrast, they base the questions on a survey of what is taught around the country and deliberately do not tell the participating schools which topics the test will cover. This practice deflects criticism that the tests try to "dictate" curriculum and is intended to show what testing officials call the "natural" range of achievement nationwide.

"The kids that take the tests for norming aren't prepped," said Donald Sale, testing supervisor for the Virginia Department of Education. "But when you do that for your kids you're giving them an edge . . . . You've pumped up some areas, and that's not representative of what children know . . . . But it's become such a universal practice now that almost everybody is getting pumped-up scores. So if you don't teach to the test it may make you look weak."

Paul L. Williams, director of research and measurement services for CTB/McGraw-Hill, the firm that publishes both the CTBS and the CAT, said the widespread "targeting" of the curriculum to the tests has reduced their usefulness "as an indicator of relative position nationwide," but he added: "You can look at the questions and see that they're testing things which children ought to know. When the scores are going up it means that the kids are learning, and that's something good . . . . People will get used to the new norms."