William Raspberry, in denouncing the recent Maryland statewide writing proficiency test, which an excessively large number of 11th graders failed, seems to be off target on a number of points.

First of all, Raspberry's concept of "standardized test" is too sweeping. He erroneously associates a "standardized" writing test with ditto duplicates, fill-in, multiple-choice questions, and a machine-like method of grading. He also falsely assumes that any writing test not devised by the classroom teacher is "standardized" and, therefore, by definition, not workable.

By dragging in all the bad elements of so-called objective testing in reading, mathematics, etc., Raspberry succeeds in distorting the true nature of the Maryland writing test and others like it. Actually, it is probably a mistake to call these exercises "tests." Rather, they are writing samples. This simply means that the student is given a series of topics and is usually asked to write a paragraph or a short essay on any two of them. (I have administered and evaluated scores of these samples.) The samples are then passed around for grading by a number of English teachers of established competence and experience.

This brings us to the question of objectivity, a term that, again, Raspberry uses in too sweeping a manner. No one in his right mind would claim that writing competence can be objectively "measured" the way that mathematics or chemistry can; but Raspberry is clearly wrong in assuming that the evaluation of statewide writing proficiency tests is so subjective as to negate their value.

What Raspberry fails to take into consideration is that subjectivity becomes less of a factor at lower levels of competency. When one is grading samples of 11th grade expository writing, it is not too difficult to separate semiliteracy from literacy.

For example, there are about eight "gross" errors in grammar that are universally recognized as such. Any piece of writing in which these errors occur to excess would have to be declared a failure. It is also not too difficult to detect incomplete sentences. Conversely, any student writing sample that is relatively free of these flaws should pass.

The only point on which Raspberry and I seem to agree concerns the numerical grading system employed in the Maryland test. In my view, this type of grading is inappropriate for determining mere proficiency. The test should have one cutoff point, below which a writing sample would be assessed as failing. Those falling in that unwelcome category would be denied a high school diploma until they demonstrated, on further tries at the test, a basic mastery of written English.

The terms "measuring" and "standardized," as they apply to evaluating student writing, do not help Raspberry's argument. Those of us who spend most of our waking hours judging student writing have "standards," but we deplore standardized methods of evaluation. Most English teachers, I am sure, have no difficulty in spelling out why "Hamlet" is superior to "Rambo" as a dramatic work of art; and, in like manner, they have no problem in separating an acceptable piece of student prose from an unacceptable one.

It is always a culture shock to learn that thousands of teen-agers have a substandard grasp of their native language, but the problem cannot be rationalized away by placing the blame on those who conceive, administer, and evaluate writing samples. Widespread semiliteracy among our young people is a national disgrace, and we need to get about the business of wiping it out.