Parents and students may not know this, but some of the questions kids have to answer on standardized tests don’t actually count. Why? Here to explain is Fred Smith, a testing specialist who retired as an administrative analyst for the New York City public schools. He is a consultant on testing, educational research and other statistics related to city government, and a member of Change the Stakes, a parent advocacy group. This piece appeared on the SchoolBook website. In it, Smith discusses the new standardized tests that New York students are now taking, which are aligned with the Common Core State Standards and which officials warned would be harder than the old standardized tests used for accountability purposes.
By Fred Smith
Embedded in the new English Language Arts exams for New York students are field test questions. They do not count toward the test score. They are being tried out so the publisher can see how the items work and decide which ones to use next year.
I wonder whether parents should have the right to give or deny permission for their children to participate in what is essentially research for Pearson, the for-profit test publisher.
Let me explain how the field tests work. On last Tuesday’s third-grade test there were 30 items: five reading passages, each followed by multiple-choice questions. Children had 70 minutes to complete the test. They did not know which items counted and which did not, and the test publisher didn’t say how many of the 30 were field-test items.
A reasonable assumption is that one 500- to 600-word passage was being tried out, along with a set of multiple-choice items. If everything was evenly balanced, that would mean six items were being tried out with this passage, which would take about 14 minutes to complete, or one-fifth of the allotted testing time.
The publisher, Pearson, is using four forms of the test to try out items. The four forms consist of the same operational questions, plus one of four sets of field-test items. So Book 1 had 30 questions, some that counted and some that didn’t. The forms are designated A, B, C and D. The New York State Education Department said the field-test material would be interspersed with the operational items.
Here is how this arrangement of the 30 items in Book 1 might look, based on the third-grade example (OP = operational passage; FTP = field-test passage):
Form A: OP 1–6 / OP 7–12 / OP 13–18 / OP 19–24 / FTP 25–30
Form B: OP 1–6 / OP 7–12 / OP 13–18 / FTP 19–24 / OP 25–30
Form C: OP 1–6 / FTP 7–12 / OP 13–18 / OP 19–24 / OP 25–30
Form D: FTP 1–6 / OP 7–12 / OP 13–18 / OP 19–24 / OP 25–30
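For readers who want to check the arithmetic, the layout above can be sketched in a few lines of code. The placement of each form’s six-item field-test block follows the article’s illustrative example, not any published Pearson specification.

```python
# Sketch of the four-form layout described above. The field-test block
# positions are the article's illustrative assumption, not a published
# Pearson specification.

SEGMENTS = 5          # five passages per test booklet
ITEMS_PER_SEGMENT = 6  # six multiple-choice items per passage

# For each form, which segment (0-indexed) holds the field-test passage.
FIELD_TEST_SEGMENT = {"A": 4, "B": 3, "C": 1, "D": 0}

def layout(form):
    """Return the ordered segment labels (OP or FTP) for a given form."""
    return ["FTP" if i == FIELD_TEST_SEGMENT[form] else "OP"
            for i in range(SEGMENTS)]

for form in "ABCD":
    labels = layout(form)
    # Every form has 24 scored items and 6 field-test items.
    scored = labels.count("OP") * ITEMS_PER_SEGMENT
    tried_out = labels.count("FTP") * ITEMS_PER_SEGMENT
    print(form, labels, f"scored={scored}", f"field-test={tried_out}")
```

Whatever the actual positions, the totals are the same on every form: 24 questions count toward the score and six do not.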
Since students don’t know which passages and questions will count, they will be expending energy on parts of the test that don’t, and because the state has called for tougher, more challenging items, the effort will probably wear them out and frustrate them. The extra effort will have a bearing on how well they do on the test. It is likely they would do better if the test were limited to the four passages and 24 questions that count toward the final score.
Furthermore, since one-quarter of the students get each of the four forms, the children who get Form C or Form D will run into the field-test items (by definition less refined than the other items) near the start of the test. They will have a greater chance of being sapped and defeated before they reach the end than children taking Form A, who will have a better chance of completing all the items that count.
This has implications in New York City, where the Department of Education makes extrapolations from test scores to student promotions, teacher evaluations and school ratings.
There is another kind of field testing, known as stand-alone field testing, coming to our students soon. Here the field-test items are contained in separate booklets and the results don’t count toward student scores; the sole purpose is to try out more items for possible use next year. This will occur in the first week of June.
There are three problems with field testing children in June. First, they know the tests don’t count. Second, it’s June. Third, after a test-heavy school year, it is unlikely they will give their best effort and perform optimally. So the results derived from the stand-alone field tests will be a misleading basis for selecting items for next year’s exams.
But there’s also an ethical and legal question underlying all this. Trying out embedded and stand-alone field-test items is in the nature of an experiment: it serves the commercial interests of the publisher and makes children the subjects of research.
Shouldn’t the permission of parents be sought to allow their children to participate in such activity?
In June, most schools will be told to administer field tests. Shouldn’t parents know in advance that the field tests are coming?
If they knew, parents might want to opt their children out of them rather than sacrifice more classroom time to testing.