
Answer Sheet
Posted at 11:43 AM ET, 06/05/2011

5 reasons parents should oppose evaluating teachers on test scores

--

This was written by Carol C. Burris and Kevin Welner. Burris is the principal of South Side High School in Rockville Centre, N.Y. Welner is a professor of education policy and program evaluation in the School of Education at the University of Colorado at Boulder, and director of the National Education Policy Center. He can be reached at welner@colorado.edu.

By Carol Burris and Kevin Welner

The 1988 film “Stand and Deliver” portrayed Jaime Escalante’s inspirational teaching of AP Calculus to his East Los Angeles students. Escalante instilled ganas, the desire to succeed, in high school students, many of whom had never before known academic success. Viewers witnessed Escalante and his students teaming up against the test; it was important to them to show the world what they had done together.

Now imagine that Mr. Escalante’s appeal to his students had been, “I need you to get at least a ‘3’ on this exam because I don’t want to lose my job.” Would his students have responded with such dedication if they knew that Escalante’s motives were personal rather than selfless? Probably not. But how many teachers wouldn’t think in those terms if their jobs really did depend on students’ scores?

Evaluating teachers based on their students’ test scores is the newest education-policy fad. It has a gut-level appeal that’s usually articulated as “rewarding success.” The argument is that teachers (and principals) should be judged on their students’ test results, with good educators promoted and bad ones fired. In truth, lots of misguided educational policies have a gut-level appeal. Although the ‘gut’ may feel good when such policies are enacted, the unintended consequences do not feel quite as good, especially when they are felt by our students.

The story of Jaime Escalante illustrates the first of five reasons why we think parents should be concerned about educators being evaluated by test scores.

Reason #1: This use of student scores will damage the relationship between teachers and students (the ‘Escalante’ reason)

When teachers and principals are evaluated this way, children will be increasingly seen through the filter of their contribution to teacher scores. Bobby has a private tutor? Hooray: value added! Sally has a chronic illness that causes her to be absent? Uh oh: value decreased. In such a climate, the inspirational teacher-student relationship personified by Jaime Escalante will wither on the vine.

Parents might not be aware of how these incentives are playing out within schools, but teachers and principals are (and will be). We all want student achievement to improve, but there is no evidence that financial incentives based on student scores somehow motivate teachers to teach ‘better.’ On the other hand, there is much evidence showing that the easiest way to improve a school’s or teacher’s test scores is to shift students around. Enroll high-scoring students and push out low-scoring students if you want to show you’re doing well – it’s a tried-and-true formula. So yes, the new systems will create incentives that change behavior; but manipulating student placements in classes to increase a teacher’s score is not a behavior we want to reward.

Reason #2: This use of student scores will diminish access to challenging classes for students when prepping for the test becomes the focus

Back to Jaime Escalante. What excited a nation about his work was not that he taught AP Calculus, but that he taught AP Calculus to any student who wanted to take it. His success prompted schools across the nation to open the gates of challenge to students who had previously been shut out of AP, IB and honors classes because their prior scores were not high enough or they were viewed as not smart enough. National studies, most notably the 1999 study by researcher Cliff Adelman, demonstrated that taking rigorous courses in high school is one of the most accurate predictors of college success. He found that challenging courses trump test scores when predicting who eventually completes college.

When growth in scores becomes the goal, however, the score is where efforts will be focused. If the scores are course-specific, as is the case with New York State’s new system, which allows locally determined measures as well as Regents exams to be used, then teachers are put in a bind. Think of students like those in Escalante’s classes, who were not particularly advanced but were still willing to take on the challenge of AP and IB classes. Teachers who enthusiastically welcomed such students before the new policy now have good reason to fear that those students will lower their attributed (value-added) scores.

Reason #3: This use of student scores will cause many schools and classrooms that need good educators the most to lose them

It is impossible for statisticians to isolate and measure all of the factors that result in the score each student obtains. The June issue of the American Educational Research Journal includes a study finding that teaching a class that enrolled accelerated students resulted in higher teacher value-added scores when compared with the teachers of special education students, even though the model included controls for prior student achievement. In addition, teachers with poor teaching skills, as documented by trained observers, had good value-added test scores if they taught bright students. The authors’ conclusion was that this phenomenon would cause “disincentives for teachers to teach the lowest performing students” (Hill et al., 2011, p. 826).

Good principals put their best teachers with the kids who struggle. Will they continue to do that if they know it will hurt those top teachers’ evaluations? Which educators will want to teach in a school where teachers are routinely fired because there are many needy students with low scores? The best teachers will have even greater incentive to leave for the ‘safe schools’ in wealthy suburbs.

Reason #4: This use of student scores will promote teaching to tests at the expense of enriched, engaging learning

Attaching high stakes to students’ test scores has already led to a narrowed curriculum (squeezing out subjects other than those tested). Under NCLB, schools were subjected to escalating sanctions if students’ scores failed to meet targets, and those pressures produced many unintended, negative consequences. Now, with educators themselves (rather than their schools) subject to high-stakes sanctions, teaching to the test and curriculum narrowing will become even more pervasive.

The other day we got a look at one of the standardized tests that will be used to measure growth in English language arts. One question was similar to this:

1. The inane comments of his brother, which John could not understand, did not help resolve the conflict between the two siblings.

In the sentence above, the word inane means:

a) angry
b) senseless
c) jealous
d) crazy

This question is designed to find out whether the student is skilled in using context clues to figure out the meaning of a word. The fact that real writers rarely define the words they use in the same sentence (or even in the same paragraph) is inconsequential to test creators. The skill is a Common Core standard and so it must be measured. But what do we really know about the skills of a student who chooses answer (b)?

She might be a good guesser. After all, there is a 25 percent chance of getting the right answer without even reading the sentence. Or maybe she learned the test-taking skill of “eliminate the distracter.” A well-trained test taker would not be fooled by the “inane looks like insane” trap, and eliminating that option raises the guessing odds to 33 percent.

Possibly, she already knew the meaning of the word from conversations at home or from reading. It may be that her teacher taught the class the meaning of the word the week before. Or maybe the student’s teacher prepped her students to use context clues.

In classrooms across America, instruction will increasingly sound like this: “When you see a question with a difficult word in a sentence, that is probably a context-clue question, so be sure to look in the sentence for a synonym or a describing phrase. Sometimes they even use opposites to trick you, like this: ‘David was gregarious, not shy like his father.’” Already we have heard teachers refer to ‘writing for the test’ as a “genre,” akin to narrative or poetry. Is that the best use of your child’s instructional time?

Reason #5: This use of student scores will siphon precious tax dollars from programs that benefit students

By now it is clear from research that value-added measurement approaches are inadequate to isolate true teacher or principal effects. Such research will likely become central in the lawsuits filed after this policy is enacted. Meanwhile, millions of taxpayer dollars will be wasted on test creation, test grading, principal training, and lawsuits. Those millions will go into the pockets of private companies that develop and score tests, along with an army of consultants and lawyers. At a time when thousands of teachers are being laid off across the country due to cuts to education, is this really how we should spend our precious tax dollars?

If there were solid evidence that evaluating educators based on student test scores would truly improve the education of America’s students, we would give it our wholehearted endorsement. Nothing is more important to us professionally than the well-being of our nation’s public school students. But as researchers and parents we know that this inane policy will not benefit those we care about the most. There is no “value added” for our students.




    © 2011 The Washington Post Company