VAMboozled is a blog about teacher evaluation, accountability, and value-added models written by Audrey Amrein-Beardsley, associate professor at Mary Lou Fulton Teachers College at Arizona State University. The following post appeared on her blog, from an unidentified teacher in Arizona. The teacher reveals the idiocy of the “value-added” method of evaluating teachers, which uses student standardized test scores as a key measure. This teacher raises a question that teachers in other states have also confronted: she will be graded on the test scores of students she didn’t teach. As Amrein-Beardsley notes, the story told by the teacher “is becoming a too familiar story.”

From the teacher:

Initially, the focus of this note was going to be my six-year experience with a seemingly ever-changing educational system. I was going to list, with some detail, all the changes that I have seen in my brief time as a K-6 educator, the end-user of educational policy and budget cuts. Changes like (in no significant order):

Math standards (2008?)
Common Core implementation and associated instructional shifts (2010?)
State accountability system (2012?)
State requirements related to ELD classrooms (2009?)
Teacher evaluation system (to include a new formula of classroom observation instrument and value-added measures) (2012-2014)
State laws governing teacher evaluation/performance, labeling and contracts (2010?)

have happened in a span of not much more than three years. And all these changes have happened against a backdrop of budget cuts severe enough, in my school district, to render librarians, counselors, and data coordinators extinct. In this note, I was going to ask, rhetorically: “What other field or industry has seen this much change this quickly, and why?” or “How can any field or industry absorb this much change effectively?”

But then I had a flash of focus just yesterday during a meeting with my school administrators, and I knew immediately the simple message I wanted to relay about the interaction of high-stakes policies and the real world of a school.

At my school, we have entered what is known as “crunch time”—the three-month period leading up to state testing. The purpose of the meeting was to roll out a plan, commonly used by my school district, to significantly increase test scores in math via a strategy of leveled grouping. The plan dictates that my homeroom students will be assigned to groups based on benchmark testing data and will then be sent out of my homeroom to other teachers for math instruction for the next three months. In effect, I will be teaching someone else’s students, and another teacher will be teaching my students.

But, wearisomely, sometime after this school year, a formula will be applied to my homeroom students’ state test scores to determine close to 50% of my performance rating. Then another formula (incorporating classroom observations) will be applied to convert this rating into a label (ineffective, developing, effective, highly effective) that is then reported to the state. And so my question now is (not rhetorically!), “Whose performance is really being measured by this formula—mine, or that of the teachers who taught my students math for three months of the school year?” At best, professional reputations are at stake; at worst, employment is.