
Answer Sheet
Posted at 01:13 PM ET, 06/24/2011

Policy makers ignore the teachers — again

The Maryland Council for Educator Effectiveness ignored the classroom teachers on the panel when devising a new teacher evaluation system.

It should no longer shock me when classroom teachers are entirely ignored in education policy (there wasn’t a teacher in the big bunch that wrote the No Child Left Behind law, for example). But I expected better from Maryland.

Here’s what happened: Earlier this week, the Maryland Council for Educator Effectiveness voted on a new way for schools to evaluate teachers and principals that ties 50 percent of each assessment to what are technically called “student growth measures” but which actually are scores on standardized tests.

The scheme, promised by the state in its successful application for $250 million in federal Race to the Top funds, is being field-tested in seven Maryland school districts next year as it is readied for statewide implementation.

Assessment experts have warned against linking evaluation to standardized test scores, saying it is unreliable and invalid for a number of reasons (see here). And districts, including the educational powerhouse Montgomery County, have their own evaluation systems that work well; Montgomery’s is known nationally for including teachers in the evaluation process.

When it came time to discuss and vote on the evaluation system, all of the classroom teachers on the Maryland Council for Educator Effectiveness were opposed. But a vote was taken to end debate and approve the recommendations. The motion passed 13 to 7; the dissenters were the six classroom teachers and Montgomery County Board of Education President Christopher Barclay (Silver Spring).

So much for what teachers think.

The council, established in June 2010 by Gov. Martin O’Malley, is composed of six teachers or teacher representatives; the state superintendent of schools; two principals; two school administrators; two members of local school boards; a business community representative; a member of the state Board of Education; a higher education representative; two at-large members with education expertise; a member of the Maryland Senate; and a member of the Maryland House of Delegates.

Here’s the letter that the teachers sent after the vote on Monday to Gov. Martin O’Malley and other officials about why they oppose the evaluation plan:

To: The Honorable Martin O’Malley

The Honorable Thomas V. Mike Miller

The Honorable Michael E. Busch

Mr. James DeGraffenreidt

From: Betty Weller, Vice President, Maryland State Education Association, Co-Chair

Bridgette Helen Blue, Teacher, Prince George’s County Public Schools

Cheryl Bost, President, Teachers’ Association of Baltimore County Public Schools

Maleeta Kitchen, Teacher, Howard County Public Schools

Dawn Pipkin, Teacher, St. Mary’s County Public Schools

Lee Rutledge, Teacher, Baltimore City

Date: June 20, 2011

RE: Concerns with the Council’s Initial Recommendations

The Maryland Council for Educator Effectiveness met this morning to advance initial recommendations in the development of a statewide educator evaluation system. Today’s meeting concluded nearly a year’s worth of work, which can never fully be summarized in a report. However, the initial recommendations failed to include key points and provisions that must be in any evaluation system. Despite efforts to raise these issues, a vote that cut off debate silenced the voices of educators on the Council, and the Council failed to earn the support of a third of its members as a result.

That the Council’s initial recommendations failed to earn the support of any classroom educators—the same people who have the clearest understanding of the impact of these recommendations on children and public education—should raise concerns about the potential impact of implementation. We plan to submit a more detailed report on our concerns and hope to work to address these concerns with the Council when we reconvene.

We remain focused on the goals of the Council and this process as a whole: raising achievement and improving teaching and learning. We’ve been engaged throughout this process in a collaborative and open way, sharing the best perspectives of educators—from our own expertise and classroom experience to facilitating presentations by outside experts like Charlotte Danielson and Laura Goe.

However, we are concerned that the report veered from the Council’s charge, such as in its statement that the report was in alignment with the Education Reform Act of 2010. In fact, the Council had no such discussion or agreement. Furthermore, language in the report attempts to establish General Standards, when clearly the Council’s charge was to make recommendations on the establishment of a default model. This overstep limits the very flexibility for local systems to develop their own evaluation systems—flexibility that was emphasized in the Council’s discussions, but not in the draft recommendations. Moreover, we are very concerned that although we provided comprehensive feedback on the initial draft of the report, the majority of it was not reflected or included in the final draft of the report. We were not provided with any explanation for why our feedback was ignored.

Before the Maryland State Board or any other governing entity acts on recommendations from the Council, policymakers must be mindful of the concerns raised in Council deliberations that are not addressed in the initial recommendations.

Our expectations for such an evaluation system that will improve teaching and learning include:

* Providing specific findings that can generate a plan of improvement for the individual teacher or principal. If the evaluation process cannot identify what has to change about a teacher’s professional practice for improvement, then something is wrong with the system.

* Flexibility and appreciation for how all classrooms, schools, and districts are different.

* Ensuring components of student growth measures are appropriate for use (psychometricians achieve this through validity and reliability tests).

* Fair and timely assessment of the evaluation system itself, including an assessment of the validity and reliability of local and state measurements. Good evaluation systems create an ongoing process informed by multiple measures and constant feedback, and we must provide the same kind of thorough assessment of the systems themselves.

As we put this system into practice in the seven pilot counties, it is more important than ever that we value the voices of classroom practitioners. We expect that many lessons will be learned and experiences shared from schools across the state as we enter the pilot phase. We urge all education policymakers to listen to these voices and improve upon the Council’s initial recommendations by embracing the spirit of flexibility, collaboration, creativity, and inclusiveness that was present in the Council’s creation and deliberations but was less evident in its final report. Navigating the pathway to improved teaching and learning and increased achievement demands that we remain engaged in a process of constant feedback and reassessment to ensure that we foster a system which is best for our educators, schools, and students.

Follow The Answer Sheet every day by bookmarking http://www.washingtonpost.com/blogs/answer-sheet. And for admissions advice, college news and links to campus papers, please check out our Higher Education page. Bookmark it!


 
    © 2011 The Washington Post Company