This was written by Lance Hill, executive director of the Southern Institute for Education and Research in New Orleans and a member of the New Orleans Education Equity Roundtable. In this post Hill writes about the final report from The National Study of CMO Effectiveness, a four-year effort aimed at assessing the impact of charter management organizations on student achievement and identifying practices and structures that are effective at raising achievement. The study was done by researchers from Mathematica Policy Research and the Center on Reinventing Public Education.

By Lance Hill

Miracles are by definition impossible to explain, so it is not surprising that a study of the secrets of the most miraculous charter management organizations (CMOs) results in a few ethereal observations. The report discovered that successful CMOs emphasize high expectations, strict behavioral management, and teacher coaching. Methodology aside, this is not exactly the discovery of the Holy Grail of education.

The mantra of high expectations has been around for a long time and is the central pedagogy of Teach for America. That it exists is not doubted; that it works better than other methods has never been proven. Given the high attrition rates of putatively successful charters, one might conclude that high expectations also result in high frustration. But the method of high expectations is not unique to charters: it was pioneered by private schools decades ago and often inculcated with a whack of the ruler.

The report then turns to the miraculous behavior management techniques of charters. Again, the rigid control of even the smallest behaviors is not a charter school innovation: San Quentin had the “silent rule” for prisoners several decades ago and it worked just fine.

Here the report lets the cat out of the bag: one of the keys to effective CMO behavior management is bribing students to behave. The report notes that “paycheck” merit/demerit systems are the “backbones of culture-building efforts.”

At KIPP schools, one of the CMOs honored as “successful,” students are, on average, paid $40-$50 a week to incentivize compliant behavior. Again, no groundbreaking insight here: for decades this reward/punishment method has been used by parents in the form of a far more modest “allowance” that depended on compliance with parents’ rules. Since the policy is not financially replicable if charters are brought to scale, why is it cited as a model to be emulated?

But the most wafer-thin finding of the report is that successful CMOs engage in more coaching of teachers. First, frequent coaching of teachers can be a sign that charter teachers are poorly prepared for the profession. I know of one KIPP school that was largely staffed by novice Teach for America teachers and so had one vice principal for every 10 teachers who needed constant on-the-job training. But assuming that extensive coaching is necessary for a skilled teaching staff, do these CMOs coach more frequently and more effectively than other schools?

Page seven of the report graphs the differences in teacher observation and coaching time at successful CMOs, average CMOs, and public schools. One might expect significant differences between the miracle charters and the much-maligned public schools. Instead, the CMOs are graphed at about seven observation-feedbacks a year while public schools are graphed at about five. Not much difference there.

There are some other methodology questions with the study. While the study details the observation and feedback policies of the CMOs, it does not report whether or not these policies are actually practiced. There is no detailed data on the public schools reviewed, and some of the “exemplary” CMO policies include the requirement that teachers submit a lesson plan weekly — the norm at many public schools. Principals’ “feedback” events are defined as a principal observing a few minutes and then telling a teacher “good job.”

Interestingly, the study reports that in some areas, as much as 40% of the not-so-successful CMOs don’t engage in these “best practices.” By singling out what the authors regard as the most effective CMOs, we learn that nearly half the CMOs are not on the success trajectory.

If these are model “strategies for student behavior and teacher coaching,” one has to ask: where’s the evidence? What passes as “evidence-based science” in the report is this: the relatively higher-achieving CMOs are selected, and then the study makes the enormous leap of faith that these CMOs’ practices must be what makes them effective. In logic, this is identified as the classic causality fallacy, post hoc ergo propter hoc: “after this, therefore because of this.” There is no proof that a specific intervention causes a specific outcome.

But then again, miracles are defined as “an event that is contrary to the established laws of nature and attributed to a supernatural cause.” The word derives from the Latin word mīrārī which means “to wonder at.” By definition, there’s no science behind miracles and all we can do is behold them in wonder but never explain them.
