
The late Gerald Bracey, once called “America’s most acerbic educational psychologist,” spent much of his career calling out bad education research and data, explaining that findings did not always mean what their authors said they did and that numbers were too often wrongly interpreted. He wrote a book about it, titled “Reading Educational Research: How to Avoid Getting Snookered,” and it was in that book’s foreword that my Washington Post colleague Jay Mathews gave him the “acerbic” title.

The book came out in 2006, but the issue remains as important as ever. Today, hardly a day goes by without yet another research study on some aspect of education being released, often with news releases topped with a headline declaring that something definitive has been found and the proof is finally here. Except too often it isn’t.

Some educational research is thoughtful and important, but a good deal of it suffers from myriad problems, including insufficient sample sizes, funders who want particular results, and conclusions that aren’t borne out by the data. The problem is hardly new; in 1999, D.W. Miller wrote an article for the Chronicle of Higher Education titled “The Black Hole of Education Research,” which said:

Research on the effectiveness of reforms is often weak, inconclusive, or missing altogether. And even in areas illuminated by good scholarship, it often has little influence on what happens in the classroom.

The National Education Policy Center at the University of Colorado at Boulder often asks researchers to take a deep look at some published studies; its newest such research critique looks at a 2016 study on whether publicly funded private school choice programs save money. You can read the critique here.

Below is a post that critiques a charter school study that recently became news.

The title of the study being critiqued is “In Pursuit of the Common Good: The Spillover Effects of Charter Schools on Public School Students in New York City,” and it finds potentially positive results from charter competition in New York.

The author of the post below is Carol Burris, a former award-winning New York high school principal who is executive director of the Network for Public Education, a nonprofit advocacy group. She has been chronicling problems with modern school reform and school choice for years on this blog. Burris was named the 2010 Educator of the Year by the School Administrators Association of New York State, and in 2013 the same organization named her the New York State High School Principal of the Year.

By Carol Burris

This was the headline of an article written by Beth Hawkins for The 74, a news service that has been funded in part by the charter school-friendly foundations of the family of Education Secretary Betsy DeVos as well as Bill Gates and Eli Broad: “When Charter Schools Open, Neighboring Schools Get Better: A New Study Finds 7 Reasons Why.”

In the article, Hawkins puts her own spin on the findings of a study by Sarah Cordes of Temple University entitled, “In Pursuit of the Common Good: The Spillover Effects of Charter Schools on Public School Students in New York City.” Hawkins calls the study “groundbreaking new research.”

Despite the attention the study received in The 74 and other charter-friendly media, the research is not exactly groundbreaking. In fact, a careful read reveals flaws in both its assumptions and conclusions. What follows is an overview of what the study tells — and doesn’t tell — us about the effects on public schools when a charter school is opened nearby.

The Study

Cordes attempted to measure the effects of competition from a charter school on the achievement, attendance and grade retention of students in nearby New York City public schools. In addition, she sought to identify the cause of any effects she might find.

Her research design compared outcomes of students who were first observed enrolled in a NYC public elementary school located within one mile of a charter school with outcomes of students who were first observed in a NYC public elementary school more than one mile from a charter school. She also examined school-level data — demographics, spending, and parent and teacher surveys — to determine the cause of any effects she might find. She subdivided the schools near charters into three groups by distance: co-located public schools (where a charter and a public school are housed in the same building), schools within .5 miles of a charter, and schools located between .51 and 1 mile from a charter.

Please note the term “first observed.” This is an important and, in my opinion, troublesome feature of the report. If, for example, a student was first observed (a term not defined by Cordes) attending a traditional public school (referred to by the author as a TPS) that ever had a charter school located within one mile, that student was assigned to the within-a-mile-of-a-charter-school group, even if the student moved shortly thereafter to a public school beyond a mile. The only students excluded were those who moved to charter schools.

Why does this matter? A 2014 study by the Independent Budget Office of New York City found that only 61 percent of all public elementary school students[1] remained in the same public school after three years. For economically disadvantaged students, the mobility rate was even higher than that average. That is a lot of movement, and it raises substantial sampling problems given the methodology this study used.

The methodology Cordes used is called intent-to-treat analysis (ITT). It was developed for medical studies that randomly assign patients to treatment or placebo groups. Its purpose is to include the outcomes of those patients who are non-compliant with the treatment or who withdraw due to side effects or other difficulties.

However, in New York City students are not randomly assigned to public schools, nor is there any evidence that families move to withdraw from the “treatment” of the local public school being located near a charter school.  How all of these factors affected the results is unknown.  The bottom line is that the use of ITT creates more design problems in this study than it solves.
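The dilution concern can be illustrated with a toy simulation. All numbers below are hypothetical and are not from the study: suppose being near a charter school truly added a small amount to a student’s score, and roughly the mobility rate cited above applied. Because ITT keeps students who move away in the “near a charter” group, the measured effect shrinks toward zero.

```python
import random

random.seed(0)

# Hypothetical illustration (not the study's data): assume proximity to
# a charter truly adds 0.05 sd to a student's score, and that a share
# of students move away after being "first observed."
TRUE_EFFECT = 0.05
MOVE_RATE = 0.39  # roughly matching the mobility figure cited above

def mean_score(n: int, near_group: bool) -> float:
    """Average score for n simulated students in one group."""
    scores = []
    for _ in range(n):
        moved = random.random() < MOVE_RATE
        # A "near" student who moves away loses the exposure,
        # but intent-to-treat still counts them in the near group.
        exposed = near_group and not moved
        scores.append(random.gauss(TRUE_EFFECT if exposed else 0.0, 1.0))
    return sum(scores) / len(scores)

n = 200_000
itt_estimate = mean_score(n, True) - mean_score(n, False)
print(f"true effect: {TRUE_EFFECT}, ITT estimate: {itt_estimate:.3f}")
```

Because about 39 percent of the “treated” group was never really exposed, the ITT estimate comes out noticeably smaller than the true effect — the attenuation that makes high student mobility a problem for this design.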

Findings Regarding Achievement

Upon completing her analysis, Cordes concludes that “the introduction of charter schools within one mile of a TPS increases the performance of TPS students on the order of 0.02 standard deviations (sds) in both math and English Language Arts (ELA).”

To put that effect size in perspective: lowering class size has an effect on achievement (.20) ten times greater than being enrolled in a school within one mile of a charter school. Reading programs that focus on processing strategies have an effect size of nearly .60. And direct math instruction (effect size .61) with strong teacher feedback (effect size .75) has strong benefits for math achievement[2]. With a .02 effect size, the effect of being enrolled in a school located near a charter school is akin to increasing your height by standing on a few sheets of paper.
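One way to make these effect sizes concrete: assuming normally distributed test scores, an effect size measured in standard deviations maps to the percentile an average (50th-percentile) student would move to. The sketch below is illustrative only; the benchmark values are the Hattie figures cited above.

```python
from statistics import NormalDist

def percentile_shift(effect_size_sd: float) -> float:
    """Percentile an average student moves to, assuming normal scores."""
    return NormalDist().cdf(effect_size_sd) * 100

# The study's reported charter-proximity effect vs. benchmark
# effect sizes cited above (Hattie, 2009).
for label, d in [("charter within 1 mile", 0.02),
                 ("class-size reduction", 0.20),
                 ("reading strategy programs", 0.60),
                 ("teacher feedback", 0.75)]:
    print(f"{label}: {d:.2f} sd -> {percentile_shift(d):.1f}th percentile")
```

A .02 effect moves the average student from the 50th percentile to roughly the 51st, while the benchmark interventions move that student well into the 70s.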

The results from the three distance-based subgroups showed the same lackluster pattern — Cordes found no statistically significant increase in learning for schools between half a mile and one mile from a charter, and only a .02 increase for schools within half a mile. Co-located schools showed an effect size in math of .083 and in ELA of .059. The ELA effect for co-located students was significant only at the .10 level — a level so generous that few researchers use it to claim significance.

Reporting the Cause

Although it appears that Cordes found very small achievement gains in a public school if a charter is located within a half mile, that correlation does not tell us why those gains occurred. To answer that question, Cordes looked at an array of factors — demographics, school spending, and parent and teacher survey data about school culture and climate.

There was only ONE standout factor that rose to the commonly accepted level of statistical significance — money.

Public schools co-located with charters had an 8.9 percent increase in instructional expenditures and a 35.3 percent increase in spending on other staff. Public schools located within half a mile experienced a 4.4 percent increase in instructional spending. Those findings were statistically significant at a rigorous level — .001.

What about the factors of school culture and climate? Did competition make the public school emulate the charter and thus improve?

Cordes’s analysis of the survey data on parental perceptions of academic expectations, communication, engagement of parents and students, cleanliness and safety showed a mixed bag.

And yet this was the conclusion of the researcher:

There is suggestive evidence that after charter school entry, parents report significantly higher student engagement and parents in co-located schools also report significantly lower levels of the school being unsafe (Table 7) [sic: Table 8]. While none of the other indicators are statistically significant, in general they are positive and monotonically increasing with charter school proximity. Although no specific components of these indices are statistically significant, the direction tends to indicate improved perceptions after charter entry (Online Appendix Table B.6).

First, it is deceiving to say that parents reported “significantly” higher engagement and safety when the results were NOT statistically significant. The descriptor “significant” has a very specific meaning in research. In addition, there is no such thing as “suggestive evidence” in research. Either your results reach statistical significance or they cannot be distinguished from chance.

Second, the survey results showed that for about half of the factors, parents reported increased negative perceptions when a charter was located nearby. Only one response reached even the lowest commonly accepted level of significance, .05 — parents in co-located schools reported that parent engagement actually went down. And yet this was the conclusion of the researcher in her study’s abstract:

Potential explanations for improved performance include increased PPE, academic expectations, student engagement, and a more respectful and safe school environment after charter entry.

That statement is unsubstantiated and speculative based on the study. The only potential explanation that can scientifically be made is the one statistically significant factor — more spending.

The bottom line is that Sarah Cordes found what every researcher before her found — “competition” from charters has little to no effect on student achievement in traditional public schools. The study also found that when it comes to learning, money matters, as evidenced by increased spending, especially in co-located schools.

Most reporters lack advanced training in research methods and statistics. They depend on abstracts and news releases, without the expertise to examine the studies themselves with a critical eye. But it does not take much expertise to see the problems with this particular study.

[1] The study includes only elementary students.

[2] Effect sizes from: Visible Learning: A Synthesis of Over 800 Meta-analyses Relating to Achievement, by John Hattie (2009).

You can read more here: