I am moving my “Trends” column — a weekly online analysis I started in 2000 — from Friday to Wednesday to better catch the rhythms of blog readership. If you find obscure studies or developments that deserve wider attention, send them to me at mathewsj@washpost.com.

Douglas N. Harris, an associate professor of educational policy studies at the University of Wisconsin-Madison, is among the legion of economists who have provided some of the most interesting takes on the national school debate. I like his stuff because it often challenges prevailing wisdom and is usually free of jargon.

He has a new book out, “Value-Added Measures in Education: What Every Educator Needs to Know,” which critiques conventional views of rating teachers by student test score gains. In the latest Harvard Education Letter, he breaks his argument down to its simplest parts — seven big misconceptions in what people like me say about value-added.

Here’s where we go wrong, with some of Harris’s views and my own complaints about the direction this debate is going.

Misconception 1: We cannot evaluate educators based on value-added because teaching is complicated.

Harris says the complex nature of teaching and learning is obvious, but value-added can bring some clarity. Student outcomes are just one factor, but an important one. My problem is that the focus on each teacher’s effect on student academic growth detracts from the team spirit that animates the best schools I know.

Misconception 2: Value-added scores are inaccurate because they are based on poorly designed tests.

Many tests are flawed, Harris says, but you can’t blame that on the value-added approach. If we had better tests, such as an assessment that captured the content of International Baccalaureate exams, we could “still use value-added methods with these richer assessments.” That sounds nice, but I think even my grandsons will be beyond IB age before we figure out how to do that reliably.

Misconception 3: The value-added approach is not fair to students.

The worry here is that if we abandon our current system of counting how many students reach proficiency, and instead assess how much each one improves, students who have not yet reached proficiency will be ignored. Harris says our current system is no better because it usually focuses only on students close to reaching proficiency. He is right.
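The contrast between the two yardsticks is easy to make concrete. Here is a minimal sketch in Python, with invented student scores and an invented proficiency cutoff of 70 (neither comes from Harris or from any real state standard): a hypothetical student far below the bar is invisible to a proficiency rate but leads the class on growth.

```python
# Hypothetical fall and spring scores for five students.
# All names, numbers, and the cutoff of 70 are invented for illustration.
students = {
    "A": (45, 60),
    "B": (68, 72),
    "C": (71, 73),
    "D": (90, 95),
    "E": (30, 50),
}

CUTOFF = 70

# Proficiency view: what share of students cleared the bar in spring?
proficient = sum(1 for _, spring in students.values() if spring >= CUTOFF)
proficiency_rate = proficient / len(students)

# Growth view: how much did each student improve, regardless of level?
growth = {name: spring - fall for name, (fall, spring) in students.items()}
average_growth = sum(growth.values()) / len(growth)

print(f"Proficiency rate: {proficiency_rate:.0%}")     # 60% (3 of 5)
print(f"Average growth: {average_growth:.1f} points")  # 9.2 points
print(f"Biggest gain: student {max(growth, key=growth.get)}")  # student E, +20
```

On the proficiency view, student E counts only as a failure; on the growth view, E made the largest gain in the class. The same asymmetry cuts the other way for student C, who clears the bar while barely improving.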

Misconception 4: Value-added measures are not useful because they are summative [he does use some jargon — this means focused on how well teachers have done] rather than formative [focused on how to make them better].

Harris concedes the point, but says value-added can be used with other measures to guide improvement. We need both summative and formative measures, he says. This, I think, overlooks the greater power of measuring yourself daily against your fellow teachers by trading thoughts about students.

Misconception 5: Value-added represents another step in the process of “industrializing” education, making it more traditional and less progressive.

The factory model of education, by this way of thinking, focuses too much on making every widget, and every student, the same way. Harris argues that “if policy makers concentrate on results, they can reduce the rules” that constrain imaginative educators and make schools more progressive. This topic makes me cross. It betrays an academic desire to categorize what schools are doing rather than see if they are helping kids.

Misconception 6: Because we know so little about the effects of value-added, we cannot risk our kids’ futures by experimenting with it.

“In a crisis,” Harris says, “the odds of making things better are high, lessening risk.” I don’t think many people suffer from this misconception. They realize that we once knew little about the first polio vaccines, which is why we needed to do experiments.

Misconception 7: Value-added is a magic bullet that will transform education all by itself.

Harris dismisses this quickly, as he should. I don’t know anyone who thinks this way.

States are moving without much delay toward a value-added measure of all students. I think the data this has produced in states such as Texas are useful when analyzing change across many schools, such as the growing use of Advanced Placement.

Harris presents all sides of the issue, but personally concludes that value-added can improve teaching and learning. I believe that, too. But using it to rate individual teachers, except in the privacy of a school principal’s office, is not likely to make schools better. Parents are going to misinterpret it, just as we do.

I think we should apply such measures to entire school buildings animated by teams of teachers, administrators, aides and janitors, and not to one teacher at a time. Am I missing something?