In April 1998 the newspaper Education Week published an article by Herbert J. Walberg, a distinguished education researcher at the University of Illinois at Chicago. He trashed the reading program Success for All, used in 1,500 American elementary schools, as a failure posing as a success.

Nine months later the similarly distinguished American Institutes for Research (AIR) in Washington, backed by the nation's five major teacher and administrator organizations, said Success for All was one of the two most effective ways of teaching reading, particularly to disadvantaged children.

How can that be?

I know, I know. Stories about education research are rarely entertaining. When I suggest one to my editors, their noses twitch as if they have detected a slightly unpleasant odor. Most readers are drowsy by the second paragraph.

Still, parents like me want to deepen our understanding of what will help our schools. We know that research can be controversial. The press-release and phone-tree wars over phonics and conceptual math have led neighbors to stop speaking to one another and school board members to turn off their phones. But those debates are mostly among amateurs -- people throwing quotations at one another from books or articles they have read. The extraordinary disagreement about Success for All, which is run by a rapidly growing nonprofit foundation in Towson, Md., involves some of the most respected scholars in the country.

The program has some controversial features, such as having librarians and PE teachers conduct some classes and requiring that each instructor follow a script. But the sharp disagreement over its effectiveness confirms that, when it comes to our schools, even knowledgeable experts are incapable of giving consistent advice on what works and what doesn't.

The longer the argument goes on, the uglier it becomes. The Phi Delta Kappan, a scholarly monthly for educators, reprinted Walberg's article, co-authored with doctoral student Rebecca C. Greenberg, in October, along with a critique by California researcher Bruce R. Joyce calling part of it "slander, pure and simple." In the November issue of Educational Researcher, a Washington-based journal, University of Arizona researcher Stanley Pogrow accused Success for All of covering up bad student performance and suggested that the leader of the favorable AIR study, Rebecca S. Herman, was biased because she had previously worked at Johns Hopkins University, where Success for All was born.

Walberg and Greenberg say Success for All researchers emphasize their positive rather than their negative results and have very few independent analysts looking over their shoulders. Meanwhile, Herman, who never worked for Success for All, applauds the program for at least trying to evaluate itself -- something few educational initiatives do. She notes how little money there is for independent evaluators and how much their questions cut into the days of already overburdened teachers.

Nearly everyone in this debate, I think, shares the inferiority complex that comes with being a social scientist. Some physicists and chemists say calling what Walberg and Herman do "research" is like calling what I do on Saturday mornings "playing tennis." The educational researchers and I are earnest and energetic, but we strain against insurmountable limits.

In a laboratory, a heated water molecule needs no inner motivation to start leaping about. In a classroom, a child's heart may resist all kinds of pedagogical stimulation. Success for All may teach Fred to read in record time, but do nothing for his twin sister, Frieda, making for very messy monographs in the latest issue of Educational Researcher.

Left to stew about this are parents, many of us hard pressed to remember the difference between a standard deviation and a sinus duct. One thing we can do is listen to educators we know and trust. They, at least, can tell us what seems to work with our children. Reaching out to a few teachers and parents we don't know can also help. Herman is pleased that Success for All and other programs insist that interested teachers first visit schools using the programs and ask questions.

The researchers will continue to argue over how to quantify our children's school lives. They have suggestions for randomly assigning teachers to different programs, or comparing new initiatives to historical patterns of improvement, or looking at results over many years and many schools.

But unless we start turning out human beings like clock radios, each set to report the news at the top of the hour, we are never going to have complete confidence in any new routine that promises better learners in less time. Is that so bad? The society we live in -- by most measures the most successful in history -- was built on undercooked ideas launched with little or no research. Educational arguments are useful and invigorating, but no one who wants to fix schools should wait until they are over before getting to work.

Jay Mathews's e-mail address is