This was written by Peter Smagorinsky, Distinguished Research Professor of English Education at The University of Georgia and a Fellow of the American Educational Research Association. Here he critiques a report that was published by the American Enterprise Institute in August and titled, “Grade Inflation for Education Majors and Low Standards for Teachers: When Everyone Makes the Grade.” The title tells you exactly what’s in that report.

Following this post is a response from Assistant Economics Professor Cory Koedel of the University of Missouri, author of the report that Smagorinsky critiques.

By Peter Smagorinsky

I began teaching in 1976, first in high schools and ultimately in teacher education programs. Much has changed in my 35 years as a teacher, but one thing remains constant: I have always held myself accountable for my students’ learning. When students have done poorly in my classes, I have tried to understand how I could have taught the class better in order to produce richer learning. When they have done well, I have assumed that my annual adjustments have worked enough so that students grasped the course content and learned how to engage with it in their writing. Although some students have done poorly no matter what I’ve done, I’ve always tried to make myself responsible to a great degree for what students learn and how their grades reflect that learning.

However, in an August 2011 report written by Assistant Economics Professor Cory Koedel of the University of Missouri, a very different set of assumptions is at work.

To Koedel, who is the latest in the current wave of educational experts who have never taught in a K-12 school, when education students get good grades, it’s because the teacher has low standards, not because the teacher has worked hard to ensure rich learning and high-level academic performances. Koedel’s report was published by the American Enterprise Institute, a think tank in Washington D.C. dedicated to “expanding liberty, increasing individual opportunity, and strengthening free enterprise.” The institute says it values “independent thinking, open debate, reasoned argument, facts, and the highest standards of research and exposition,” yet all of its publications seem to reach the same conclusion: that free market solutions solve all problems.

I’m no economist, so I can’t say how the sort of open educational market that Koedel embraces would actually work. Yet though Koedel is no expert on schools, he and other entrepreneurs think that they know my business better than I do.

People who don’t understand schools tend to find them easy environments to manage. Everything gets reduced to simple statistics that tell the whole story (for the most part, multiple-choice standardized test scores). It doesn’t matter what the circumstances are: Facts are facts and figures are figures, and if you find the particular set of facts and figures that they favor to be problematic, then you are part of the problem. 

Koedel’s beliefs rest on his finding that students in education classes, since 1960, have been awarded higher grades than students in other university disciplines. He bases this conclusion on two studies, one from 1960 and one that he has conducted more recently. If two studies find the same thing a half-century apart, he reasons, then everything happening in between — not to mention before and after — must be that way as well.

Koedel illustrates his belief about low educational standards with an anecdote about a school administrator who believes her teachers are all doing well until pressed to say whether she’d want her own children taught by them. The administrator’s admission that she would not enables Koedel to condemn university education programs across the nation over a 50-year span, and by implication, forever and beyond.

Using anecdotal evidence, I could prove just about anything. My son is an economics major in college and complains that his economics professors are terrible teachers because they don’t explain the concepts clearly and because they evaluate him by means of tests instead of by more realistic and complex problem-solving that requires the application of economics concepts. Using Koedel’s reasoning process, I could conclude that economics professors are universally, and always have been, lousy teachers because they give low grades based on poor instruction and misplaced assessments. My son’s belief in their ineptitude proves it, because single anecdotes provide conclusive evidence.

Koedel’s reasoning throughout his report is specious. He says at one point, “I am not aware of any rigorous evidence that explicitly links higher grading standards in education departments to improved teaching performance in K-12 schools. But this does not mean a link does not exist” — he just hasn’t found evidence to support it yet. He then asserts the link as a fact. A stockbroker friend of mine once told me that people throwing darts at a random list of corporations could predict the market as well as most trained economists do. I’ve never found evidence to support this assertion, but it looks like a fact to me if recent market forecasts are any indication, and that’s good enough for Koedel, so it’s good enough for me.

Here’s another example of his reasoning: “Undergraduate education majors become teachers, teachers become principals, and principals become district-level administrators,” proving to Koedel that easy grading in university teacher education programs (or at least the two he features in his article) leads to lax standards all the way up the ladder. I suppose that he assumes that nothing intervenes in the 20-30 years between being a college kid — perhaps with conscientious education faculty whose good teaching produces rich learning and thus high grades — and running a school district. I imagine it also enables me to blame the recent Wall Street crisis on incompetent university economics professors and the dysfunctional ethical and intellectual culture they foster.

To Koedel, “The fundamental problem is simple: there is no pressure from competitive markets in education.” First, he sees the problem and solution as “simple,” and anyone who thinks that operating schools effectively is simple is an ignoramus. Second, he’s wrong. There are plenty of alternatives, from homeschooling to private schools to transfer options to alternative schools to changing teachers to dropping out and working. But that fact is inconvenient to his simplistic belief in free markets. He also asserts that “the solution, as with any market failure, is external intervention.” I wonder how the American Enterprise Institute feels about solving problems in the corporate world by means of external regulation.

Schools of education can surely be improved; too many teachers complain about the ones they attended to think otherwise. I attended one awful teacher education program, and one great one, and I know the difference. The horrible one relied on droning lectures and multiple-choice tests, and the outstanding one required extensive written work in relation to challenging texts and problems through which we synthesized theory and research into the sorts of teaching ideas that produce rich and complex learning that cannot be simplistically assessed.

From what I can see, schools of economics could use some work as well, given that they appear to condone instruction and assessment remarkably like that of the bad teacher education program I attended. At least education faculty don’t have the temerity to think that they can fix either university economics departments or Wall Street with simplistic, untested, uninformed solutions in relation to problems about which they have little knowledge or experience. Physician: Heal thyself, or at least stick to healing illnesses about which you have at minimum a vague understanding.


Response from Assistant Economics Professor Cory Koedel from the University of Missouri:

Professor Peter Smagorinsky from the University of Georgia recently wrote a commentary about my research that was posted in this blogspace. My research shows that the grades awarded to students in undergraduate education classes are consistently higher than the grades awarded to students in other classes. Professor Smagorinsky does not view this as an important concern; I do. Our respective positions on this issue can be found easily enough by the interested reader, so I won’t rehash them here. However, there are several statements in Professor Smagorinsky’s commentary that, in my eyes, are inaccurate and/or misleading. I felt compelled to write this note in response.

First, here are my major concerns with Professor Smagorinsky’s commentary:

1) He uses an anecdote from the policy report out of context. He claims that I use the anecdote to “condemn university education programs across the nation over a 50-year span, and by implication, forever and beyond.” The hyperbole notwithstanding, the anecdote in question was actually a lead-in to a discussion about consistently low evaluation standards for teachers in K-12 schools. In the report I suggest that the low grading standards in education schools contribute to a culture of low standards in education, which is evidenced in part by the low evaluation standards for schoolteachers.

As far as the claim that evaluation standards are low for teachers in K-12 schools, I do not rely on anecdotal evidence to support it, as Smagorinsky suggests. In the policy report, and the corresponding academic article, I cite several studies showing that evaluation standards for teachers in K-12 schools are low. (Judging from several of his comments, Smagorinsky does not appear to have read the academic article, despite its being prominently cited in the policy report.)

The most accessible study was published by The New Teacher Project and is titled “The Widget Effect: Our National Failure to Acknowledge and Act on Differences in Teaching Effectiveness.” Anyone who is interested in this topic can browse the TNTP report for 15 minutes and have a much better understanding of the issue. Citations to other studies can be found in the policy report and in my academic article.

And there are other examples as well: A recent article by Brian Jacob (NBER Working Paper No. 15655) reports the following statistics about teacher evaluations in Chicago Public Schools in 2007: of 11,621 teachers, 15 were evaluated as unsatisfactory, 641 as satisfactory, and the remaining 10,965 were rated as either “excellent” or “superior.” What do the words “excellent” and “superior” mean to the evaluators?
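To put those Chicago counts in proportion, here is a quick illustrative computation (mine, not part of Jacob’s paper) of the share of teachers falling into each rating category:

```python
# Shares of the 2007 Chicago Public Schools teacher ratings cited above,
# as reported by Jacob (NBER Working Paper No. 15655).
total = 11_621
ratings = {
    "unsatisfactory": 15,
    "satisfactory": 641,
    "excellent or superior": 10_965,
}

# The three categories account for every evaluated teacher.
assert sum(ratings.values()) == total

for label, count in ratings.items():
    print(f"{label}: {count / total:.1%}")
# → unsatisfactory: 0.1%, satisfactory: 5.5%, excellent or superior: 94.4%
```

In other words, more than 94 percent of teachers received one of the top two ratings, while roughly one in a thousand was rated unsatisfactory.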

Related to this issue, Smagorinsky takes exception to this reasoning in my report:

“Undergraduate education majors become teachers, teachers become principals, and principals become district-level administrators. Ultimately, a sizable fraction of the workforce in the education sector is trained in education departments where evaluation standards are astonishingly low. Should we be surprised that low standards persist in K–12 schools?”

The intuition is that if educators are exposed to low standards in college, their experiences there will persist into the workforce. This idea has academic grounding. Here is a direct quote from a well-cited book in the performance appraisal literature by two psychologists (Performance Appraisal: An Organizational Perspective by Kevin R. Murphy and Jeanette N. Cleveland, p. 197):

“In an organization where the norm is to give high ratings, the rater who defies the norm might experience disapproval from his or her peers…pressures for conformity may be a significant factor in rating inflation.”

This citation is available in the academic article, along with additional information.

2) At one point in the report I write: “I am not aware of any rigorous evidence that explicitly links higher grading standards in education departments to improved teaching performance in K–12 schools. But this does not mean a link does not exist.” There are two issues here.

First, Smagorinsky again takes this statement out of context in his commentary. In my report, this statement and the text that follows (omitted by Professor Smagorinsky) try to briefly explain why there is no rigorous evidence on this topic. The problem is that there is no variation for empirical researchers to use to evaluate grading standards in education schools – the inflated grades seem nearly universal.

Smagorinsky goes on to say that I “assert this statement as fact.” This is simply not true. As I wrote, there is no evidence. I would love to know if and how the low evaluation standards affect teaching performance. In no place throughout any of my work do I indicate that a clear link exists between teaching effectiveness and these low standards. I do believe that all of the indirect evidence suggests that, if anything, the low standards will have negative consequences for teacher quality in K-12 schools, but I never make the erroneous assertion suggested by Professor Smagorinsky. We don’t know how the low standards for educators affect teacher quality. I think we should be trying – very hard – to figure this out.

3) Smagorinsky also misinterprets this statement in my report: “The fundamental problem is simple: there is no pressure from competitive markets in education.” He writes that I “see the problem and solution as ‘simple.’ ” In fact I only see the problem as simple, which is what I write. Simple problems can have complex and elusive solutions. For example, the problem that prevents us from traveling to planets in other solar systems is simple: we cannot travel fast enough. The solution is not simple at all.

In the policy report I suggest two solutions to the grade inflation problem in education schools. Note that neither of the solutions involves privatizing education, which is the solution that Smagorinsky implies that I have in mind. Instead, they both take the non-competitive education system as a given. On a personal note, I do feel that adding more incentives in education would aid with this particular problem, and with other problems; but I agree with Smagorinsky that this is a complex issue (correspondingly, there is an entire research literature devoted to evaluating competitive effects in education that is far beyond the scope of this work).

Finally, several smaller issues merit brief mention. First, Smagorinsky writes at one point that my article features just two education departments. I refer Smagorinsky, and the interested reader, to my academic article cited above. It shows that this is not an issue in just two schools; in fact, I have yet to find an education department where the grading discrepancy doesn’t exist. If Smagorinsky knows of one, I would love to hear about it. Also, there are other studies suggesting that there are negative consequences associated with these favorable grades.

One of my favorites is a study by Peter Arcidiacono. The article is a bit of a bear, but if you flip to Table 3 you’ll see a supplementary finding of the Arcidiacono study (he isn’t particularly interested in teachers as I am): non-education students with the lowest SAT scores select into education programs over time in college, while education students with the highest SAT scores select out of education over time. I have written separate commentary about this issue as well.

The bottom line is this: Students in education classes receive much, much higher grades than students in every other academic discipline. These are our future teachers. Some of us think the favorable grades may have negative consequences for kids in K-12 schools down the line (myself); others do not (Smagorinsky). I believe that there is a professional obligation among education researchers to fully vet this issue.


And here is Smagorinsky’s response to the response:

By Peter Smagorinsky

I appreciate this opportunity to respond to Dr. Koedel’s response to the essay I wrote about the article that the American Enterprise Institute published based on his study of education school grading trends. In that article he takes two studies conducted a half-century apart, both finding what he considers to be grade inflation, and argues that this inflation in turn produces grade inflation in schools and lax evaluations of teachers by school administrators who got their initial credentials in education schools. To my mind this reasoning is patently ridiculous, and I will simply refer readers to his AEI report, my original critique, and his subsequent response.

Dr. Koedel sidesteps my analogy to schools of economics and the current sorry state of Wall Street and the national economy. Using his logic, one could just as easily argue that terrible teaching and a lack of ethics among economics professors produce the investors and financial managers who ultimately populate the profession and are thus responsible for the Depression our economy is approaching. That is a much greater threat to democracy than teachers awarding good grades to students, possibly because those teachers are conscientious and teach so that the greatest number of students learn well and thus earn high grades.

I recently spoke with a colleague in UGA’s engineering school who scoffed at the idea that giving bad grades is a sign of high standards, in spite of pressure he receives from departmental colleagues to do just that. In my discipline, to the contrary, giving lots of bad grades is a sign of bad teaching.

I would like to return to Dr. Koedel’s assertion that students’ last three semesters of college (where they focus on their education major, spending much of this time in the field, i.e., in schools as much as in university classrooms) indelibly shape their lifelong approaches to teaching and learning. Teacher candidates who go through the traditional education school route first spend 12-15 years learning about teaching through their exposure to K-12 teachers. They then spend their first five semesters of college taking general education coursework and, for those getting certified to teach middle and high school, take 8-15 courses in their content area of specialization.

By and large, these courses are taught by means of lectures and textbooks, leaving students in the mind-dulling role of listening and repeating. Studies of students’ attention during lectures show that, even in elite settings, most of them are daydreaming. Csikszentmihalyi and colleagues found that during one teacher’s lecture about Genghis Khan’s invasion of China in an honors high school class, a total of two of the nearly 30 students were thinking of anything remotely Chinese: one was thinking of Chinese food, and the other was wondering why Chinese men wear their hair in ponytails. Bad teaching thus produces inattentive and indifferent learners, who are then given low grades because the teaching has not produced learning.

But I digress. My point, which my own studies of the development of novice teachers support, is that if anything, students’ experiences in teacher education programs are far too fleeting and easily overwhelmed by other, more pervasive experiences and the general authoritarian school culture to have a great impact on early career teachers’ pedagogical practices; this fact is probably the greatest frustration faced by education faculty.

Even while taking education coursework, teacher candidates are immersed in school cultures via field experiences that tend to contradict most of the values and practices emphasized by education faculty. To claim that three semesters of education courses, themselves undermined by the field experiences valorized by accreditation agencies, create a culture that persists throughout the course of a school-based career overlooks pretty much all psychological research on concept formation and learning processes, which is a focus of my own research.

Dr. Koedel says in his response to my critique that his research finds that anyone looking at education needs to “take the non-competitive education system as a given,” a claim I believe I have quoted well within context. Here he and I clearly part ways, because I don’t see the awarding of grades as a competition where there are winners and losers. Like many educators, I share the belief that high levels of performance are available to all students, even if not all students rise to the occasion. I also believe that teachers who take the competitive approach of assigning lower grades to strong performances in order to impose a competitive agenda on students are patently unfair.

Perhaps, as Dr. Koedel believes, SAT and other test scores are ideal indicators of knowledge and teaching ability and further indicate that education schools are for over-rewarded dummies, although that’s never been my experience.

In the program in which I teach, located at the state of Georgia’s most demanding comprehensive university, we have a 60% acceptance rate for applicants, whose undergrad GPAs prior to program admission average about 3.7 on a 4.0 scale. Most of them were in honors, AP, International Baccalaureate, gifted and talented, or other elite tracks throughout high school; did exceptionally well in college prior to admission to our program; continue to do well once enrolled; and then return to schools to teach.

People who really want to go into teaching tend to do well in courses that prepare them for such a career; it’s too discouraging a profession for teacher candidates not to be pretty passionate about the work and their university preparation to do it. Combined with teacher education instruction designed by conscientious faculty who do not regard their work as separating the wheat from the chaff via an imposed grade distribution, this intense disposition to teach well may produce high levels of achievement and thus good grades.

With these remarks, I am not making a blanket endorsement of all schools of education, some of which by reputation are quite wretched and indefensible. I have been fortunate to have gotten credentialed initially at a university that pushed me pretty hard (the University of Chicago), and to have returned there for my doctoral studies. I’ve taught at two flagship state universities (Oklahoma and Georgia) that tend to attract the best and brightest from their state’s pool of students. As I noted in my original response, I also attended one joke of a teacher education program for a semester, and so am aware of why they might be considered fluffy and of questionable value. Like any other discipline, teacher education is characterized by a range of quality. Lumping them all together based on thin evidence seems to me to be intellectually irresponsible.

I will close by re-asserting a point made in both of the essays I’ve contributed to this discussion: Education schools cannot be validly asserted as the source of grade inflation in schools any more than economics departments can be validly asserted as the cause of Wall Street’s meltdown. The logic is bad in both scenarios; the situations are both far too complex for such facile analyses. Neither the problem — as asserted by Dr. Koedel — nor the solution is simple. It’s my sincere hope that those who contribute to policy discussions and decisions are aware of the extraordinarily complex issues at stake and the perils of treating any aspect of them as simple.


Follow The Answer Sheet every day by bookmarking it. And for admissions advice, college news and links to campus papers, please check out our Higher Education page.