Earlier this month I published part of a policy memo from the National Education Policy Center at the University of Colorado Boulder under this headline: “No Child Left Behind’s test-based policies failed. Will Congress keep them anyway?”
The memo — written by Kevin Welner, director of the NEPC, an attorney and a CU Boulder professor of education policy, and William J. Mathis, NEPC managing director and a former Vermont superintendent — explores how the debates about the reauthorization of the Elementary and Secondary Education Act (whose current version is NCLB) ignore the harm caused by NCLB’s test-based reforms. The policy memo starts this way:
Today’s 21-year-olds were in third grade in 2002, when the No Child Left Behind Act became law. For them and their younger siblings and neighbors, test-driven accountability policies are all they’ve known. The federal government entrusted their educations to an unproven but ambitious belief that if we test children and hold educators responsible for improving test scores, we would have almost everyone scoring as “proficient” by 2014. Thus, we would achieve “equality.” This approach has not worked.
You can read my original post here, and the fully annotated memo here, at the National Education Policy Center website.
Taking issue with the policy memo is Mark Dynarski, a George W. Bush Institute education fellow and president of Pemberton Research in New Jersey. The Bush Institute was founded in 2009 by the former president and his wife, Laura Bush, as a public policy center in Dallas. No Child Left Behind was the chief education initiative of Bush’s presidency (so it is no surprise that an education fellow at the institute would take issue with the memo). To further the discussion around the legacy of NCLB, I am publishing his response — which argues that the evidence suggests test-based policies have not failed — along with reaction by the authors of the policy memo, who say otherwise.
Here it is, and following that is a Welner-Mathis response to Dynarski.
From Mark Dynarski:
Kevin Welner and William J. Mathis wrote recently on this blog that No Child Left Behind’s “test-based policies” have failed. This view is more rhetorical than real. Let me offer several reasons why.
*Welner and Mathis say that the “approach hasn’t worked” and has shown only “very small” gains.
Long-term trend scores on the National Assessment of Educational Progress show something other than “very small gains.” Instead, they show significant gains.
For example, NAEP scores show a gain of the equivalent of two grades in reading for Hispanic and African-American nine-year-olds from 1999 to 2008. Similarly, NAEP math scores of nine-year-old Hispanic students improved by the equivalent of two grades from 1999 to 2008. They increased for African-American nine-year-olds during that period by the equivalent of about one-and-a-half grades.
These are substantial improvements in core subjects. The gains affect tens of millions of students and translate into trillions of dollars of economic growth, as Eric Hanushek of Stanford University has shown. The authors say they are for evidence-based approaches and substantive strategies to improve education. Yet, that’s what accountability is. The gains occurred when states started using annual tests and assigning consequences to schools for results. Since that time, studies using rigorous scientific designs have shown that consequences mattered.
*They assert that annual tests have caused over-testing.
Eliminating annual tests will “solve” the problem of over-testing the way raising speed limits “solves” the problem of speeding and careless driving. You may not be speeding, but you could still be driving too fast. In short, you still have a problem, just like getting rid of tests will not solve education’s problems.
The solution is to not over-test. That some districts are over-zealous is not an argument to reverse national policy.
*They claim accountability has had negative consequences on teaching.
Actually, MetLife’s annual survey of teachers shows teacher morale rising during the years of No Child Left Behind. Morale began declining when tests began being used to evaluate teachers. Using test scores to evaluate individual teachers was not part of No Child Left Behind.
Their assertions that the teaching of values, skills and problem solving is being “marginalized,” that testing is “fetishized,” and that there’s a “singular” focus on achievement outcomes “to the exclusion of focusing on children’s opportunities to learn” are hyperbole. It is bold to claim teachers have drained their classrooms of creativity and enthusiasm for learning, and no longer help their students develop their individual potential and grow into responsible adults. Bold claims like this should be backed by evidence.
*They claim that accountability has harmed the nation’s “democratic and economic goals and those of the citizenry.”
They offer no evidence of this claim and it’s backward. An educated and skilled citizenry is the key to democracy and to economic growth. Trying to improve learning helps meet democratic and economic goals.
*They assert that test scores are limited as a measure of a student’s learning.
This point often comes up in these debates. They do not put forward an alternate measure, one that is informative to parents and comparable between schools and districts. Are they arguing that what it means to learn is so vague and multi-faceted that it can’t be measured? If so, they should follow their logic and call for an end to teachers giving students grades and an end to states having standards of what students should know.
*They assert that the law is all about “punitive interventions.”
It is curious to apply the label of “punitive” to using test results to identify which schools need improvement. Aren’t parents and students the ones being punished by a lack of learning? You are a parent, your child’s school is not improving, your child is not learning, and tax dollars from you and others are draining away. Faced with this situation, which parent would not want to see something done to improve the school?
*They claim education needs more resources.
Education spending rose steadily until the economic crisis. It’s now rising again. Some states and districts now spend more than $20,000 a year per student. These are not necessarily affluent districts. High-poverty districts like Newark, New Jersey, spend that much, too. District of Columbia schools reported revenues of $27,000 per student. There are always plenty of ideas and enthusiasm for spending more money, but let’s at least note that a lot already is spent.
They mention early childhood education, community schools, and after-school and extended learning time programs. States can fund these initiatives and some already do. How does accountability get in the way of efforts by states to help their students, such as by expanding early childhood education? It does the opposite. Accountability creates a demand for effective ways to support learning. And states can design initiatives that suit them.
Eviscerate accountability and taxpayer generosity may dry up. They are being asked to foot a larger bill and to take for granted that it will all work out. They will be giving more money in return for less information about what becomes of it.
Welner and Mathis have it half right. We do indeed make our greatest progress when we invest in our children and our society. In the last 15 years, that investment included the important idea of accountability for results. Investment with accountability has yielded unprecedented gains in academic achievement, and we are now seeing the highest graduation rates in the nation’s history.
Efforts to scrutinize evidence of what worked and build on it may lack excitement, but it’s essential for progress. Let’s use what we’ve learned in the last 15 years to correct flaws in the law and move it to the next level.
Here’s a response to Dynarski from Welner and Mathis:
Two weeks ago, we published an NEPC Policy Memo explaining and discussing the broad research consensus that test-based reforms (specifically NCLB) have been “a demonstrably ineffective strategy.” In the ensuing ten days, over two thousand education researchers endorsed our findings and called for a shift in U.S. education policy.[1] Pushing back against this consensus, the Bush Institute’s Mark Dynarski wrote a rebuttal—a defense of test-based reforms and, specifically, of NCLB.
We certainly don’t begrudge the Bush Institute for defending President Bush’s signature education law at a time when it’s under such widespread criticism. But we could find nothing in the rebuttal to prompt a reconsideration of our conclusions.
The Dynarski rebuttal raises several issues, but the main theme is his belief (he offers almost no citations to research) that test-based reforms have in fact been an effective strategy. Below, we consider his arguments.
Our NEPC Policy Memo briefly discussed research from Jaekyung Lee, the National Research Council and others concluding that NCLB and similar reforms had done little or nothing to improve the results of the National Assessment of Educational Progress (NAEP). Dynarski, too, relies on NAEP results for his contentions, so we’ll discuss NAEP in somewhat more detail below. But we also want to stress that arguments over small differences in interpretation of those results are merely distractions from the larger conclusions about the ineffectiveness of test-based accountability reforms.
Has test-based reform worked? NAEP Trends – Dr. Dynarski argues that the NAEP “Long-term Trends” testing results show “substantial improvement in core subjects.” This particular NAEP exam focuses largely on basic skills in math and reading, and it dates back to 1971. Dynarski objects to our characterization, as “very small,” of any uptick in trends for these and other NAEP results.
Fortunately, the NAEP Trends Report is publicly available. On Page 1 of the Executive Summary, anyone can view the trend lines, which are flat to mildly increasing for the entire time period from 1971 to 2012. Dynarski points to the time period from 1999 to 2008 and specifically to the 9-year-old test takers. But as Lee and Reeves (2012) demonstrate, using data from the main NAEP (allowing state-level comparisons), the previously existing positive and improving trend line did not accelerate when the effects of NCLB would be expected to have come into play. Moreover, the 9-year-olds that Dynarski focuses on are the best-case—the trend lines for 13-year-olds or 17-year-olds are distinctly less impressive.
Any improvements in NAEP trends, therefore, are small and not broadly seen across age levels. But we also are surprised by how confident Dr. Dynarski appears to be in his decision to attribute any gains to NCLB. Consider the causal assumptions necessary to reach that conclusion:
- While Dynarski points to the time period from 1999 to 2008, the real gains for these 9-year-olds were between the tests administered in 1999 and 2004 (see the Trends Report cited earlier).
- NCLB became law in 2002, and its full effects weren’t felt immediately, so most education taking place between 1999 and 2004 was unaffected or only partially affected by NCLB.
- This was also the time when early, primary, and ELL education were being given significant boosts in many states. NCLB’s passage itself was accompanied by a short burst of increased federal spending.
- The accountability consequences in states were not immediately felt, in part because states gamed the system and in part because the NCLB law was designed to delay those impacts. While Dynarski writes that “studies using rigorous scientific designs have shown that consequences mattered,” he fails to share these sources.
The Cato Institute’s Neal McCluskey gave testimony before Congress about the NAEP trends, walking through these points in more detail than we’re doing here. But his conclusion is worth quoting:
Given all this, at best one can say that No Child Left Behind may have had some positive effect on underserved 4th and 8th graders, but no discernable effect by the time students neared the end of elementary and secondary education. That means we have no evidence of any lasting effect — by far the most important outcome — and some evidence of short-term effects for students when in grades four and eight. And none of this can be conclusively pinned to NCLB because numerous variables affect outcomes.
This is very consistent with the conclusions reached by the authoritative National Research Council about test-based incentive programs. Even given the gains in math scores of 9-year-olds since 1999, the report concludes that “the overall effects on achievement tend to be small and are effectively zero for a number of [test-based incentive] programs.”
Let’s remember that President Bush did not sell NCLB as able to prompt a few years of small gains for 9-year-olds in math. Instead, the promise was that almost all students would be proficient by 2014—that no child would be left behind. Now that we’re 13 years beyond that promise, it feels as if the Bush Institute’s Dynarski is attempting to move the goal posts about 99 yards.
As we set forth in the NEPC Policy Memo, NCLB did have a large impact on the nation’s classrooms, much of it troubling. The pressure to raise test scores resulted, for instance, in a shift toward test-prep, a shift away from more open-ended inquiry, and a narrowing of the curriculum. Looking just at that last issue, the concern is that NCLB shifted instructional time to math and reading to an unhealthy extent. In the schools where this happened, one would hope that this shift would indeed show up in NAEP math and reading scores. (As noted, there’s some solid research that it did so, at least for those 9-year-olds in mathematics.) But at what cost to other learning? In any case, it would take a powerful set of rose-colored glasses to see these meager results as being a success.
Here’s the part of our Policy Memo that Dynarski is attempting to rebut:
Since NCLB became law in 2002, students may have shown slight increases in test scores, relative to pre-NCLB students. Looking at the results of the National Assessment of Educational Progress (NAEP), however, any test score increases over the pre-NCLB trend are very small, and they are minuscule compared to what early advocates of NCLB promised. We as a nation have devoted enormous amounts of time and money to the focused goal of increasing test scores, and we have almost nothing to show for it. Just as importantly, there is no evidence that any test score increases represent the broader learning increases that were the true goals of the policy—goals such as critical thinking; the creation of lifelong learners; and more students graduating high school ready for college, career, and civic participation. While testing advocates proclaim that testing drives student learning, they resist evidence-based explanations for why, after two decades of test-driven accountability, these reforms have yielded such unimpressive results.
This remains our conclusion.
Dr. Dynarski’s rebuttal does not end there. The format for each of his remaining points is the same: a re-phrased exposition of our “claim” followed by a short rhetorical rejoinder that, with only one exception, fails to cite any supporting source. We discuss each below.
“They assert that annual tests have caused over-testing.”
This is followed by an analogy about speeding and careless driving, the relevance or meaning of which is not entirely clear to us. But Dynarski’s point is then stated as, “getting rid of tests will not solve education’s problems.” On that, we certainly agree. As we explain in the part of the NEPC Policy Memo that mentions over-testing (with emphasis added):
There is now a parent-led backlash against “over-testing,” and politicians in both major parties are paying attention. These parents point to the time spent administering the tests themselves as well as to the diversionary effects of high-stakes testing on curriculum and instruction—which include narrowed curriculum, teaching to the test, and time spent preparing for the high-stakes assessments.[10]
Nevertheless, the debate in Washington, D.C., largely ignores the fundamental criticism leveled by parents and others: testing should not be driving reform. Often missing this point, many politicians have begun to call merely for reducing or shortening the tests. Some also want to eliminate the federal push to use the tests for teacher evaluation while at the same time leaving untouched the test-driven accountability policies at the center of education reform. Other politicians are less interested in whether testing mandates continue than whether those mandates come from the states or from the federal government.
This kind of tinkering at the margins is just more of the same; the past decades have seen a great deal of attention paid to technical refinement of assessments—their content, details, administration, and consequences. In the words of long-time accountability hawk Chester Finn, “NCLB Accountability is Dead; Long Live ESEA Testing.” But the problem is not how to do testing correctly. In fact, today’s standardized assessments are probably the best they’ve ever been. The problem is a system that favors a largely automated accounting of a narrow slice of students’ capacity and then attaches huge consequences to that limited information.
Testing used as a diagnostic or summary instrument for children’s learning can be a helpful tool. It is harmful, however, to use students’ test scores as a lever to drive educational improvement. This use of testing is ill-advised because, as described below, it has demonstrably failed to achieve its intended goal, and it has potent negative, unintended consequences.
Our reading of Dr. Dynarski’s argument is that he’s among those who favor tinkering in ways that cut back marginally on testing in what he calls “over-zealous” states. But these states are rationally responding to NCLB-related incentives to place testing at the center of their schooling policies and practices.
“Accountability has had negative consequences on teaching.”
Dynarski writes, “It is bold to claim teachers have drained their classrooms of creativity and enthusiasm for learning, and no longer help their students develop their individual potential and grow into responsible adults. Bold claims like this should be backed by evidence.” Of course, we didn’t make that claim, nor would we. But we do point to research showing that NCLB and other test-based accountability policies give schools, principals and teachers incentives to shift time and focus away from the broader, more engaging teaching and learning that we all—teachers, parents and researchers alike—would prefer.
We agree, however, with Dynarski when he writes that “claims like this should be backed by evidence.” That’s why we wrote a research-based analysis that includes references to evidence. The Dynarski rebuttal focuses in particular on the possibility that teacher morale is lowered by test-based accountability policies. This wasn’t a focus of our Policy Memo, but there is in fact research on the issue, suggesting burnout and a drop in morale. There is also, we should note, a growing number of anecdotal accounts of teachers who left teaching because of the test-based accountability push. Nor should we ignore the teachers who lost their jobs and the displaced students who, because of school closures and other test-based sanctions, might offer the subjective opinion that this accountability system does indeed have “negative consequences.”
“They claim that accountability has harmed the nation’s ‘democratic and economic goals and those of the citizenry.’”
The rebuttal falsely states that we “offer no evidence of this claim.” Our statement (along with the citation to a very thorough and thoughtful treatment of the issue) was as follows:
Whether our goals are for citizenship or a well-prepared workforce, the narrowing of curriculum and constraining of instruction is harmful to the nation’s democratic and economic goals and those of the citizenry. [The test focus has had the negative effect of] marginalizing values and skills that help students develop the ability to cooperate, solve problems, reason, make sound judgments, and function effectively as democratic citizens.
“They assert that test scores are limited as a measure of a student’s learning.”
Here’s what we wrote in the NEPC Policy Memo:
We stress here that tests are useful when applied to their intended purposes and when there is legitimate evidence to support those purposes. Although measuring outcomes does not directly enrich learning, our schools do need disaggregated and useful information about how schools are serving students. This is an important part of a healthy evaluative feedback loop. The problem is not in the measurements; it is in the fetishizing of those measurements. It is in the belief that measurements will magically drive improvements in teaching and learning. It is in the use of test scores to issue facile admonishments: try harder! teach smarter! retain the child in third grade! reconstitute the school staff! It is in the singular focus on achievement outcomes to the exclusion of focusing on children’s opportunities to learn or on the system’s needs.
Dr. Dynarski objects to this critique. Although he doesn’t directly defend the usefulness of the current test-score based system, he does ask questions about alternatives. Here’s what he writes:
They do not put forward an alternate measure, one that is informative to parents and comparable between schools and districts. Are they arguing that what it means to learn is so vague and multi-faceted that it can’t be measured? If so, they should follow their logic and call for an end to teachers giving students grades and an end to states having standards of what students should know.
There’s a conflation here of two different types of evaluations: classroom evaluations used for grades and report cards, versus evaluations of teachers, schools and districts based on students’ scores on high-stakes standardized tests. But we would agree with the statement that student learning is in fact so multifaceted that it makes more sense to turn to their teachers for grades rather than to rely on a superficial testing structure. Those grades have been shown to be a better predictor of students’ college success than are SAT scores, for example. We do want to be clear, however, that our NEPC Policy Memo does not address the usefulness of standards, which, when developed collaboratively with educators, can be beneficial.
As for alternatives, the Policy Memo points readers to, among other ideas, universal accountability systems and the indicators framework developed by Brown University’s Annenberg Institute for School Reform. NEPC has also published Policy Briefs that examine the Inspectorate model and consider new and creative ways to shape data-driven accountability. Others have offered additional ideas, such as community-based accountability. These ideas illustrate that we do not need to be locked in to our current test-driven accountability systems.
“They assert that the law is all about ‘punitive interventions.’”
“It is curious,” Dynarski writes, “to apply the label of ‘punitive’ to using test results to identify which schools need improvement.” Just to be clear, identifying schools for improvement definitely is not punitive; moving Title I funding out of the school, forcing closures, etc.—that’s what’s punitive. We would also describe as punitive the post-NCLB focus, during the Obama administration, on using students’ test scores for high-stakes evaluations of principals and teachers.
“They claim education needs more resources.”
This is the final point and is accompanied by the warning or threat that if we “eviscerate accountability [then] taxpayer generosity may dry up.” Dynarski writes, “Education spending rose steadily until the economic crisis. It’s now rising again. … There are always plenty of ideas and enthusiasm for spending more money, but let’s at least note that a lot already is spent.” In truth, funding adequacy and funding fairness were both hard hit by the Great Recession.
Unfortunately, we as a society have consistently underfunded our neediest children, and this inequality has been readily apparent for at least a half-century. While Dr. Dynarski doesn’t really deny that, he doesn’t seem pleased that our Policy Memo points it out, nor does he see why or how test-focused accountability systems might shift policy attention away from a focus on resources and opportunities.
In truth, we can’t be sure that policymakers will turn to an opportunity-to-learn agenda if they move away from a testing agenda. But we do know that the testing agenda is bankrupt and has coincided with a particularly discouraging period of time for opportunity advocates. As we concluded in the NEPC Policy Memo:
The ultimate question we should be asking isn’t whether test scores are good measures of learning, whether growth modeling captures what we want it to, or even whether test scores are increasing; it is whether the overall impact of the reform approach can improve or is improving education. Boosting test scores can, as we have all learned, be accomplished in lots of different ways, some of which focus on real learning but many of which do not. An incremental increase in reading or math scores means almost nothing, particularly if children’s engagement is decreased; if test-prep comes at a substantial cost to science, civics, and the arts; and if the focus of schooling as a whole shifts from learning to testing.
The way forward is not to tinker further with failed test-based accountability mechanisms; it is to learn from the best of our knowledge. We should not give up on reaching the Promised Land of equitable educational opportunities through substantially improved schooling, but we must study our maps and plan a wise path. This calls for a fundamental rebalancing—which requires a sustained, fair, adequate and equitable investment in all our children sufficient to provide them their educational birthright, and an evaluation system that focuses on the quality of the educational opportunities we provide to all of our children. As a nation, we made our greatest progress when we invested in all our children and in our society.
(You can find the fully annotated memo here, at the National Education Policy Center website.)
 “Open Letter to Congress and the Obama Administration from Educational Researchers Nationwide.” Retrieved February 22, 2015 from http://tinyurl.com/kt897nm.
 NAEP 2012: Trends in Academic Progress (2014). Washington DC: US Department of Education. Retrieved February 22, 2015 from http://nces.ed.gov/nationsreportcard/subject/publications/main2012/pdf/2013456.pdf
 Lee, J. & Reeves, T. (June 2012). Revisiting the impact of NCLB high-stakes school accountability, capacity and resources: State NAEP 1990-2009 reading and math achievement gaps and trends. Education Evaluation and Policy Analysis, 34(2), 209-231.
 The U.S. Department of Education allowed states to set their “Adequate Yearly Progress” targets so that they were backloaded, meaning that most of the progress would not be expected until after 2008 or so. See Linn, R. L. (2003). Accountability: Responsibility and Reasonable Expectations. CSE Report 601. National Center for Research on Evaluation, Standards, and Student Testing.
Retrieved February 22, 2015 from http://www.cato.org/publications/testimony/has-no-child-left-behind-worked.
 National Research Council (2011). Incentives and Test-Based Accountability in Education. Committee on Incentives and Test-Based Accountability in Public Education, M. Hout and S.W. Elliott, Editors. Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. Retrieved February 11, 2015, from
 Center on Education Policy (2005). NCLB: Narrowing the Curriculum. NCLB Policy Brief 3. Retrieved February 22, 2015 from http://www.cep-dc.org/displayDocument.cfm?DocumentID=239.
 Dee, T. S., & Jacob, B. A. (2010). The impact of No Child Left Behind on students, teachers, and schools. Brookings Papers on Economic Activity, 149-20. Retrieved February 22, 2015 from http://www.brookings.edu/~/media/Projects/BPEA/Fall%202010/2010b_bpea_dee.PDF
 Lee, J. & Reeves, T. (June 2012). Revisiting the impact of NCLB high-stakes school accountability, capacity and resources: State NAEP 1990-2009 reading and math achievement gaps and trends. Education Evaluation and Policy Analysis, 34(2), 209-231;
National Research Council (2011). Incentives and Test-Based Accountability in Education. Committee on Incentives and Test-Based Accountability in Public Education, M. Hout and S.W. Elliott, Editors. Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. Retrieved February 11, 2015, from http://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education.
Lee, J. (2006). Tracking achievement gaps and assessing the impact of NCLB on the gaps: An in-depth look into national and state reading and math outcome trends. Cambridge, MA: The Civil Rights Project, Harvard University.
 Nelson, H. (2013). Testing More, Teaching Less: What America’s Obsession with Student Testing Costs in Money and Lost Instructional Time. Washington DC: American Federation of Teachers. (Finding that “the time students spend taking tests ranged from 20 to 50 hours per year in heavily tested grades. In addition, students can spend 60 to more than 110 hours per year in test prep in high-stakes testing grades.”) Retrieved February 7, 2015, from http://www.aft.org/sites/default/files/news/testingmore2013.pdf;
Garcia, N. (February 18, 2014). Survey: Colorado teachers say there’s too much testing. Colorado Chalkbeat. (“Teachers said they spend at least 50 of 180 days during the academic year administering state and district tests, with language arts specialists spending the most time on mandated assessments.”) Retrieved February 7, 2015, from http://co.chalkbeat.org/2014/02/18/survey-colorado-teachers-say-theres-too-much-testing;
Rogers, J., & Mirra, N. (2014). It’s about time: Learning time and educational opportunity in California high schools. UCLA: UCLA Institute for Democracy, Education, and Access. (Finding that test prep time was about eight days in lower-income schools and about four days in wealthier schools; p. 13). Retrieved February 7, 2015, from http://idea.gseis.ucla.edu/projects/its-about-time/Its%20About%20Time.pdf;
McMurrer, J. (2008). NCLB Year 5: Instructional time in elementary schools: A closer look at changes for specific subjects. Washington DC: Center on Education Policy. Retrieved February 7, 2015, from http://www.cep-dc.org/displayDocument.cfm?DocumentID=309;
Crocco, M. S., & Costigan, A. (2007). The narrowing of curriculum and pedagogy in the age of accountability: Urban educators speak out. Urban Education, 42(6), 512-535.
 Finn, C. E. (February 6, 2015). NCLB accountability is dead; long live ESEA testing (blog post). Education Next. Retrieved February 7, 2015, from http://educationnext.org/nclb-accountability-dead-long-live-esea-testing.
 See the discussion of “consequential validity” in:
Welner, K. G. (2013). Consequential validity and the transformation of tests from measurement tools to policy tools. Teachers College Record, 115(9), 1-6.
 One careful study was unable to measure an NCLB impact on morale: Grissom, J. A., Nicholson-Crotty, S., & Harrington, J. R. (2014). Estimating the Effects of No Child Left Behind on Teachers’ Work Environments and Job Attitudes. Educational Evaluation and Policy Analysis, 36(4), 417-436.
Other research does suggest a negative effect or association. See:
Rubin, D. I. (2011). The disheartened teacher: Living in the age of standardisation, high-stakes assessments, and No Child Left Behind (NCLB). Changing English, 18(4), 407-416.
Smith, J. M., & Kovacs, P. E. (2011). The impact of standards-based reform on teachers: the case of ‘No Child Left Behind’. Teachers and Teaching: theory and practice, 17(2), 201-225.
Dworkin, A. G., & Tobe, P. F. (2014). The Effects of Standards Based School Accountability on Teacher Burnout and Trust Relationships: A Longitudinal Analysis. In Trust and School life (pp. 121-143). Springer Netherlands.
 See, for example, this list compiled in mid-2013 of public resignation announcements: http://teacherunderconstruction.com/2013/05/26/updated-list-of-public-teacher-resignations/
 Schoen, L., & Fusarelli, L. D. (2008). Innovation, NCLB, and the fear factor: The challenge of leading 21st-century schools in an era of accountability. Educational Policy, 22(1), 181-203.
For a discussion of 21st Century Skills, see http://www.imls.gov/about/21st_century_skills_list.aspx.
 Howe, K. & Meens, D. (2012). Democracy left behind: How recent reforms undermine local school governance and democratic education. Boulder, CO: National Education Policy Center. Retrieved February 22, 2015 from http://nepc.colorado.edu/publication/democracy-left-behind.
 Soares, J. A. (2012). SAT wars: The case for test-optional college admissions. New York: Teachers College Press.
 Standards are not the same thing as a high-stakes testing and accountability system built up around those standards. See Taylor, G., Shepard, L., Kinner, F. & Rosenthal, J. (2001). A Survey of Teachers’ Perspectives on High-Stakes Testing in Colorado: What Gets Taught, What Gets Lost. Boulder, CO: Center for Research on Evaluation, Standards, and Student Testing & Center for Research on Evaluation, Diversity, and Excellence. Retrieved February 22, 2015 from http://nepc.colorado.edu/files/Cosurvey.pdf.
 Ryan, K.E., Gandha, T., & Ahn, J. (2013). School Self-evaluation and Inspection for improving U.S. Schools? Boulder, CO: National Education Policy Center. Retrieved February 22, 2015 from http://nepc.colorado.edu/publication/school-self-evaluation.
 Hargreaves, A. & Braun, H. (2013). Data-Driven Improvement and Accountability. Boulder, CO: National Education Policy Center. Retrieved February 22, 2015 from http://nepc.colorado.edu/publication/data-driven-improvement-accountability
 Vasquez Heilig, J., Ward, D.R., Weisman, E. & Cole, H. (2014). Community-Based School Finance and Accountability: A New Era for Local Control in Education Policy? Urban Education, 49(8), 871-894.
 Baker, B. D., Sciarra, D. G., & Farrie, D. (2014). Is School Funding Fair? A National Report Card. Education Law Center. Retrieved February 22, 2015 from http://www.schoolfundingfairness.org.
Baker, B. D. (2014). Evaluating the recession’s impact on state school finance systems. Education Policy Analysis Archives, 22(91). Retrieved February 22, 2015 from http://epaa.asu.edu/ojs/article/view/1721/1357.