He’s right about that. But his administration is doing some denying of its own — refusing to accept extensive research on the proper use of standardized tests.
The president and his education secretary, Arne Duncan, have for years been using student standardized test scores to hold students, teachers and principals “accountable,” even though assessment experts say the scores aren’t reliable enough for that purpose. Those experts say that tests should be used only for the purpose for which they were designed, yet the administration keeps finding new and questionable ways to use standardized test results.
Earlier this week, Duncan announced that the administration was tightening its oversight of how states educate special-needs students, applying more stringent criteria. From now on, the department will consider not only whether proper procedures are being followed but also outcomes, including how well these students score on standardized tests and the test-score achievement gap between students with and without disabilities.
How well special education students perform on a test called the National Assessment of Educational Progress, or NAEP, will be one of the factors considered. This marks the first time that NAEP scores have been attached to any education policy with potential consequences; the Education Department could withhold federal funds from states that don’t comply with the new special education regulations, though officials there said that is not something they want to do. But NAEP, a test given every two years to a nationally representative sample of students, wasn’t designed for this purpose. Asked by reporters whether using NAEP this way was turning it into a high-stakes test, Duncan said, “I wouldn’t call it high stakes.” He said his department was using NAEP because, however “imperfect,” it was the “only accurate measurement we have.”
Furthermore, the department will judge states on how many special education students actually take NAEP. In some states, large percentages of students with disabilities are excluded; in Maryland, for example, 66 percent of special-needs students were excluded from the fourth-grade reading test.
Apparently, the department believes that more testing will help special education students achieve more in school. But since No Child Left Behind launched the standardized test-based “accountability” era more than a dozen years ago, there has been no evidence that standardized tests have improved student achievement, or that linking test scores to teacher evaluations has created better teachers.
Special education isn’t the only area in which the Education Department has new standardized testing plans. Duncan announced earlier this month that the department was going to reform the U.S. Bureau of Indian Education.
Nobody would argue that the agency, responsible for the education of tens of thousands of American Indian students, isn’t over-ripe for reform: The agency has had 33 directors in the last 35 years and student outcomes in the education programs and residential facilities for Indian students that it supports are awful. Bureau director Charles Roessel admitted as much at a Senate hearing last month:
The BIE supports education programs and residential facilities for Indian students from federally recognized tribes at 183 elementary and secondary schools and dormitories. Currently, the BIE directly operates 57 schools and dormitories and tribes operate the remaining 126 schools and dormitories through grants or contracts with the BIE. During the 2013-2014 school year, BIE-funded schools served approximately 48,000 individual K-12 American Indian and Alaska Native students and residential boarders. Approximately 3,800 teachers, professional staff, principals, and school administrators work within the 57 BIE-operated schools. In addition, approximately twice that number work within the 126 tribally-operated schools….The BIE faces unique and urgent challenges in providing a high-quality education. These challenges include: difficulty attracting effective teachers to BIE-funded schools (which are most often located in remote locations), the current Interior regulatory requirement that BIE-funded schools comply with the (23 different) states’ academic standards in which they are located, resource constraints, and organizational and budgetary fragmentation. A lack of consistent leadership — evidenced by the BIE’s 33 directors since 1979 — and strategic planning have also limited the BIE’s ability to improve its services. Furthermore, over the years, federal American Indian education has been contracted or granted to tribes in approximately two-thirds of the BIE school system, but the BIE’s management structure and budget have not evolved to match the BIE’s long-term trajectory of increased tribal control over the daily operation of schools. Currently, the Department is funding approximately 67 percent of the need for contract support costs for tribally-controlled schools. Each of these challenges has contributed to poor outcomes for BIE students.
But will grant competitions and standardized test-based evaluations of teachers actually help? That’s what the administration said it wants to do: initiate efforts very similar to the Race to the Top contest for federal K-12 education funding, which required state competitors to promise specific Duncan-approved reforms, including linking teacher evaluations to test scores. Does the administration really think that controversial evaluations will entice more teachers to schools already facing teacher shortages?
For years assessment experts have been warning about misusing standardized tests. They have said that a popular way of linking student test scores to teacher evaluations, known as “value-added measures” or VAM, is not valid or reliable. In fact, the American Statistical Association, the largest organization in the United States representing statisticians and related professionals, said in an April report that value-added scores “do not directly measure potential teacher contributions toward other student outcomes” and that they “typically measure correlation, not causation,” noting that “effects — positive or negative — attributed to a teacher may actually be caused by other factors that are not captured in the model.”
A 2009 warning by the Board on Testing and Assessment of the National Research Council of the National Academy of Sciences stated that “VAM estimates of teacher effectiveness should not be used to make operational decisions because such estimates are far too unstable to be considered fair or reliable.” The Educational Testing Service’s Policy Information Center has said there are “too many pitfalls to making causal attributions of teacher effectiveness on the basis of the kinds of data available from typical school districts,” and Rand Corp. researchers have said that VAM results “will often be too imprecise to support some of the desired inferences.”
These are just a few of the many reports on this issue, yet Duncan and Obama keep right on denying this extensive research. I recently asked the Education Department what Duncan thinks of all this research, and department spokesman Dorie Nolt said in an e-mail, reflecting Duncan’s position:
“Including measures of how well students are learning as part of multiple indicators of educator effectiveness is part of a set of long-needed changes that will improve classroom learning for kids. Growth measures are a significant improvement over the system that existed before, which failed to produce useful distinctions in teacher performance. Growth measures — including value-added measures — focus attention on student learning and show progress. While these measures are better than what existed before, educators will continue to improve them, and sharp, critical attention from the research community can help.”
As to whether Duncan is aware of the latest research, she said:
“We keep track of all major research on this topic.”
They keep track of it, but they choose not to believe it. Who does that sound like?