I’ve written about some of the problems with new standardized tests given to students in New York — including mistakes with scoring for gifted and talented programs, concerns about product placement in test questions, and badly written questions and possible answers — but here is a piece on a different issue surrounding the exams and the company that designed them, Pearson. It was written by Kathleen Porter-Magee, a Bernard Lee Schwartz Policy Fellow at the Thomas B. Fordham Institute and the editor of the Common Core Watch blog, where this post first appeared. Previously, she served as a middle school and high school teacher and as the founding curriculum and professional development director for Achievement First, a network of public charter schools. She can be found on Twitter at @kportermagee.
By Kathleen Porter-Magee
It’s testing season in New York, which can mean only one thing: It’s open season on Pearson, the corporation everyone loves to hate. But this time, when the company crossed a serious line, far too many state leaders and reformers are holding their fire.
To date, most of the anti-Pearson ire has been focused on a calculation error that led 5,000 New York City students to be incorrectly told that they didn’t qualify for the city’s Gifted and Talented program. Sloppy, no doubt, but not corrupt. (The error has since been corrected, and all qualified students are now eligible.)
But there is a far more serious transgression that has gotten very little attention, and it’s one that threatens the validity of the English Language Arts (ELA) scores for thousands of New York students and raises serious questions about the overlap between Pearson’s curriculum and assessment divisions.
The New York Post and Daily News reported that the Pearson-developed New York State ELA sixth- and eighth-grade assessments included passages that were also in a Pearson-created, “Common Core–aligned” ELA curriculum. This meant that students in schools that purchased and used instructional materials from Pearson had an enormous advantage over those who didn’t.
Predictably, reform critics pounced on the announcement. Leonie Haimson, one of New York’s most outspoken education reform opponents, argued, “The state should be obligated to throw out every item on the exams based on passages in Pearson textbooks assigned elsewhere in the state.”
Also predictably, the testing giant dismissed the overlap as an immaterial consequence of a standards environment that demands an “emphasis on using nonfiction texts in the exams.” In other words, its position is that, as long as the questions were different, the duplication of a passage doesn’t matter.
What’s most troubling, though, is that officials at the New York Department of Education are equally unfazed. According to the Post, when questioned about the discovery, department spokesman Tom Dunn proclaimed that such overlap was going to happen. “The alternative,” he explained, “would be to exclude many authors and texts that are capable of supporting the rigorous analysis called for by the Common Core.”
This reaction is particularly surprising coming from New York, where officials have been investing enormous time and money into developing curriculum materials, independent of the Pearson publishing machine, that are high quality and faithfully aligned to the Common Core — work that will inevitably be undermined if Pearson continues to link the New York assessment directly to its own curriculum.
But does reading a passage in advance of a test really give some students an advantage over others? Surely the students didn’t memorize the passage. And Pearson representatives insisted that the questions on the test were different from those in the curriculum. So, what’s the problem?
A lot, actually. That’s because a test of reading comprehension isn’t just measuring a series of decontextualized skills. As E.D. Hirsch has long argued, reading comprehension tests are assessments of student background knowledge as much as anything else. In fact, as Hirsch and Robert Pondiscio argued in a must-read 2010 piece in the American Prospect:
Even simple texts, like those on reading tests, are filled with gaps — presumed domain knowledge — that the writer assumes the reader knows. … Researchers have consistently demonstrated that in order to understand what you’re reading, you need to know something about the subject matter.
They go on to explain, “Students who are identified as ‘poor readers’ comprehend with relative ease when asked to read passages on familiar subjects, outperforming even ‘good readers’ who lack relevant background knowledge.”
That means that students who read the Pearson passages before encountering them on the state test had the opportunity to fill the gaps in their own knowledge — whether through class discussion or simply by reading and answering the questions provided in the curriculum — before they took the test. And that means that the validity of a test that aims to differentiate between “good” and “poor” readers is necessarily called into question.
Unfortunately, it seems that New York education officials don’t realize how significant this problem is. Or even that it is a problem. (Merryl Tisch, New York Board of Regents chancellor, actually defended the quality of the assessments, boasting that, thanks to a rigorous new quality-control review, the Department of Education had avoided the kinds of problems that led to last year’s now-famous pineapple scandal.) And that failure to recognize what may be a far more serious and consequential challenge may be the biggest red flag that Common Core assessment decisions are in trouble in the Empire State.
As for Pearson, it’s no stranger to these kinds of conflict-of-interest accusations. In the U.K., Pearson both administers a state “A-Level” qualifying exam — the results of which are used to inform, among other things, university admissions — and sells textbooks aimed at helping students prepare for those assessments. Last November, U.K. officials launched an investigation into “possible conflicts of interest within its role as both a publisher of textbooks and an issuer of academic qualifications.”
It’s a textbook (pardon the pun) anti-trust scenario: By developing both the test and curriculum materials, Pearson will basically control the market, regardless of the quality of their materials. After all, if you were a New York principal and learned that Pearson included passages from their curriculum on the state testâthe results of which are used to inform everything from student to teacher to school accountabilityâwhose curriculum would you buy?
Given the importance of statewide assessment to standards- and accountability-driven reform, there is little room for error. Reform advocates need to be vigilant in ensuring that standards-aligned tests are rigorous and valid. And that means taking a much harder look at the relationship between test development and curriculum development — and perhaps taking the time to learn lessons from the missteps of our fellow reformers across the pond.