The graduation caps have been thrown, and summer bliss beckons. The SAT is nothing but a hazy, horrible memory — which is perhaps why the College Board chose this month to announce its latest tweak to the test from hell.
This month, the board revealed that it had been experimenting with an “Environmental Context Dashboard,” an extra batch of data that will be delivered to admissions officers alongside students’ raw, out-of-1600 scores on the SAT. The board has been field-testing the dashboard at 50 colleges and universities, with plans to roll out the tool to 150 more this fall and then more widely in 2020.
The dashboard displays a great deal of information, from the average SAT and AP test performance at a student's high school to the crime levels in their neighborhood, all compared with national norms. But it has attracted criticism for also providing colleges with what the board calls an "overall disadvantage level," a single number (ranging from 1 to 100) calculated from 15 factors describing a student's high school, neighborhood and family environment. Fifty is set as the average, and higher numbers indicate that an applicant comes from a more challenging environment than average. The resulting score will be reported to admissions officers only, not to the test-taker. And the number doesn't take into account any of the test-taker's personal characteristics, just their environment.
The Wall Street Journal quickly dubbed it an “adversity score,” a name that the College Board rejected to little effect. In an already overheated debate about privilege, disadvantage, and victimhood vs. merit, any phrase that helps conjure more outrage is likely to stick.
But outrage directed at the new score is misplaced.
The new tool is confirmation from the College Board that the SAT has failed as a holistic measure of college-worthiness. But it also suggests that gatekeepers to a college degree are finally willing to acknowledge that access to higher education is far from equal, and are looking for creative ways to open their doors.
SAT was originally an acronym for “Scholastic Aptitude Test,” but the longer title was dropped in the mid-1990s as it became apparent that “aptitude” was neither immutable nor innate, and that the markers of intelligence it depended on were often skewed by gender, race and class. The acronym became a trademark, nothing more, and the board says today that the test is meant only to measure a core of reading and math skills.
Today, what the SAT seems best at predicting is wealth. Students from families in the top 5 percent of incomes score an average of 388 points higher than those from families in the bottom 20 percent. (Out of 1600 possible points, that gap amounts to nearly a quarter of the total.) Students whose parents make more than $200,000 a year will, on average, do better than those whose parents make $160,000; the correlation is that tight. And that alone may explain why the median SAT scores at top colleges lie above the 95th percentile.
The College Board’s new Environmental Context data is meant to give admissions officers reason to take a second look at students whose SAT scores may be lower but are outstanding relative to the disadvantages they face, indicating particular resourcefulness or grit. But it would be more effective, and more important, to put that effort into doing away with such hobbles altogether. What if schools were more equally funded? What if a student’s educational opportunity were divorced from their family’s real estate assets?
The graduates of the top 200 elite high schools make up a full third of the student body at the most prestigious colleges: the Ivy League, Stanford, MIT. Those high schools also tend to be white and wealthy, the ones left standing after a generation of disinvestment in public secondary education driven by racial self-segregation and poverty. Giving less-obvious applicants a chance is well and good, but real equity will take more than an end-stage score adjustment.
The real question is not whether the colleges should take environmental context into account when looking at test scores (they should), or whether administrators will find this particular method of data collection helpful (they do, according to admissions officers from Yale, Trinity University and elsewhere who took part in the field trials). Instead, it is whether this new way of scoring will be used as an opportunity or as an excuse. It’s helpful to define existing barriers to opportunity and attempt to account for them in the context of college selection, but it would be far more so to address the barriers where they stand.