DIBELS, or Dynamic Indicators of Basic Early Literacy Skills, is a set of procedures and measures developed at the University of Oregon for assessing literacy development in students from kindergarten through sixth grade. The DIBELS website says that the measures — one-minute fluency exercises — were “specifically designed to assess the five early literacy components: Phonological Awareness, Alphabetic Principle, Vocabulary, Comprehension, and Fluency with Connected Text,” but critics say its validity is very weak. (Here’s an extensive critique.) Nonetheless, DIBELS has become widely used in schools around the country since 2001 — reaching some 2 million children a year. In this post, Rachael Gabriel, an assistant professor of reading education in the Neag School of Education at the University of Connecticut, writes about continuing problems with DIBELS and how struggling readers are affected.
By Rachael Gabriel
Connecticut has passed legislation that includes new requirements for diagnostic screening tools for reading in kindergarten through the third grade. Word on the street is that the new requirements align well with one assessment in particular: DIBELS, or Dynamic Indicators of Basic Early Literacy Skills, an early literacy assessment used in over 15,000 schools nationwide, including many in Connecticut. Why is this a problem?
DIBELS often labels thoughtful readers as needing “intensive remediation” by only considering a reader’s speed and accuracy, in the same way that Body Mass Index (BMI) often misses the boat by, for example, labeling Brad Pitt (circa Fight Club years) and England’s entire rugby team “obese” by only considering the ratio of weight to height.
Because DIBELS measures awareness of letter sounds by asking kids to read nonsense words, students who change nonsense words into real words in an effort to make them make sense are often categorized as in need of “intensive” remediation.
Because DIBELS measures progress by the number of words students can read in 60 seconds, students who self-monitor for meaning by slowing down, or those who reread to ensure understanding, are often categorized as in need of “intensive” remediation.
Because DIBELS measures comprehension by the number of words students say when retelling a story, students who are more succinct or simply leave out filler words (because they understand what a summary retell should be) are categorized as needing “intensive” remediation for their sophisticated efforts toward comprehension.
Because DIBELS can determine everything from student grouping to teacher evaluation ratings, instructional time is likely lost to a focus on contrived assessment tasks rather than reading thoughtfully for meaning.
This is bad news for students who struggle with reading.
The main benefit of DIBELS (and BMI for that matter) is its efficiency. All alternatives to DIBELS require some teacher judgment and more than 60 seconds, so they don’t meet the state’s new criteria for reading screening tools, which must:
1. Measure phonics, phonemic awareness, fluency, vocabulary and comprehension. (That’s a lot for any single test. DIBELS measures comprehension and vocabulary only by proxy.)
2. Provide opportunities for periodic formative assessment during the school year. (It’s hard to give a comprehensive assessment more than 2-3 times per year. DIBELS can be used repeatedly since it only takes 60 seconds.)
3. Produce data that is useful for informing individual and classroom instruction. (Comprehensive assessments usually compare students to themselves, not each other. DIBELS can inform grouping and measure progress over time since it has little to do with actual reading.)
4. Be compatible with current best practices in reading instruction and research. (This would require an emphasis on meaning, motivation and engagement, not the often meaning-free tasks of DIBELS.)
So, DIBELS doesn’t meet the state’s new criteria either. Nothing does. As much as we’d love a foolproof, efficient, standardized and valid assessment of reading that takes less than a minute to administer, it doesn’t exist. We are left with the best-we-have-but-not-that-great logic that underlies everything from standardized tests of academic achievement to BMI.
Is DIBELS one of the best we have for rapidly screening for reading difficulty? Yes, unfortunately.
Is it potentially dangerous because of the way it defines reading for the purposes of assessment? Yes, definitely.
So what are we to do? Proceed with caution.
Supporters of DIBELS argue that it is only ever to be used as a single indicator. But the last decade of standardized testing has convincingly demonstrated that what is tested will be taught, exactly as it is tested. Similarly, what is suggested by the glossy, colored charts of assessment data that DIBELS software automatically generates will be believed.
Just as a 20-minute consult with a doctor is always better than health advice from an online calculator, a one-on-one conference with a teacher or reading specialist will always be better than DIBELS at diagnosing and understanding reading difficulty, ability and progress.
Though I have no doubt that DIBELS will continue to be used for all its glossy efficiency, adults and children must be continually reminded that it should never be the guiding force behind reading instruction. Adults should know better than to rely on DIBELS because children deserve better than an education built around timed tests of nonsense and rapidly read words.