
It’s called “competency-based learning,” and it’s the newest thing in education. What is it? Who likes it? Who doesn’t, and why?

On its face, competency-based learning sounds good. Students learn material and move on when they have mastered the material, going at their own pace. But how exactly do students get this sort of education and what are the consequences? Veteran educator Anthony Cody writes in the following primer that competency-based education is fundamentally a way to push kids onto computers to learn — and to take test after test to prove their “competencies.”

Cody worked in high-poverty schools of Oakland, California, for 24 years, 18 of them as a middle school science teacher. He was one of the organizers of the Save Our Schools March in Washington, D.C., in 2011, and he is a founding member of the nonprofit Network for Public Education. A graduate of the University of California at Berkeley and San Jose State University, he now lives in Mendocino County, California. This appeared on his worthwhile Living in Dialogue blog, and he gave me permission to republish it.

By Anthony Cody

We have been badgered for the past 14 years by reformers insisting on the fierce urgency of change, and they have had their way — twice! First came seven years of the test-centric No Child Left Behind, followed by the past seven years of Race to the Top, and now the “next generation” of tests, which were promised to be “smarter,” computer-adaptive, and quicker at delivering results. None of it worked. Scores on the independent National Assessment of Educational Progress tests are flat or down. The SBAC and PARCC Common Core-aligned tests are more difficult without being any “smarter” in telling us what our students can do. The idea that these tests could somehow promote and measure creativity and critical thinking has been debunked. The growing “opt out” movement poses a huge threat to the standardized testing “measure to manage” paradigm.

So what is to be done?

Reinvent the tests once again, using technology. And who better for the job than Tom Vander Ark, formerly of the Gates Foundation and now associated with a long list of education technology companies? The latest package of solutions is being called “competency-based learning,” and it was featured prominently in the Department of Education’s latest “Testing Action Plan.”

Here is how Vander Ark frames the challenge:

Jobs to be done. To get at the heart of value creation, Clayton Christensen taught us to think about the job to be done. Assessment plays four important roles in school systems:

  1. Inform learning: continuous data feed that informs students, teachers, and parents about the learning process.
  2. Manage matriculation: certify that students have learned enough to move on and ultimately graduate.
  3. Evaluate educators: data to inform the practice and development of educators.
  4. Check quality: dashboard of information about school quality, particularly what students know and can do and how fast they are progressing.

Initiated in the dark ages of data poverty, state tests were asked to do all these jobs. As political stakes grew, psychometricians and lawyers pushed for validity and reliability and the tests got longer in an attempt to fulfill all four roles.

With so much protest, it may go without saying, but the problem with week-long summative tests is that they take too much time to administer; they don’t provide rapid and useful feedback for learning and progress management (jobs 1 & 2); and test preparation, rather than preparation for college, careers, and citizenship, has become the mission of school. And, with no student benefit, many young people don’t try very hard and increasingly opt out.

Note that the source used to define the phrase “jobs to be done” is Clayton Christensen, who has popularized the business concept of “disruptive innovation,” which is the main framework used by “innovators” like Vander Ark.

So what is “competency-based learning”? Here is Vander Ark’s description:

For states ready to embrace personalized and competency-based learning, CompetencyWorks, an online community and resource supported by iNACOL, outlines five components of competency-based education (CBE).

The definition sets a high bar by requiring well-stated learning targets, powerful learning experiences, better reporting systems, and new rules for matriculation management. It focuses primarily on the first two jobs: student learning and progress management.

And central to the model (though not stated above) is that this process is managed through technology. Students are given tasks and assignments to complete on computers, which perform the “formative assessments.”

This is explained more clearly in the links at the bottom of Vander Ark’s post. One post, “Path to Personalization: Better Models & Better Tests,” describes the new tests that are envisioned. It references work by Gene Wilhoit and Linda Darling-Hammond, which I addressed in this post a year ago. He writes:

The biggest opportunity is for assessment frameworks that support competency-based learning sequences. Districts and networks of schools could develop assessment systems that, according to iNACOL, “Measure individual student growth along personalized learning progressions,” and, “Use multiple measures of learning, including performance-based assessments.” A state could create an innovation zone to pilot the use of on-demand (or frequently scheduled) end-of-course demonstrations of learning to manage student progress, thus reducing the need for end-of-year exams.

In this post, “Teachers Deserve Better Tools for Tracking Subskills,” Vander Ark makes it clear that teachers will be required to manage more data than ever.

To boost student engagement and simplify stakeholder reporting, the solutions should be, as Michael Fullan suggests, “irresistibly engaging” for students and “elegantly efficient” for teachers. Students should be able to log into a mobile application and quickly understand what they need to learn and options for demonstrating mastery. Teachers should be able to efficiently monitor progress, benefit from informed recommendations and dynamic scheduling, and pinpoint assistance for struggling students.

Of course, no system of assessment would be complete without formative assessments:

Digital learning and the explosion of formative data mean the beginning of the end of week-long state tests. By using thousands of formative observations, it will be increasingly easy to accurately track individual student learning progressions. But making better use of the explosion of formative data will require leadership and investment.

This new vision for accountability does include room for juried portfolios of student work. Here is what Vander Ark suggests:

The test-based options tend to be more reliable, while the student work product approaches are more valid and authentic. A jurying process for portfolios can boost reliability but adds cost and complexity. A state could combine both approaches by requiring a series of several ACT tests (Plan, Explore, Compass) and incorporating them into a body-of-evidence or compilation-of-assessments approach. A state could also combine short end-of-course exams with a body-of-evidence approach to gain affordable validity and reliability.

So where does this lead us? We have the test makers defining concepts for students to learn, which are clearly delineated so the learner and the teacher know precisely what they are accountable for. We have frequent “formative assessments” built into assignments that students complete on computers, to be checked by those computers, with tagged data provided to teachers (and presumably to those tasked with supervising teachers).

There are two unwritten assumptions that are constant from the beginning of NCLB and carry through to this new version. The first is that teachers are not trusted to make judgments about what students learn, how they learn it, or how learning is assessed; assessment is defined as the external monitoring of the work inside the classroom. The second assumption is that data and technology must be instrumental in whatever process is devised. The main innovation here is the more thorough and intrusive penetration of the classroom via computers capable of monitoring learning.

Both of these assumptions are unsupported by any evidence or track record, in terms of their ability to enhance learning.

The flat or declining NAEP scores demonstrate that external accountability systems have failed to lift performance. Repeated experiments with technology-based instruction have failed to show any advantage. Virtual charter schools, the ultimate extension of this model, have been shown to be virtually useless.

There IS a track record for juried portfolios, such as those in use at schools in the New York Performance Standards Consortium. I visited one of these schools last winter, and heard how the teachers there work together to define course objectives, and then help their students prepare portfolios demonstrating their achievements. This is authentic work, driven by the teachers, not by some external body. This is the one bright spot in Vander Ark’s vision. But note that it requires neither external oversight nor technology. For that reason, it is rather overshadowed by all the other elements of competency-based learning, and I am not sure how it would survive in the computer-managed environment Vander Ark describes.

The essential feature of our current accountability paradigm is its lack of trust in teachers. This suits those who wish to “disrupt” education quite well: they can come up with one “innovation” after another, and as each one disappoints, they can innovate again. Every time, there is a new status quo to be disrupted and replaced, and a new product to be sold.

As Myron Atkin reminded us, the feature that makes formative assessment work also makes it NOT work when it is packaged and sold.

Formative assessment, so defined, is a pivotal element of everyday classroom teaching. It occurs throughout the school day. It requires collaborative involvement of both teacher and student. And it isn’t something purchased from a vendor that can be used in an identical fashion anywhere, like an instruction book or a cooking recipe.

He goes on to explain:

The key benefits of formative assessment emphasized in the research literature are associated with changes in the classroom that result when teachers and students collaborate closely in examining the quality of student work. What does quality look like? What might the student do to improve school work to bring it to a higher quality than it is right now? This integration of teaching, learning, and assessment is complex work, but potent. It takes time and effort: hours, days, weeks, and months – not the periodic 15 or 20 minutes needed to respond to questions purchased from a remote “item bank” developed by the testing companies to foreshadow the final examination. Reporting mini-test scores to the students and even discussing common incorrect answers has little relationship to the type of feedback studied by Black and Wiliam that produced such large gains in achievement.

This sort of formative assessment also takes expertise on the part of teachers. The externalization of this process disempowers and de-skills teachers, leaving them the intellectually barren work of monitoring student performance based on computer-assessed tasks.

The presence of portfolios in this largely technology-driven vision is not enough to make it worthwhile. As Nancy Bailey points out, this vision misses so many “competencies” that cannot be measured by tests or through a computer. Once again, this is old wine in a shiny new bottle, and once again, it has become vinegar.