Like many people, I am appalled at how little writing American students are asked to do. But when we crotchety advocates complain about this to teachers, we have to shut up when they point to a seemingly insoluble problem.
If we required students to write a lot, teachers would have to spend many extra hours reading and commenting on that work. They would have no lives and would have to quit. If we could cut English class sizes in half, the teachers might be able to handle the load, but that won’t happen unless oil is discovered under the football field.
A 21st-century solution, proposed by former Gates Foundation education executive director Tom Vander Ark, is to let computers read and grade the bumper crop of essays. Assessment software, already used to grade essays on the GMAT business school entrance test and other standardized exams, doesn’t need a life and doesn’t cost as much as breathing, pencil-wielding English teachers.
“I want kids to write a lot every day,” Vander Ark wrote on his gettingsmart.com blog. “But many high school classes only require three writing assignments in a semester.
“What if state tests required students to write essays, answer tough questions and compare difficult passages of literature? What if tests provided quick feedback? What if the marginal cost was close to zero? What if the same capability to provide performance feedback on student writing was available to support everyday classroom learning?”
Vander Ark is promoting a competition sponsored by the Hewlett Foundation to show that auto essay scoring is good enough for low-stakes state testing and to encourage better computer assessments. Testing vendors will compete to accurately score thousands of essays gathered from state testing departments. There will be a $100,000 prize for the independent computer scientist who improves most on the big testing companies’ programs.
This sounds thrilling, except for those gasps of exasperation from every faculty room in America. Maybe machines can score a test and approximate human ratings of grammar, sentence structure and relevance on the limited five- or six-point scales used in such exams, but that is not the same thing as teaching writing.
Some students will need fairly simple adjustments — shorter sentences, more active verbs. Most will need more than that. A teacher may urge them to junk confusing reasoning, develop respect for the accuracy of sources, or show understanding of contrary views.
There aren’t any computers that can respond gently to the objections students often have about corrections to their work: Why shouldn’t I use as many adverbs as I like? How can you say that fact is wrong when I found it in The Washington Post? What my grandma told me is very relevant to this topic, and you don’t even know her!
I didn’t learn to write until I was in college, and only after I joined the student newspaper. That extracurricular activity had more than enough veteran student journalists eager to tear my stuff apart and show me how to put it back together. That is different from the typical English class, where a good teacher can impart some wisdom going over a sample essay on the overhead projector but cannot give quality time to every student.
Vander Ark says computers provide feedback on multiple traits but have some limitations. He says they aren’t “very good at evaluating the logic of an argument” and “aren’t designed to replace teacher feedback.”
He told me that he hopes that computer scoring “will allow teachers to assign more writing and spend more time on value-added instructional activities.” They can spend less time reading and more time explaining. Still, until the English teachers tell me that the computer guys have come up with something that does what they do, I am not going to put much faith in the printout that says I got only a 3 on the essay I thought was a brilliant 6 and doesn’t tell me why.