A wide-ranging study of automated computer essay-scoring software, tested on thousands of sample essays, found a handful of programs "capable of producing scores similar to human scores," USA Today reported.
Computer scoring of essays is a much-debated topic, especially since students taking the SAT college admissions test have been required to write an essay as part of the exam since 2005, educators said.
While the National Council of Teachers of English opposes "machine scored" assessments, favoring "direct assessment by human readers," computer-scoring advocates, many of whom are also educators, say the sheer mass of essays being produced by American students cries out for something to help teachers.
Individualized grading by a human reader would be the ideal, said Mark Shermis, dean of education at the University of Akron, but sheer numbers make that unlikely.
"If every kid in the country had that kind of individualized attention, we might not be having this conversation," he said.
"They really don't understand that most kids are having a hard time communicating at all," he said of those skeptical of machine grading.
Some educators say they're concerned the use of computer grading programs will, in the end, train humans to read more like machines.
"It will get good agreement [between humans and machines] but not necessarily good writing." Les Perelman, director of Writing Across the Curriculum at MIT, said.
Computers should supplement but not replace teachers, Tom Vander Ark of Open Education Solutions, a consulting firm based in Washington State, said.
"I want to see kids writing a lot every day in every classroom across the country and I want teachers, students and parents to have the benefit of more critical feedback," he said. "I want teachers to be able to spend more time on teaching writing and not mechanics of grading."