Sociologist Ed Brent has developed and used a grading program for student papers:
Brent designed software called SAGrader to grade student papers in a matter of seconds. The program works by analyzing sentences and paragraphs for keywords and for relationships between terms. Brent believes the program can save teachers time by zeroing in on the main points of an essay, freeing teachers to rate papers for their use of language and style.
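The article does not describe SAGrader's internals, but the basic idea of scoring an essay on keywords and relationships between terms can be sketched as follows. This is a hypothetical illustration, not Brent's actual algorithm; the term lists, relationship pairs, and point values are all invented for the example.

```python
import re

# Assumed rubric: terms the teacher expects, and pairs of terms that
# should appear together (the "relationships" in a keyword-based grader).
EXPECTED_TERMS = {"socialization", "norms", "roles"}
EXPECTED_PAIRS = {("socialization", "norms")}

def score_essay(text: str) -> int:
    """Award points for expected terms, plus a bonus when two related
    terms co-occur in the same sentence. Point values are arbitrary."""
    sentences = re.split(r"[.!?]+", text.lower())
    found = {t for t in EXPECTED_TERMS if any(t in s for s in sentences)}
    score = 10 * len(found)
    for a, b in EXPECTED_PAIRS:
        if any(a in s and b in s for s in sentences):
            score += 5  # relationship credit: both terms in one sentence
    return score

essay = "Socialization teaches us norms. We then perform social roles."
print(score_essay(essay))  # 35: three terms found, one related pair
```

Even this toy version shows why setup is labor-intensive: someone has to enumerate the terms and relationships in advance, which is exactly the preparation work described below.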
“I don’t think we want to replace humans,” Brent says in an article in Wired. “But we want to do the fun stuff, the challenging stuff. And the computer can do the tedious but necessary stuff.”
Using the software still requires work on the teacher’s part, though. To prepare the program to grade papers, a teacher must enter all of the components they expect a paper to include. Teachers also have to anticipate the hundreds of ways a student might address each piece of an essay.
Interestingly, one person in the testing business argues that the biggest issue is not how well the software does at grading but whether people believe the program can do a good job:
But it’s tough to tout a product that tinkers with something many educators believe only a human can do.
“That’s the biggest obstacle for this technology,” said Frank Catalano, a senior vice president for Pearson Assessments and Testing, whose Intelligent Essay Assessor is used in middle schools and the military alike. “It’s not its accuracy. It’s not its suitability. It’s the believability that it can do the things it already can do.”
If this were used widely and became normal practice, it could redefine what it means to be a professor or teacher. This is not a small issue in an era when many argue that learning online or from a book could be as effective (or at least as cost-effective) as sending students to pricey colleges.
I wonder what percentage of sociologists would support using such grading programs in their own classrooms and throughout academic institutions.
As the American public debates the exploits of Watson (and one commentator suggests it should, among other things, sort out Charlie Sheen’s problems), how about turning over the grading of essays to computers? There are programs in the works to make this happen:
At George Mason University Saturday, at the Fourth International Conference on Writing Research, the Educational Testing Service presented evidence that a pilot test of automated grading of freshman writing placement tests at the New Jersey Institute of Technology showed that computer programs can be trusted with the job. The NJIT results represent the first “validity testing” — in which a series of tests are conducted to make sure that the scoring was accurate — that ETS has conducted of automated grading of college students’ essays. Based on the positive results, ETS plans to sign up more colleges to grade placement tests in this way — and is already doing so.
But a writing scholar at the Massachusetts Institute of Technology presented research questioning the ETS findings, and arguing that the testing service’s formula for automated essay grading favors verbosity over originality. Further, the critique suggested that ETS was able to get good results only because it tested short answer essays with limited time for students — and an ETS official admitted that the testing service has not conducted any validity studies on longer form, and longer timed, writing.
Such programs are only as good as the algorithm and method behind them, and it sounds like this program from ETS still has some issues. Grading is a skill that teachers develop. Much of it can be quantified and placed into rubrics, but I would also guess that many teachers develop an intuition that helps them quickly apply these important factors to the work they read and grade.
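The "quantified and placed into rubrics" part is the piece software can handle today: a rubric is essentially a weighted checklist. A minimal sketch, with invented criteria and weights (the intuition that assigns each 0–1 rating is the part that resists automation):

```python
# Hypothetical rubric: criterion -> weight (weights sum to 1.0).
# These criteria and weights are illustrative, not from ETS or SAGrader.
RUBRIC = {
    "thesis clarity": 0.30,
    "use of evidence": 0.40,
    "style and mechanics": 0.30,
}

def rubric_score(ratings: dict) -> float:
    """Weighted sum of per-criterion 0-1 ratings, scaled to 100.
    Missing criteria count as 0."""
    return 100 * sum(RUBRIC[c] * ratings.get(c, 0.0) for c in RUBRIC)

ratings = {"thesis clarity": 0.9, "use of evidence": 0.8, "style and mechanics": 0.7}
print(round(rubric_score(ratings), 1))  # 80.0
```

The arithmetic is trivial; the open question in the debate above is whether a program can supply the per-criterion ratings as reliably as a trained reader.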
But on a broader scale, what would happen if the right programs could be developed? Could we soon reach a point where professors and teachers would agree that a program could effectively grade writing?