As the American public debates the exploits of Watson (and one commentator suggests it should, among other things, sort out Charlie Sheen’s problem), how about turning essay grading over to computers? There are programs in the works to make this happen:
At George Mason University Saturday, at the Fourth International Conference on Writing Research, the Educational Testing Service presented evidence that a pilot test of automated grading of freshman writing placement tests at the New Jersey Institute of Technology showed that computer programs can be trusted with the job. The NJIT results represent the first “validity testing” — in which a series of tests are conducted to make sure that the scoring was accurate — that ETS has conducted of automated grading of college students’ essays. Based on the positive results, ETS plans to sign up more colleges to grade placement tests in this way — and is already doing so.
But a writing scholar at the Massachusetts Institute of Technology presented research questioning the ETS findings, and arguing that the testing service’s formula for automated essay grading favors verbosity over originality. Further, the critique suggested that ETS was able to get good results only because it tested short answer essays with limited time for students — and an ETS official admitted that the testing service has not conducted any validity studies on longer form, and longer timed, writing.
Such programs are only as good as the algorithms and methods behind them, and it sounds like this ETS program still has some issues. Grading is a skill that teachers develop. Much of it can be quantified and captured in rubrics, but I would also guess that many teachers develop an intuition that lets them quickly apply those factors to the work they read and grade.
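To make the verbosity critique concrete, here is a minimal, hypothetical sketch of what rubric-style automated scoring can look like. It is not ETS's actual formula; the features and weights are invented for illustration. Because every feature here grows with (or is easy to inflate by) length, a padded essay outscores a terse, original one.

```python
# Hypothetical sketch of rubric-style automated essay scoring.
# NOT ETS's e-rater formula -- features and weights are invented
# to show how a length-correlated feature can dominate the score.

import re

# Invented rubric weights; a real system would tune these on human-scored essays.
WEIGHTS = {
    "word_count": 0.5,       # verbosity proxy
    "avg_word_length": 0.3,  # vocabulary proxy
    "sentence_count": 0.2,   # organization proxy
}

def extract_features(essay: str) -> dict:
    """Compute crude surface features from the essay text."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "sentence_count": len(sentences),
    }

def score(essay: str) -> float:
    """Weighted sum of surface features -- nothing here measures originality."""
    feats = extract_features(essay)
    return sum(WEIGHTS[name] * value for name, value in feats.items())

if __name__ == "__main__":
    terse = "Automated grading rewards length. That is the whole problem."
    padded = terse + " " + " ".join(["Indeed, it truly does reward length."] * 10)
    # The padded essay says nothing new, yet scores far higher.
    print(f"terse:  {score(terse):.1f}")
    print(f"padded: {score(padded):.1f}")
```

The point of the toy example is simply that a scoring function built from surface features rewards whatever those features measure; if none of them captures originality, padding beats insight every time.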
But on a broader scale, what would happen if the right programs could be developed? Could we soon reach a point where professors and teachers would agree that a program could effectively grade writing?