Scoring student papers with a computer?

An interesting article from Education Week (subscription may be required) is headlined "Study Supports Essay-Grading Technology."

Can computers really do this?

Can machines do this better than humans?

Any decent word processor today includes grammar and spelling checkers, which cover mechanics that matter in any good paper. But those tools don't evaluate whether the meaning conveyed is accurate or well presented.

Can more sophisticated computer systems, going well beyond current word processor technology, really evaluate the overall presentation and meaning conveyed in a paper?

EdWeek's discussion of that broader question is certainly interesting.

Here are a few other points to consider:

In the real world, grading of students' written answers on Kentucky's now-defunct KIRIS and CATS assessments was always problematic. To keep costs from exploding, part-time human graders were hired at relatively low hourly wages.

Those part-time graders were given very little time to consider each answer. At best, the process was rushed. At worst, there simply wasn’t enough time for graders to do the job adequately.

Grading of the longer written pieces in student writing portfolios was also problematic. Every audit conducted on portfolio grading showed that significant numbers of students received the wrong scores. In the end, after nearly two decades of trying, Kentucky had to abandon portfolios for assessment. The scoring was never good enough.

So, while machine scoring might not be perfect, neither is human scoring.

That leaves a big question: which process will be more affordable and workable in the future? Given the history of human scoring of student papers in Kentucky, and given the caliber of some of the organizations trying to develop machine scoring, I'm not placing any bets.