Tuesday 30 April 2013

The debate about automated scoring rages on! This article looks at the accuracy and validity of computer scoring of essays and other written responses.

The thing to remember is that automated scoring applies Natural Language Processing, using a variety of algorithms to look for patterns in responses that correspond to those seen in essays already scored by humans. The system has to be trained on a sample of essays rated by human markers, so all it can do is look for features of an answer that correlate with those seen in the human-scored training samples. In other words, it is only as good as the samples it was trained on.
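To make that concrete, here is a toy sketch in Python with scikit-learn of that train-then-predict idea. It is not how e-Rater or any real scoring engine works; the essays, scores, and crude surface features are all made up for illustration.

```python
# A minimal sketch (not e-Rater's actual method) of how an automated
# scorer is trained: hand-scored essays become feature vectors, and a
# model learns the mapping from those features to the human scores.
# Essays, scores, and the feature set here are illustrative only.

from sklearn.linear_model import Ridge

def features(essay: str) -> list[float]:
    """Crude surface features: length, lexical variety, word length."""
    words = essay.split()
    return [
        len(words),                                       # essay length
        len(set(w.lower() for w in words)),               # vocabulary size
        sum(len(w) for w in words) / max(len(words), 1),  # avg word length
    ]

# Training data: essays already rated by human scorers (scores 1-6).
training_essays = [
    "The economy grew because trade expanded across regions.",
    "Good stuff happened and things got better I think.",
]
human_scores = [5.0, 2.0]

model = Ridge(alpha=1.0)
model.fit([features(e) for e in training_essays], human_scores)

# Score a new essay: the prediction can only reflect patterns that
# were present in the human-scored training samples.
new_essay = "Trade expansion drove regional economic growth."
print(model.predict([features(new_essay)]))
```

Real systems use far richer features and models, but the limitation is the same: the scorer can only reward what the human-rated training set taught it to recognise.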

The other thing to bear in mind is that the best use of the technology may not be in high-stakes testing, but in supporting teachers in the classroom by relieving their scoring load, and in online classes like MOOCs, where student numbers make human scoring impractical.

I'm sure the debate will rumble on, but automated scoring will improve as machine learning techniques get better, and I think the technology can be of real help to teachers.

Computer says no: automated essay grading in the world of MOOCs
PC Authority
ETS uses its e-Rater software in conjunction with human assessors to grade the Graduate Record Examinations (GRE) and the Test of English as a Foreign Language (TOEFL), and without human intervention for practice tests. Both these tests are high stakes – the ...
