Tuesday 30 April 2013

The debate about automated scoring rages on! This article addresses the accuracy and validity of computer scoring of essays and other written responses.

The thing to remember is that automated scoring applies Natural Language Processing, using a variety of algorithms to look for patterns in responses that correspond to those seen in essays that have been scored by humans. The system has to be trained on a set of essays already rated by humans, so all it can do is look for features of an answer that correlate with those seen in the human-scored training samples. In other words, these systems are only as good as the samples they were trained on.
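To make that concrete, here is a minimal sketch of the idea in Python with scikit-learn. This is my own illustration, not how e-Rater or any real engine works: the essays, scores, and the choice of word-frequency features are all invented placeholders standing in for the much richer linguistic features a production system would extract.

```python
# Toy automated scorer: learn to predict human-assigned scores from
# surface features of the text, then apply the model to a new essay.
# The model can only reproduce patterns present in the training sample,
# which is exactly why it is no better than the essays it was trained on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Invented training data: essays already scored by human raters.
training_essays = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants eat sunlight and somehow make food from it.",
    "Photosynthesis occurs in the chloroplasts, where light drives it.",
]
human_scores = [5.0, 2.0, 4.5]

# Word and word-pair frequencies stand in for real linguistic features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    Ridge(alpha=1.0),
)
model.fit(training_essays, human_scores)

new_essay = "Photosynthesis is the process by which plants capture light."
predicted = model.predict([new_essay])[0]
print(f"Predicted score: {predicted:.1f}")
```

The point of the sketch is the training step: the only notion of "quality" the model has is whatever correlates with the human scores it was fitted to.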

The other thing to bear in mind is that perhaps the best use of the technology is not in high-stakes testing, but in supporting teachers in the classroom by relieving their scoring load, and in online classes such as MOOCs, where student numbers make human scoring impractical.

I'm sure the debate will rumble on, but automated scoring will improve as machine learning techniques get better, and I think the technology can be of real help to teachers.

Computer says no: automated essay grading in the world of MOOCs
PC Authority
ETS uses the e-Rater software, in conjunction with human assessors, to grade the Graduate Record Examinations (GRE) and Test of English as a Foreign Language (TOEFL), and without human intervention for practice tests. Both these tests are high stakes – the ...


Saturday 27 April 2013

Questioning the System

A colleague sent me a link to this video, in which Sullibreezy questions the education system and the value of exams.


I Will Not Let An Exam Result Decide My Fate||Spoken Word
http://www.youtube.com/watch?v=D-eVF_G_p-Y

He raises some valid points about the purpose of education and the way we assess learning. I like his questioning of why we treat all students as if they were the same when they differ greatly in their strengths and weaknesses. The question he doesn't answer is what education should be in the future. That is for us to decide.

Thursday 11 April 2013


A couple of days ago I was sitting in on the educational technology strand sessions of the 2013 NARST conference and getting frustrated!

One wonderful thing is that many researchers are developing interactive learning and serious-gaming environments that teach kids content knowledge and skills in science. And they are collecting oodles of log data from those environments.

The other good thing is that they are aware they need to be assessing learning as the kids go through the experience, but most are not well informed about how to do so. I keep seeing people trying to mine the data for patterns of learning in order to decide where to embed assessments and what those assessments should measure.

I wish more of them would think about assessment using the Evidence Centred Design (ECD) approach. If they thought about what they want kids to know and be able to do, then determined what evidence they would accept that students have that knowledge and those skills, and then designed tasks to elicit that kind of evidence, they wouldn't need to be data mining to find out what to measure. They need educational measurement experts on their projects!
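For what it's worth, here is a rough sketch of how that ECD chain of reasoning might look in code. Everything in it is invented for illustration: the claim, the "run_trial" log events, and the evidence rule are hypothetical, and a real ECD design would be far richer. The point is the direction of the design: claim first, then evidence, then task, so the log data has something definite to answer.

```python
# An ECD-style assessment argument, worked backwards from the claim:
# claim -> evidence that would support it -> task designed to elicit it.
# Event names, settings, and the scoring rule are invented placeholders.
from dataclasses import dataclass

@dataclass
class AssessmentArgument:
    claim: str               # what we want kids to know and be able to do
    evidence_rule: callable  # what counts as evidence in the log data
    task: str                # the activity designed to elicit that evidence

def controls_variables(log: list[dict]) -> bool:
    # Hypothetical evidence rule: between consecutive trials, the student
    # changed exactly one variable at a time (a controlled comparison).
    trials = [e for e in log if e["event"] == "run_trial"]
    return all(
        len(set(a["settings"].items()) ^ set(b["settings"].items())) == 2
        for a, b in zip(trials, trials[1:])
    )

argument = AssessmentArgument(
    claim="Student can design a controlled experiment",
    evidence_rule=controls_variables,
    task="Simulation: find which factor affects plant growth",
)

# Scoring becomes applying the evidence rule to the log, not mining the
# log after the fact to discover what might be worth measuring.
log = [
    {"event": "run_trial", "settings": {"light": 1, "water": 1}},
    {"event": "run_trial", "settings": {"light": 2, "water": 1}},
]
print(argument.claim, "->", argument.evidence_rule(log))
```

Designed this way round, the game's logging can be built to capture exactly the observables the evidence rules need, rather than hoping the patterns turn up in the data afterwards.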