Many are proclaiming 2012 the year of the MOOC (Massive Open Online Course), thanks to the arrival of major players edX, Udacity and Coursera, all started by academics from elite American universities.
The courses are “massive”, with sometimes tens of thousands of students, and “open” (free to enrol), but one big unanswered question is how these courses intend to preserve their credibility in assessment and accreditation.
Not easily, is the answer, and already there are reports of plagiarism in some MOOCs. So far the stakes are low, but what will happen when the chance to gain academic credit, or even a job, tempts even the most scrupulous student?
Most MOOCs so far offer quizzes as their main form of assessment: short, automatically marked multiple-choice questions. Instructors, of course, cannot be sure who completes the quiz, but some MOOCs are now choosing other types of assessment that could be more open to foul play.
Coursera, for example, includes submission of essay-style answers, graded through peer assessment, because, as the online service notes: “in many courses, the most meaningful assignments do not lend themselves easily to automated grading by a computer.” But of course, when you have thousands of students you have thousands of essay assignments, which no single lecturer could mark.
So Coursera has turned to crowd-sourced marking, and claims students can accurately give feedback to other students. This might be true if the assessment were in a traditional course, but with few consequences what’s to stop students from skewing the system?
In assessment, as in life, most people do the right thing, but there are still those that deliberately cheat. Coursera seems to be wide open on this, although they do ask every student to agree to an honour code every time they submit an essay assignment. But human nature being what it is, such statements do not deter scoundrels.
In its section on pedagogy, Coursera says it expects that “by having multiple students grade each piece of homework, we will be able to obtain grading accuracy comparable or even superior to that provided by a single teaching assistant.” But this claim needs scrutiny.
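Coursera has not published how it combines multiple peer grades into a final mark, so as a minimal sketch, assume each submission receives several peer grades and the course takes their median. This illustrates both sides of the argument: a robust aggregate damps one careless or malicious grader, but it cannot help if enough graders collude.

```python
import statistics

def aggregate_peer_grades(peer_grades):
    """Combine several peer grades into one score.

    Hypothetical scheme: taking the median rather than the mean
    limits the influence of a single outlier grader. Coursera's
    actual aggregation method is not public.
    """
    if not peer_grades:
        raise ValueError("need at least one peer grade")
    return statistics.median(peer_grades)

# Four honest graders and one saboteur awarding a zero:
print(aggregate_peer_grades([8, 7, 8, 9, 0]))  # prints 8
```

Note that if three of the five graders agreed to inflate or sink a mark, the median would follow them, which is the "skewing the system" risk raised above.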
In “traditional” conditions, where sanctions and consequences apply, there’s greater likelihood that students will provide proper peer review of others’ work. But crowdsourcing can’t be relied upon when self-interest is at play.
For example, TripAdvisor is a great idea for booking accommodation, but the holidaymaker should always bear in mind that hotel owners can covertly rate their own and others’ properties according to vested interests.
The crowd is not always right; nor is it always impartial.
All this sounds as if I am being unfairly critical of Coursera: on the contrary, I applaud Coursera for having the courage to try essay-style assessment in the cloud on such a scale. It’s a great challenge, even in traditional modes of higher education.
Let’s not forget that in our current (mostly on campus) educational institutions, assessment is hardly a watertight process. Even in invigilated exams, people cheat, collude, and so on. Not often, but it happens enough for it to be an issue that we can’t dismiss.
In assessments that are not invigilated (essays, reports, take home exams), the person gaining the credits is not always necessarily the sole author of the work. Even so, we currently give credentials in all fields knowing about these uncertainties and mostly we get it about right.
Whether on campus or in the cloud, assessment is the most high stakes part of the business of education. The challenge of assessment is to be able to make meaningful judgements about graduates’ current and future capacity to perform within their intended professions.
Authentic assessment is what we aim for: to set tasks which are as similar as possible to the sort of challenges the new graduate will be expected to meet. Perhaps the most authentic assessment we ever undergo is the job interview: a selection panel reads our claim of evidence of achievements, then we get an hour to make our case in the face of random questions.
Perhaps then at least some key assessments during any degree could: first, emulate a job interview and be an oral test that requires the candidate to think on their feet; second, be face-to-face, so the examiners can see who is answering the question and whether they are receiving help from others; and third, be on a user-pays basis.
Face-to-face in the cloud
How does this all relate to the MOOCs? Maybe in the world of free content, assessment is the part that you pay for (already an option at Udacity).
But maybe this assessment should also include face-to-face tests online? FaceTime, Skype and Jabber are all programs that could enable this, and verifying a student’s identity in that mode is not an insurmountable problem.
If MOOCs can be done at scale, so can assessment. But it should be a separate process, on a user-pays basis, and it should include face-to-face assessment where possible.
The (MOOC) genie is out of the bottle. The challenge now is to add new ways of doing better quality face-to-face authentic assessment in the cloud.