Credentials in the cloud: how will MOOCs deal with plagiarism?

How will online courses deal with assessment and accreditation? Cloud computing image: www.shutterstock.com.au

Many are proclaiming 2012 is the year of the MOOC — Massive Open Online Course — thanks to the arrival of major players edX, Udacity and Coursera, all started by colleagues from elite American universities.

The courses are “massive” (sometimes tens of thousands of students) and “open” (free to enrol), but one big unanswered question is how these courses intend to preserve their credibility in assessment and accreditation.

Not easily, is the answer, and there are already reports of plagiarism in some MOOCs. So far the stakes are low, but what will happen when the chance to gain academic credit, or even a job, tempts even the most scrupulous student?

Manipulated MOOCs?

Most MOOCs so far offer quizzes as their main form of assessment – short multiple-choice questions with automated marking. Instructors, of course, cannot be sure who does the quiz, but some MOOCs are now choosing other types of assessment that could be more open to foul play.

Coursera, for example, includes submission of essay style answers, graded through peer assessment, because, as the online service notes: “in many courses, the most meaningful assignments do not lend themselves easily to automated grading by a computer.” But of course, when you have thousands of students you have thousands of essay assignments which cannot be marked by one lecturer.

So Coursera has turned to crowd-sourced marking, and claims students can accurately give feedback to other students. This might be true if the assessment were in a traditional course, but with few consequences what’s to stop students from skewing the system?
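To make the skewing concern concrete, here is a minimal sketch in Python of one way a platform could combine several peer grades so that a single self-interested or careless grader has limited pull. It is purely illustrative — the function name and numbers are invented, and it assumes nothing about how Coursera actually aggregates marks; the point is simply that a robust statistic such as the median is harder to skew than the mean.

from statistics import mean, median

def aggregate_peer_grades(grades, low=0, high=100):
    """Combine several peer grades into one score.

    Taking the median rather than the mean limits how far a single
    dishonest or careless grader can drag the final mark.
    """
    valid = [g for g in grades if low <= g <= high]
    if not valid:
        raise ValueError("no valid peer grades submitted")
    return median(valid)

# Four honest grades plus one grader trying to sink the essay.
peer_grades = [72, 75, 70, 74, 5]
print("mean:  ", round(mean(peer_grades), 1))         # pulled down to 59.2
print("median:", aggregate_peer_grades(peer_grades))  # stays at 72

Even a simple safeguard like this, of course, says nothing about who actually wrote the essay in the first place.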

Cheating online

In assessment, as in life, most people do the right thing, but there are still those that deliberately cheat. Coursera seems to be wide open on this, although they do ask every student to agree to an honour code every time they submit an essay assignment. But human nature being what it is, such statements do not deter scoundrels.

In its section on pedagogy, Coursera says it expects that “by having multiple students grade each piece of homework, we will be able to obtain grading accuracy comparable or even superior to that provided by a single teaching assistant.” But this claim needs scrutiny.

In “traditional” conditions, where sanctions and consequences apply, there’s greater likelihood that students will provide proper peer review of others’ work. But crowdsourcing can’t be relied upon when self-interest is at play.

For example, Trip Advisor is a great idea for booking accommodation, but the holiday maker should always bear in mind that hotel owners can covertly rate their own and others’ properties according to vested interests.

The crowd is not always right; nor is it always impartial.

Moral hazards

All this sounds as if I am being unfairly critical of Coursera: on the contrary, I applaud Coursera for having the courage to try essay-style assessment in the cloud on such a scale. It’s a great challenge, even in traditional modes of higher education.

Let’s not forget that in our current (mostly on campus) educational institutions, assessment is hardly a watertight process. Even in invigilated exams, people cheat, collude, and so on. Not often, but it happens enough for it to be an issue that we can’t dismiss.

In assessments that are not invigilated (essays, reports, take home exams), the person gaining the credits is not always necessarily the sole author of the work. Even so, we currently give credentials in all fields knowing about these uncertainties and mostly we get it about right.

Assessing assessments

Whether on campus or in the cloud, assessment is the most high stakes part of the business of education. The challenge of assessment is to be able to make meaningful judgements about graduates’ current and future capacity to perform within their intended professions.

Authentic assessment is what we aim for: to set tasks which are as similar as possible to the sort of challenges the new graduate will be expected to meet. Perhaps the most authentic assessment we ever undergo is the job interview: a selection panel reads our claim of evidence of achievements, then we get an hour to make our case in the face of random questions.

Perhaps then at least some key assessments during any degree could: first, emulate a job interview and be an oral test which requires the candidate to think on their feet; second, be face to face, so the examiners can see who’s answering the question and whether they are receiving help from others; and third, be on a user-pays basis.

Face-to-face in the cloud

How does this all relate to the MOOCs? Maybe in the world of free content, assessment is the part that you pay for (already an option at Udacity).

But maybe this assessment should also include face-to-face tests online? FaceTime, Skype and Jabber are all programs that could enable this, and verifying your identity in that mode is not an insurmountable problem.

If MOOCs can be done at scale, so can assessment — but it should be a separate process, on a user-pays basis, and it should include face-to-face assessment where possible.

The (MOOC) genie is out of the bottle. The challenge now is to add new ways of doing better quality face-to-face authentic assessment in the cloud.

6 Comments

  1. Gavin Moodie

    Adjunct professor at RMIT University

    I don't know whether the normal expectations of retention and completion apply to MOOCs, but if they do, MOOCs have dreadfully high rates of attrition. Nelson (2012) calculated a completion rate of only 12% for students of Stanford's online artificial intelligence subject.

    Nelson, Robert (2012) Comment on Marginson, Simon (2012) Online open education: yes, this is the game changer, The Conversation, 16 August 2012, retrieved 16 August 2012 from

    http://theconversation.edu.au/online-open-education-yes-this-is-the-game-changer-8078#comment_63173

    1. Mark Smithers

      logged in via Twitter

      In reply to Gavin Moodie

      High attrition rates don't mean anything with MOOCs. I have signed up for several MOOCs but not finished them. It doesn't mean the course is bad (though some are). There are all sorts of reasons why people start a MOOC but don't finish it, including a huge amount of curiosity during this highly experimental phase. Incidentally, 12% of 160,000 is still an enormous number of completions from a single course offering.

  2. Dennis Alexander

    logged in via LinkedIn

    Thanks Bev. Assessment is the key to the credibility of any credentialed education activity. Funnily enough, the Council for Aid to Education (CAE) performance task methodology used in the Collegiate Learning Assessment (CLA) is known and can be developed and used by people willing to pay for a little training. The CLA itself has moved to an automated marking platform using, I think, Latent Semantic Analysis (LSA) and multi-rater training technology to assess free-text inputs as well as multiple choice type assessments.
    Even with all of this technology, the key to credential credibility will remain identity verification:
    "Was it the person who is receiving the credential who undertook the learning and assessment?"

  3. David Glance

    Director of Innovation, Faculty of Arts, Director of Centre for Software Practice at University of Western Australia

    Pearson has announced that it will offer proctored exams for $89:

    http://www.pearsonvue.com/about/release/12_09_06_edx.asp

    The point of the examinations in MOOCs is pedagogy, not credentials, as no reasonable person would take the results seriously. That does not stop people adding that final step, as Pearson has done. Universities in Europe, Canada and now the US will also accept the courses for credits if the exams are proctored.

    Regarding attrition rates: the AI course is actually quite hard, so the attrition rates in other courses wouldn't be as high. But I guess the question is, even if it is, 12% of 160,000 is still a large number.

  4. Anne Forster

    eLearning Consultant

    Expecting the commercial infrastructure enablers of MOOCs and other online learning products to also take on the non-scalable aspects of managing plagiarism, standards and quality of assessment might be expecting too much.

    Recognition of competencies and knowledge has long been a challenge for higher education, and a growing demand worldwide with increasing numbers of international students. MOOCs might be leading the current media hype but the increase in digital learning opportunities, formal…

  5. Mark Smithers

    logged in via Twitter

    I agree that, in the end, you probably need a formal assessment in a testing centre that is paid for by the student (for a reasonable fee). This is the service that Pearson are providing. I am still intrigued by Coursera's peer assessment model and I hope it can work. I don't think it's feasible to provide face-to-face assessment for very large MOOCs. The logistics just don't work.
    Incidentally, I think there are technologies currently used for assessing skills in remote areas, such as video recording glasses, that could provide a more reliable method of verifying authenticity in a wider range of areas. New opportunities may arise from technologies like Google Glasses in the future.
