Can you really identify a person based on the way they type? e y e / s e e /flickr

Coursera to fight online cheating – but do biometrics even work?

There are two elephants in the digital classroom. Or, to be more specific, two big questions:

  • Are Massive Open Online Courses (MOOCs) economically sustainable, a function of the interaction between market demand and delivery costs?
  • Are the students that get credit for completing a MOOC unit the same people who “did” the unit?

It was thus interesting to see Coursera’s announcement late last week that it will be using a mix of webcams and “keyboard dynamics” or stylometrics – the supposedly unique way people hit a keyboard or wield a pen – to deal with online cheating.

The announcement made for a nice media release but unfortunately it won’t make the elephant go away. It is the latest in a history of over-enthusiastic announcements about how stylometrics and other biometrics technologies will vanquish identity theft, academic cheating and other problems.

In essence, Coursera and its Australian fans are asking the wrong questions.

The past 30 years have seen waves of enthusiasm for biometrics, followed by waves of disillusionment among investors and academic grant providers. The aim of biometrics is to provide low-cost and authoritative (i.e. accurate and forgery-resistant) identification of individuals based on innate and stable physical characteristics.

That identification typically involves comparison of a “sample” or “test” with previously registered information that is assumed to be authentic. Biometrics is concerned with what people “are”, rather than with what they know (e.g. a password) or what they hold (e.g. a card).
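The sample-versus-template idea behind keystroke dynamics can be sketched in a few lines. The following toy example (it is an illustration of the general technique, not Coursera's actual algorithm, and the timing values and tolerance are invented) enrols a "template" of inter-key delays from a few typing samples, then accepts or rejects a new sample by how far its rhythm deviates from that template:

```python
# Toy sketch of keystroke-dynamics matching: enrol a template of
# inter-key timings, then compare a fresh sample against it.
# Not Coursera's actual method; values and tolerance are invented.

from statistics import mean

def enrol(samples):
    """Average several typing samples (lists of inter-key delays in
    milliseconds for the same passphrase) into one enrolment template."""
    return [mean(delays) for delays in zip(*samples)]

def matches(template, sample, tolerance_ms=25.0):
    """Accept the sample if its mean deviation from the template falls
    under the tolerance. The tolerance trades false accepts (impostors
    let in) against false rejects (genuine users locked out)."""
    deviation = mean(abs(t - s) for t, s in zip(template, sample))
    return deviation < tolerance_ms

# Enrolment: three samples of the same passphrase by the "real" student.
template = enrol([
    [120, 95, 140, 110],
    [125, 90, 150, 105],
    [115, 100, 145, 115],
])

print(matches(template, [122, 96, 142, 108]))  # similar rhythm: True
print(matches(template, [200, 60, 300, 40]))   # different rhythm: False
```

The single tolerance threshold is exactly where such systems are attacked or fail: set it tight and students with a sore wrist get locked out; set it loose and a hired typist slips through.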

We have grown comfortable with DNA and with the fingerprint biometric in widespread use for forensic purposes since the 1920s. We are increasingly encountering digital palm scans, retina scans and iris scans for perimeter control in businesses, government offices and research institutions.

Access to some pathology labs, for example, is dependent on your “print” matching a previous print in the digital register maintained by the lab. Matching is networked, meaning it might operate 24/7 and that there is no need for a security guard – sleepy, corrupt or otherwise – to check the often blurry photo ID cards used by people who want to get through the doorway.

We are less familiar with the more cutting-edge, or simply silly biometric technologies that appear in academic literature or in pitches to venture capitalists.

Fingerprints never lie, right? Concentrated Passion/Flickr

Some of those technologies are a solution in search of a problem. Others are not cost effective and are unlikely to become competitive in future. Researchers have promoted the idea of a voice biometric (identifying people on the basis of how they speak), a gait biometric (how they walk), facial architecture (the shape of an ear or the ratios between eye, chin, nose and ear), knuckle creases, lip or eye movement, fingernail scans, skull resonance and even smell.

Governments use face recognition in passports. Banks have trialled and abandoned identification using electronic pens or “keyboard signatures”. Thumbprint access control for personal laptops or mobile phones is recurrently hyped, particularly in the slow news season, and then dies.

All biometric identification can be subverted, and much of it has never migrated from the lab to the market. The smell biometric seems to fail if individuals have eaten too much spicy food, even when the sample is collected by holding a vacuum nozzle to the armpit. Voice biometrics start to get fuzzy if people have a cold or a sore throat.

Digital fingerprint recognition has been subverted using illicitly acquired “copies” of genuine prints, such as a latex thumb or even a gelatine lolly. In the sci-fi film Gattaca, Ethan Hawke’s character reminded us that people can supply associates with hair, urine and other biological matter.

Registers can of course be compromised with, for example, a “fake” identity being undetected because it is an exact match to a corresponding “fake” identity that appears in the register (courtesy of a bribe, relative or friend, say).

So what does this mean for Coursera, and by extension for Australian universities with visions of simultaneously boosting student numbers and cutting costs through virtual delivery of courses to offshore students?

Publicly funded universities are unlikely to provide MOOCs unless they can see the money. Employers and potential students are unlikely to regard MOOCs and Specialised Open Online Courses (SOOCs) as more than free entertainment unless any academic assessment is credible – that is, not readily subverted.

Preventing subversion costs money and requires a real engagement with student behaviour. Students in some overseas markets have a real incentive to cheat and will do so. Misbehaviour may be facilitated by service providers in their countries. Enterprises that already assist students with the International English Language Testing System (IELTS) – that entry ticket to Australia – will move upmarket.

Coursera’s authentication regime is readily subverted. We can be sure that someone is online. But we can’t be sure that the person online is the person being awarded the certificate, that the same person has been online throughout the unit, or that the person isn’t being cued by an associate off-camera.

There’s no indication that Coursera and partners will invest in authentication by continuous surveillance but one thing’s clear: stylometrics are not a credible fix.

Beware, Vice-Chancellors, of a trip down the biometrics rabbit hole.
