Tick-box surveys aren’t the only way to measure student satisfaction


It’s come as a surprise to many in higher education: students are increasingly satisfied with their experience of English universities. A new report published by the Higher Education Funding Council for England has analysed data collected from two million students over the last nine years for the National Student Survey (NSS).

Overall, students are 5% more satisfied than they were a decade ago – with most improvements coming in academic support, assessment and feedback, and university organisation. The survey also broke down the groups that have been the most and least satisfied: black African students were more satisfied overall than white students, but black Caribbean students were less so.

When it comes to subject, veterinary science students are among the most satisfied, while students of mass communication are the least.

The findings are surprising in the context of two contradictory but common perceptions. First, they fly in the face of much anecdotal evidence I’ve heard that students are less satisfied as a direct consequence of paying £9,000-a-year fees for their studies. Second, and again anecdotally, some staff working in higher education think university managers use poor NSS results to browbeat their staff. What it suggests to me is that a different approach to gathering student feedback is necessary.

No longer ignored

It seems clear that the NSS has forced institutions to take student feedback about their experience seriously. Many of us can recall the days when students’ opinions were held to be of little consequence. Often, students were seen as a barrier to getting on with proper work, rather than people to be consulted. Today, the student voice through feedback surveys is regarded with a greater degree of attention.

By the 1980s, feedback from students was being collected assiduously on every aspect of their experience. But nothing was being done with it – perhaps because it was not seen as important. Only a few institutions, such as the University of Central England, had clear and effective processes through which student feedback, collected annually, informed improvements to the institutional environment.

The NSS is currently being reviewed, and an independent report has been published into its design. But we have yet to see what changes, if any, will be made.

Yet there are problems with the survey. In particular, it has created a league table built on the very faulty basis of a customer feedback survey. Huge falls in position on the (quite small) league table are often the consequence of very small changes in aggregate scores. At a more local level, academics often complain that students condemn programmes without having fully attended or engaged with them.

Ask me now, not at the end

The NSS is also problematic because it is principally a summary evaluation of the student experience at the end of a degree – by which point little can be done to improve the experience of individuals who are moving on. Research in my own institution is beginning to indicate that evaluation processes taking place at strategic points during a programme, with a view to enhancing the student experience, are much more popular than end-of-course questionnaires. Both staff and students are often cynical about what they view as “tick-box” processes.

What increasingly emerges as a problem in discussions about the NSS is that it focuses on students’ experience of a programme, and not on wider issues. Evaluation of particular course modules within my own university shows that surveys are felt to be largely a waste of everyone’s time, simply because what works best at this level is dialogue between students and staff.

Instead of a series of post-course surveys, I would recommend a radical (but in fact rather old) solution. Scheduled discussion sessions between staff and students, managed well, can be far more effective in stimulating change than a questionnaire survey.

This is not to argue that large-scale surveys have no place in modern higher education. Experience of implementing student feedback surveys during the 2000s has shown me that they work well at institutional level if they pick up on the wider student experience of the institution and are used to address issues of concern. This cannot be done at national level, because each institution is different. A survey works better when it is relevant to the needs and experience of the students themselves.

A dual solution – dialogue between staff and students alongside a large-scale survey at institutional level – has worked before at several universities and continues to do so across Europe. For example, it has worked well at the University of Lund in Sweden, which adopted the University of Central England model in the early 1990s.

Sadly, our continuing enthusiasm for university league tables of all kinds in the professional and popular press may mean that we are likely to be lumbered with variations of the NSS for years to come.
