Revealed: why the polls got it so wrong in the British general election

Absolutely definitely Labour? Ok thanks bye! Shutterstock

Since the surprise result of the British election in May 2015, there has been plenty of speculation about why the opinion polls ahead of the vote were so wrong. On average, they put the Conservatives and Labour neck and neck, when in fact the Conservatives were nearly seven points ahead.

Hard evidence on the reasons for their failure, however, has so far been less plentiful. But a new report published today provides important evidence on what really happened.

The report presents the results obtained by the latest instalment of NatCen’s annual British Social Attitudes survey, which was conducted face to face between the beginning of July and the beginning of November last year. All 4,328 respondents to the survey were asked whether or not they voted in the May election and, if so, for which party.

What we found suggests that the main reason for the disparity between the polls and the actual election outcome is unlikely to have been failure by voters to be honest about how they planned to vote. Instead it is more likely that the problem lay in the failure of the pollsters to interview the right mix of voters in the first place.

A different approach

The British Social Attitudes survey is conducted in a very different way from the polls. Not only does interviewing take place over an extended period of four months, but during that time repeated efforts are made, as necessary, to make contact with those who have been selected for interview.

At the same time, potential respondents are selected using random probability sampling. This means that more or less anyone in Britain can be selected for interview, while their chances of being selected can also be calculated.
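As a rough sketch of the idea (hypothetical Python, not NatCen's actual design, which uses a more elaborate stratified, clustered sample), a simple random sample drawn from a complete list of addresses gives every address a known, calculable chance of selection:

```python
import random

# Hypothetical sketch of equal-probability random sampling; real
# surveys such as BSA use stratified, clustered designs in practice.
sampling_frame = [f"address_{i}" for i in range(1_000_000)]
sample_size = 4_328  # the number of BSA respondents

random.seed(2015)
sample = random.sample(sampling_frame, sample_size)

# Under simple random sampling, each unit's chance of selection is
# exactly n / N and so can be calculated in advance.
inclusion_prob = sample_size / len(sampling_frame)
print(f"Each address has a {inclusion_prob:.4%} chance of selection")
```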

Political opinion polls, by contrast, are typically conducted over just two or three days. That means they are more likely to represent the views of people who are easily contactable. True, polls conducted by phone select the numbers they ring at random, but once the phone is answered, the person who ends up being interviewed is not chosen at random. Pollsters often find that their calls go unanswered, or that whoever does pick up has no wish to talk.

At the same time, polls conducted over the internet typically draw their interviewees from a panel of people who have either volunteered to take part in internet surveys or been recruited to do so. They are certainly not drawn from the population at random. Both methods therefore involve a degree of self-selection, and this appears to favour Labour.

Meanwhile, not only did the 2015 polls underestimate Conservative support and overestimate Labour's before election day, they also produced much the same result when they went back to interview the same people after the result was in: Conservative and Labour more or less neck and neck.

In other words, the polls were still wrong even when the election was over. That means we cannot simply lay the blame for their difficulties on such possibilities as “late swing” or a failure by those who said they would vote for Labour to make it to the polling station. Instead it points to the likelihood that the polls were simply interviewing too many Labour voters in the first place.

How it happened

The British Social Attitudes survey helps shed some light on this. If, in contrast to the polls, it did manage more or less to replicate the election result, that would add considerably to the evidence that the polls were led astray because their samples were not fully representative.

Indeed, the survey replicated the result relatively successfully: its Conservative lead of 6.1 points comes very close to the actual Conservative lead over Labour of 6.6 points.

Reported vote in the 2015 British Social Attitudes survey compared with the actual election result. NatCen, Author provided

Moreover, this is not the only survey to have found plenty more Conservative voters in the election than Labour ones. Face-to-face interviews conducted for the British Election Study (also undertaken using random probability sampling) put the Conservatives as much as eight points ahead of Labour.

That two random probability samples have both succeeded where the polls largely failed strongly suggests that the problems that beset the polls did indeed lie in the character of the samples they obtained.

Lessons for the future

The British Social Attitudes data also provide some clues as to why those interviewed by the polls were not necessarily representative of Britain as a whole.

First, those who participated in polls were much more interested in the election than voters in general. The polls pointed to as much as a 90% turnout, far above the 66% that eventually did vote.

By contrast, just 70% of those who participated in the British Social Attitudes survey in 2015 said that they made it to the polling station. More detailed analysis suggests that many a poll overestimated how many younger people, in particular, would vote. And because younger voters were more Labour inclined than older ones, this created a risk that Labour’s strength would be overestimated among those who were actually going to vote.
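A little arithmetic shows how this mechanism works. The figures below are invented purely for illustration (they are not taken from the survey), but they show how assuming near-universal turnout among younger, more Labour-leaning voters flatters Labour in the headline numbers:

```python
# Hypothetical figures, invented for illustration only.
# (share of electorate, Labour share, Conservative share)
young = (0.30, 0.45, 0.30)
old   = (0.70, 0.32, 0.44)

def con_lead(turnout_young, turnout_old):
    """Conservative lead, in points, among those who actually vote."""
    w_young = young[0] * turnout_young
    w_old = old[0] * turnout_old
    total = w_young + w_old
    lab = (w_young * young[1] + w_old * old[1]) / total
    con = (w_young * young[2] + w_old * old[2]) / total
    return (con - lab) * 100

# A poll that assumes nearly everyone votes...
print(f"Uniform 90% turnout:  Con lead {con_lead(0.90, 0.90):+.1f} points")
# ...versus younger voters actually turning out far less often.
print(f"Differential turnout: Con lead {con_lead(0.45, 0.75):+.1f} points")
```

With these made-up numbers, the Conservative lead among those who actually vote is nearly twice what a uniform-turnout poll would report.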

No wonder he looks so surprised. PA/Stefan Rousseau

Second, those who are contacted most easily by polls and survey researchers appear to be more likely to have voted Labour than those who are more difficult to find. In the British Social Attitudes survey, no less than 41% of those who gave an interview the first time an interviewer knocked on their door said that they voted Labour, while just 35% said that they voted Conservative.

Only among those who required a second or (especially) a third call are Conservative voters more plentiful than Labour ones. Meanwhile, Labour's lead among first-call interviewees cannot be accounted for by their demographic profile, which perhaps helps explain why the pollsters' attempts to weight their data to match Britain's known demographic profile failed to eliminate the pro-Labour bias in their samples.
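The following sketch, again with invented numbers, illustrates why demographic weighting cannot cure this. If easy-to-reach people lean Labour within every age group, a sample made up of easy-to-reach people can match the census age profile perfectly and still overstate Labour:

```python
# Hypothetical shares, invented for illustration: within each age
# group, easy-to-reach people lean more Labour than hard-to-reach ones.
# (population share, Labour share among easy-/hard-to-reach)
groups = [(0.30, 0.48, 0.38),   # young
          (0.70, 0.36, 0.28)]   # old
EASY = 0.5  # half of each group is easy to reach

# The true Labour share averages over easy and hard within each group.
true_share = sum(p * (EASY * e + (1 - EASY) * h) for p, e, h in groups)

# A poll that reaches only the easy half, then weights so its young/old
# mix matches the population exactly, still inherits the easy-to-reach
# Labour lean inside each group.
weighted_poll = sum(p * e for p, e, _ in groups)

print(f"True Labour share:             {true_share:.1%}")     # 35.3%
print(f"Demographically weighted poll: {weighted_poll:.1%}")  # 39.6%
```

Weighting can only correct imbalances between groups it can observe; it is powerless against a bias that runs within every group.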

Of course nobody is ever going to suggest that a poll should be conducted over a period of four months, though maybe taking a little longer would prove to be in the pollsters’ own best interests, even when their role is to generate tomorrow’s newspaper headline.

But if the objective is to conduct serious, long-term and in-depth research to enhance our understanding of the public mood in Britain, the lesson is clear. Time-consuming and expensive though it may be, random probability sampling is still the most robust way of measuring public opinion. Hopefully it is a lesson that will now be appreciated by those who fund opinion research.
