
Can you trust the EU referendum polls?

In the lead-up to the British general election in 2015, the polls were united in suggesting that the result was too close to call and that a coalition government was by far the most likely outcome. With the 1992 polling disaster a distant memory, few observers questioned this prevailing view at the time.

As the EU referendum approaches, things are very different. Not only is the 2015 polling disaster still a fresh memory, but the polls are far from consensus on the likely outcome.

The recent London mayoral election was generally a success for the pollsters, but the referendum is a different and higher-profile challenge. Recent polling of vote intentions for the EU referendum can be distilled into two main stories. The first is of a consistent difference between telephone polls and internet polls. The former show Remain comfortably in the lead and the latter suggest the race is neck-and-neck.

The second is of a gradual convergence, as support for Remain has declined in telephone polls since a high point in the middle of 2015.

[Figure: Phone and online polls show significant differences.]

This convergence notwithstanding, there remains wide variability in poll estimates. The 12 polls released in the past fortnight have ranged from a 26-point Remain lead (Ipsos MORI) to a four-point lead for Leave (ICM).

One likely reason for this variability is the diversity of methods pollsters are using. In addition to substantial differences in how respondents are recruited – which remains something of a black box – pollsters also differ in the variables they use to weight their samples, in how they treat don’t knows and refusals, and in whether and how they account for turnout likelihood.

And pollsters continue to make changes to their methodologies with the campaign in full swing. Vote intention polling for parliamentary elections has a strong empirical and theoretical lineage to draw on but the EU referendum is a one-off, cross-party event, making the pollsters’ job considerably harder. These methodological idiosyncrasies probably explain a good portion of the volatility.
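One of those levers, sample weighting, is simple to make concrete. The sketch below shows post-stratification on a single variable, with entirely hypothetical population and sample figures; real pollsters typically weight on several variables at once (often via raking), so this is a toy illustration rather than any pollster’s actual procedure.

```python
# A toy post-stratification example: each respondent is weighted by the
# ratio of their group's population share to its share of the sample.
# All figures are hypothetical, for illustration only.

population_shares = {"graduate": 0.27, "non_graduate": 0.73}  # assumed population
sample_counts = {"graduate": 400, "non_graduate": 600}        # assumed raw sample

n = sum(sample_counts.values())
weights = {
    group: population_shares[group] / (sample_counts[group] / n)
    for group in sample_counts
}
print(weights)
# graduates weighted down (~0.68), non-graduates weighted up (~1.22)
```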

The convergence in the polls is consistent with the pattern of herding observed in the 2015 pre-election polls. Herding may come about through pollsters making methodological adjustments in an effort to get the “best” estimate. The industry is currently involved in an intense debate over the merits of different methodologies, and this is to be welcomed. A question that must be asked, though, is how the success or otherwise of methodological tinkering should be evaluated. Until we know the outcome of the vote, beliefs about the current state of voter preferences are likely to be influenced by recently published polls.

One does not need to be a conspiracy theorist to recognise that post-hoc decisions over methodological adjustments can lead to convergence in polls during a campaign.

Online vs. phone estimates

The issue that is getting many people particularly hot under the collar at the moment is the difference between polling modes. A range of theories has been advanced about which polls are the more accurate. Currently, there is little agreement beyond the fact that one (or both) must be wrong.

Experiments by ComRes, Populus, and ICM have confirmed that phone polls show substantially higher support for Remain. Analysts Matt Singh and James Kanagasooriam suggest this is because online polls include too few socially liberal voters, who tend to support Remain. They also contend that part of the difference results from phone polls eliciting fewer “don’t know” responses than online polls. The upshot is that when “squeezed”, the undecideds break disproportionately to Remain.

The latter thesis gained some further support from an Ipsos MORI poll, which found that initial “don’t knows” broke 3:1 to Remain when pushed for an answer – although as John Curtice has pointed out, other polls have not found this effect.
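To make the “squeeze” arithmetic concrete, the sketch below applies the 3:1 split reported by Ipsos MORI to an invented raw poll, showing how reallocating don’t knows can widen a narrow Remain lead.

```python
# Hypothetical illustration of squeezing undecideds: reallocating
# don't knows 3:1 to Remain (the split in the Ipsos MORI poll above)
# turns a narrow raw lead into a comfortable one.
# The raw shares below are invented for illustration.

remain, leave, dont_know = 0.44, 0.42, 0.14

remain_final = remain + dont_know * 0.75  # 3 parts to Remain
leave_final = leave + dont_know * 0.25    # 1 part to Leave

print(f"Raw lead: {remain - leave:+.0%}")                    # +2%
print(f"After squeeze: {remain_final - leave_final:+.0%}")   # +9%
```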

So, game, set and match to the phone polls? Not so fast. YouGov has published its own research which suggests that phone polls over-represent university graduates, even after quota controls and weighting adjustments.

This matters for EU referendum polling because education is a very strong predictor of support for Remain: over-representing graduates biases estimates away from Leave. In a phone poll with a field period of two weeks, rather than the standard two or three days, YouGov obtained a more accurate education distribution and, notably, a three-point lead for Leave.
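A back-of-the-envelope calculation shows why this skew matters. All of the figures below are invented for illustration, but the mechanism is the one YouGov describes: if graduates lean Remain and are over-represented, an uncorrected estimate overstates Remain support by several points.

```python
# Hypothetical illustration of the education skew: if graduates lean
# Remain, a sample with too many graduates overstates Remain support.
# All shares below are invented for illustration.

grad_share_sample = 0.40   # graduates over-represented in the raw sample
grad_share_pop = 0.27      # assumed true population share
remain_grad, remain_nongrad = 0.60, 0.40  # Remain support by education

unweighted = grad_share_sample * remain_grad + (1 - grad_share_sample) * remain_nongrad
weighted = grad_share_pop * remain_grad + (1 - grad_share_pop) * remain_nongrad

print(f"Unweighted Remain share: {unweighted:.1%}")  # 48.0%
print(f"Weighted Remain share:   {weighted:.1%}")    # 45.4%
```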

Yet it is difficult to conclude from YouGov’s single poll, interesting though it is, that phone polls in general are subject to the education skew and the consequent tendency to over-estimate support for Remain. The education problem identified by YouGov refers directly to only two polls conducted by Populus in February and March and, even then, only the February vote intention estimate seems unduly affected. It is also worth noting that Populus’ February online poll showed exactly the same education skew, yet produced a two-point lead for Leave. In short, the YouGov analysis has posed some interesting questions for advocates of phone methods but is some way from definitive in its conclusions.

Can we get any additional leverage on the phone vs. online debate? The inquiry into the 2015 general election polls relied heavily on comparisons with the “gold standard” British Election Study (BES) and it is possible to benchmark EU referendum polls in the same way.

In the middle of 2015, when its face-to-face fieldwork was conducted, the BES put the race at 47-30% in favour of Remain. Contemporaneous polls showed an average of 43-36% for online, whereas the figure for phone polls was 59-28%. Thus, the BES showed a larger Remain lead (+17) than online polls (+7) but a smaller one than phone polls (+31).
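Expressed as leads and gaps, the comparison is easier to see. A quick computation from the figures just quoted puts the online average ten points from the BES benchmark and the phone average fourteen points away:

```python
# Leads implied by the figures above, and each mode's distance from
# the BES face-to-face benchmark.

shares = {"BES": (47, 30), "online": (43, 36), "phone": (59, 28)}
leads = {src: remain - leave for src, (remain, leave) in shares.items()}
print(leads)  # {'BES': 17, 'online': 7, 'phone': 31}

for mode in ("online", "phone"):
    print(f"{mode} vs BES: {abs(leads[mode] - leads['BES'])} points")
# online vs BES: 10 points; phone vs BES: 14 points
```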

While there are of course many caveats required here, this comparison suggests that the true picture may lie somewhere between the two modes, possibly somewhat closer to online. At the very least it suggests a good deal of caution is needed before concluding that one method is right and the other wrong. That will only be known for sure on June 24.
