Report card: how well did UK election forecasters perform this time?

When Theresa May announced on April 18 that she would call a snap general election, most commentators viewed the precise outcome of the vote as little more than a formality. The Conservatives were sailing more than 20 points ahead of the Labour party in a number of opinion polls, and most expected them to be swept back into power with a hefty majority.

Even after a campaign blighted by manifesto problems and two terrorist attacks, the Conservatives were by election day still comfortably ahead in most polls and in the betting markets. According to the spread betting markets, they were heading for an overall majority north of 70 seats, while a number of forecasting methodologies projected that Jeremy Corbyn’s Labour could end up with fewer than 210 seats.

In particular, an analysis of the favourite in each of the seats traded on the Betfair market gave the Tories 366 seats and Labour 208. The PredictWise betting aggregation site gave the Conservatives an 81% chance of securing an overall majority of seats, in line with the large sums of money trading on the Betfair exchange.
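To see the spirit of that favourite-counting method, here is a minimal Python sketch: take the shortest-priced party in each constituency market and tally the results. The constituency names and decimal odds below are invented for illustration, not actual Betfair prices.

```python
from collections import Counter

def tally_favourites(seat_odds):
    """Count projected seats by taking the shortest-priced (favourite)
    party in each constituency market.

    seat_odds maps a constituency to {party: decimal_odds}.
    """
    tally = Counter()
    for seat, odds in seat_odds.items():
        favourite = min(odds, key=odds.get)  # shortest odds = favourite
        tally[favourite] += 1
    return tally

# Illustrative example with made-up prices:
sample = {
    "Kensington": {"Con": 1.2, "Lab": 4.5},
    "Canterbury": {"Con": 1.1, "Lab": 7.0},
}
print(tally_favourites(sample))  # Counter({'Con': 2})
```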

The PredictIt prediction market, meanwhile, estimated just a 15% chance that the Tories would secure 329 or fewer seats in the House of Commons (with 326 technically required for a majority), while the Oddschecker odds comparison site rated a “hung parliament” result an 11/2 chance (an implied probability of 15.4%). Only the Almanis crowd forecasting platform expressed any real doubt, putting the chance of a Conservative overall majority at a relatively paltry 62%.
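Converting a fractional price like 11/2 into an implied probability is simple arithmetic: odds of a/b imply a probability of b/(a+b), here 2/13 ≈ 15.4%. A short Python version, which ignores the bookmaker’s margin built into quoted prices:

```python
def implied_probability(numerator, denominator):
    """Convert fractional odds (e.g. 11/2) to an implied probability.

    Odds of a/b pay a units of profit per b staked, so the implied
    probability is b / (a + b). Bookmaker margin is ignored here.
    """
    return denominator / (numerator + denominator)

print(f"{implied_probability(11, 2):.1%}")  # 15.4% -- the hung parliament price
```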

In reality, the Conservative party lost more than a dozen seats net, ending up with 318 – eight short of a majority. Labour secured 262 seats, the Scottish National party 35, and the Liberal Democrats 12. Their vote shares were projected at 42.4%, 40%, 3% and 7.9% respectively.

So did the opinion polls do any better than the betting markets? With the odd exception, no.

Out of the ballpark

In their final published polls, ICM put the Tories on 46%, 12 points ahead of Labour. ComRes had the Tories on 44% with a ten-point lead. BMG Research was even further out, putting the Conservatives on 46%, a full 13 points clear of Labour. YouGov put the Tories seven points clear of Labour (though their constituency-level model did a lot better), as did Opinium; Ipsos MORI and Panelbase had them eight points clear on 44%.

Other polls were at least in the ballpark. Kantar Public put the Tories five points ahead of Labour, and SurveyMonkey (for the Sun) called the gap at four points. Survation, the firm closest to the final result with its unpublished 2015 poll, this time put the Conservatives on 42% and Labour on 40%, very close to the actual result. Qriously (for Wired) was the only pollster to put Labour ahead, by three points.

According to the 2017 UK Parliamentary Election Forecast polling model, the Conservatives were heading for 366 seats, Labour 207, and the Liberal Democrats seven. Allowing for statistical uncertainty, the projection was of an “almost certain” overall majority for the Conservatives. The probability of a hung parliament was put at just 3%. All misses – though that doesn’t necessarily reflect on the model, which after all can only be as good as the polls fed into it.
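To illustrate how a central seat projection plus statistical uncertainty turns into a hung-parliament probability, here is a toy Monte Carlo sketch. It is not the actual 2017 UK Parliamentary Election Forecast method: the 20-seat standard deviation is an assumed value chosen purely to show the mechanics, which happens to produce a probability in the low single digits, close to the 3% quoted.

```python
import random

def hung_parliament_probability(central_seats=366, seat_sd=20,
                                majority=326, trials=100_000):
    """Toy Monte Carlo: treat the Conservative seat projection as
    normally distributed and count draws that fall short of a majority.

    central_seats and seat_sd are illustrative assumptions, not the
    real model's parameters.
    """
    misses = sum(random.gauss(central_seats, seat_sd) < majority
                 for _ in range(trials))
    return misses / trials

print(f"{hung_parliament_probability():.1%}")  # roughly 2-3%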

Many others were wrong, too. The 2017 General Election Combined Forecast, which aggregates betting markets and polling models, forecast a Conservative majority of 66 seats. Other “expert” forecasts came from Britain Elects (Tories 356 seats, Labour 219 seats), Ashcroft (363, 217), Electoral Calculus (358, 218), Matt Singh (374, 207), Nigel Marriott (375, 202), Election Data (387, 186), Michael Thrasher (349, 215), Iain Dale (392, 163) and Andreas Murr and his colleagues (361, 236).

So what went wrong?

A moving target

In the wake of the 2015 election, the Brexit referendum and Donald Trump’s victory, forecasters are getting used to fielding that question. The answer isn’t hard to name, but it is hard to quantify in advance: the key factor in each of these forecasting meltdowns was turnout, and in particular relative turnout across different demographic groups.

In the Brexit referendum and 2016 US presidential election, turnout by poorer and less educated voters, especially outside urban areas, hit unprecedentedly high levels, as people who had never voted before (and may never vote again) came out in droves. In both cases, forecasters’ pre-vote turnout models had predicted that these voters wouldn’t show up in nearly the numbers they did.

In the 2017 election, it was turnout among the young in particular that rocketed. This time the factor was widely expected to matter, and indeed get-out-the-vote campaigns aimed at the young were based on it. But most polling models failed to properly account for it, and that meant their predictions were wrong.
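To make that failure mode concrete, here is a hypothetical sketch of the turnout-weighting step inside a polling model. The age bands, turnout rates and respondents are all invented; the point is only that underweighting young voters’ likelihood of turning out mechanically shifts the projected vote share.

```python
def weighted_vote_share(respondents, turnout_by_age):
    """Weight raw poll responses by an assumed turnout rate per age
    band -- the step most 2017 models got wrong for young voters.

    respondents: list of (age_band, party) tuples (hypothetical).
    turnout_by_age: assumed probability each band actually votes.
    """
    weights = {}
    for age_band, party in respondents:
        w = turnout_by_age[age_band]
        weights[party] = weights.get(party, 0.0) + w
    total = sum(weights.values())
    return {party: w / total for party, w in weights.items()}

sample = [("18-24", "Lab"), ("18-24", "Lab"), ("65+", "Con"), ("65+", "Con")]
# Assuming low youth turnout flatters the Conservatives...
print(weighted_vote_share(sample, {"18-24": 0.4, "65+": 0.8}))
# ...while a youth surge shifts the estimate sharply toward Labour.
print(weighted_vote_share(sample, {"18-24": 0.7, "65+": 0.8}))
```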

Polling is a moving target, and the spoils go to those who are most adept at taking and changing aim. So will the lesson be learned for next time? Possibly. But next time, under-25s might not turn out in anything like the same numbers – or a different demographic altogether might surprise everyone. We might not have long to wait to find out.
