Polls at the beginning of the 2017 election campaign pointed towards an overwhelming Conservative victory. They were consistently hitting around 45% of the vote – sometimes even higher – with Labour down at around 25%. Over the past six weeks, though, Labour’s ratings have steadily improved and the gap between the two parties has narrowed.
However, there are big discrepancies between recent polls, which makes them extremely difficult to interpret. Polls always need to be treated with caution, and the current variation makes this particularly important.
A YouGov poll conducted on Tuesday and Wednesday put the Conservatives on 42%, just three points ahead of Labour on 39%. YouGov also published a seat projection based on modelling of poll responses which indicated a hung parliament as its central estimate.
The trend of Labour improvement is seen across all the polling companies. But they don’t all agree that the election is as close as the YouGov figures suggest. The Conservative lead was 10% in the most recent Kantar Public poll, 12% for ICM and 14% for ComRes.
Back in 2015 the actual result was a 6.5% lead for the Tories. So if the final result in this election matches those polls, the expected outcome would be a swing to the Conservatives and a bigger majority – albeit not on the scale that the polls were suggesting at the start of the campaign.
Are the polls always wrong?
Some people’s response would be to say that it’s just a waste of time to look at the polls – that they can’t be trusted. It’s true that in recent years there have been some very high-profile polling failures.
The most obvious case is the 2015 general election. The polls suggested then that the election was heading for a virtual dead heat but, as we know, David Cameron won an outright majority. The election in 1992 has also gone down in the history books as a spectacular miss. Again, the polls gave no indication that the Conservatives, under John Major, would emerge as winners.
It’s also true that polls in Britain have over a long period of time tended to overestimate Labour and underestimate the Conservatives.
People often point to the EU referendum and the US presidential election as other examples of polling failure. Neither case is straightforward though.
At the referendum the internet polls were by and large accurate. It was telephone polls that got the outcome wrong.
In America, the national polls were not that far off the actual result on average. They suggested a small lead for Hillary Clinton over Donald Trump and that is what happened in the national vote. It was polls for some of the individual states that were wildly wrong and gave a misleading impression about who would win.
There have also been some recent successes. The final opinion polls for the 2016 London Mayoral election were very close to the actual result.
At this year’s French presidential election the polls for the first round in April were remarkably accurate. In the second round they were much less successful – they all underestimated Emmanuel Macron’s vote – but few people noticed because they got the overall outcome right.
What went wrong in 2015?
After the 2015 debacle, the British Polling Council established an inquiry to try and work out what had gone wrong. Their main conclusion was that the polls failed because their samples were not truly representative of the voting population. In other words, the people who took part in the polls weren’t typical of the country as a whole.
There was a particular problem with young voters. The ones who agreed to answer pollsters’ questions tended to be more interested in politics and more left-wing than young people generally.
That led to bad estimates of how many young people would actually vote. We’ve known for a long time that young people are less likely to vote than older people. But what the polls failed to pick up was the size of the turnout gap between young and old. That led them to overestimate Labour’s share of the vote.
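The arithmetic behind that overestimate can be sketched with a toy calculation. All the numbers below are invented for illustration – they are not real 2015 figures – but they show how ignoring a turnout gap between a Labour-leaning young group and a Conservative-leaning older group inflates Labour’s topline share.

```python
# Illustrative sketch (invented numbers): how a turnout gap between
# young and old voters can skew a poll's topline figure.

# Hypothetical sample: for each age group, its share of the sample,
# its Labour and Conservative vote shares, and its actual turnout rate.
groups = {
    # (share of sample, Labour share, Conservative share, actual turnout)
    "young": (0.3, 0.55, 0.30, 0.45),
    "old":   (0.7, 0.30, 0.55, 0.75),
}

def topline(groups, use_turnout):
    """Topline Lab/Con shares, optionally weighting each group by turnout."""
    lab = con = total = 0.0
    for weight, lab_share, con_share, turnout in groups.values():
        w = weight * (turnout if use_turnout else 1.0)
        lab += w * lab_share
        con += w * con_share
        total += w
    return round(100 * lab / total, 1), round(100 * con / total, 1)

print(topline(groups, use_turnout=False))  # naive poll: turnout gap ignored
print(topline(groups, use_turnout=True))   # weighted by actual turnout
```

With these made-up inputs, weighting by turnout cuts Labour’s share by a couple of points and boosts the Conservatives’ – the same direction of error the 2015 polls suffered, in miniature.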
What changes have the pollsters made?
Almost all of the pollsters have changed their methods in response to the 2015 failure. Adjustments include raising the age threshold for the oldest band of voters, and weighting results by educational background and interest in politics. That’s supposed to guard against having too many people with degrees in the sample or people who are more interested in politics than average.
The biggest changes, though, have been about how to estimate turnout – and especially turnout for different groups of voters.
These changes also explain most of the variation that we’re seeing between the polling companies.
ComRes and ICM now estimate how likely somebody is to vote based on their age and class background. Broadly speaking, young working-class voters are assumed to be much less likely to vote than older middle-class voters even if they say that they will do so. That tends to suppress the estimate of Labour’s vote share.
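A minimal sketch of that style of adjustment, with entirely invented probabilities and respondents, shows the mechanism: each respondent’s stated intention is weighted by a turnout probability looked up from their age and social grade, rather than by what they say about their own likelihood to vote.

```python
# Sketch of demographic turnout weighting in the spirit of the
# ComRes/ICM approach. All probabilities and respondents are invented.

from collections import Counter

# Hypothetical turnout probabilities by (age band, social grade).
TURNOUT = {
    ("18-24", "C2DE"): 0.35, ("18-24", "ABC1"): 0.50,
    ("65+",   "C2DE"): 0.75, ("65+",   "ABC1"): 0.85,
}

# Each respondent: (stated vote, age band, social grade).
respondents = [
    ("Lab", "18-24", "C2DE"), ("Lab", "18-24", "ABC1"),
    ("Con", "65+", "ABC1"), ("Con", "65+", "C2DE"), ("Lab", "65+", "C2DE"),
]

def weighted_shares(respondents):
    """Vote shares with each respondent weighted by demographic turnout."""
    totals = Counter()
    for party, age, grade in respondents:
        totals[party] += TURNOUT[(age, grade)]
    grand = sum(totals.values())
    return {party: round(100 * w / grand, 1) for party, w in totals.items()}

print(weighted_shares(respondents))
```

In this toy sample the raw responses split 60–40 to Labour, but because the Labour respondents are mostly young, the turnout weighting pulls the weighted figures much closer together – the suppression effect described above.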
YouGov and Ipsos MORI have also made changes to how they estimate turnout. They now take into account whether respondents have voted in the past or whether they usually vote. However, these changes have a much smaller impact on their topline voting intention numbers than the ComRes/ICM approach. That means they suggest a closer contest.
We won’t know which method has been more successful until the actual results are declared.