Which are Right? Phone or Internet Polls? New Evidence from a NatCen Innovation

Posted on 20 June 2016 by John Curtice

One of the marked features of polling during the referendum campaign has, of course, been the divergent findings of internet and phone polls. Most of the time, polls done via the internet have put Remain and Leave neck and neck while those undertaken by ringing people up on the phone have put Remain ahead. Inevitably this has engendered a lively debate about which set of figures, if either, is correct, while in some cases polling companies have adapted their methodology in order to respond to some of the criticism of both kinds of polling that this debate has provoked.

Meanwhile this debate has come on the back of the difficulties that the polls had last year in estimating correctly the balance of Conservative and Labour support. The official Inquiry into what went wrong argued that the main problem lay with the unrepresentative character of the samples that both types of polling had acquired, a character that the companies’ various attempts at weighting and filtering their data had failed to correct. In contrast, two surveys conducted after the election with randomly selected members of the public (a very different approach to that used by both internet and phone polls), that is, the British Social Attitudes survey and the British Election Study, were both relatively successful at replicating the 2015 election result (unlike the polls, which still struggled to do so even after the event).  The Inquiry suggested, inter alia, that this experience underlined the continued need for surveys conducted via random probability sampling, relatively expensive though that approach is, and suggested that it would welcome an attempt to undertake internet polling via that approach.

That last recommendation has been picked up by NatCen Social Research, who have established a panel of people who were originally interviewed as part of the (random probability) 2015 British Social Attitudes (BSA) survey, and who have agreed to participate in subsequent interviews via the internet, or, if necessary, via the phone. It is the first time that such a probability-based internet panel has been established in the UK. As many members of this panel as possible were surveyed about their views on the European Union between mid-May and mid-June, including how they intended to vote on Thursday. The interviewing was conducted over a lengthy four-week period in order to guard against the risk that the views about the EU of those who are more difficult to contact are different from those who are reached more easily – such ‘availability bias’ does after all appear to have been one of the things that helped undermine the accuracy of the polls in 2015.

All in all, 62% of those BSA respondents who agreed to join the panel participated in this survey on the EU, some 1,632 people in all. Interviews were initially collected via the internet, but those who did not respond via that route (together with those who did not have access to the internet) were followed up by phone. The data have been weighted so that the distribution in the EU survey of a range of demographic characteristics, together with reported interest in politics, matches that for all respondents to the original 2015 BSA survey.
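The weighting step can be illustrated with a minimal post-stratification sketch. The category names and shares below are invented for illustration only; NatCen's actual scheme matches several demographic characteristics and political interest jointly, not a single variable:

```python
from collections import Counter

def poststratify(sample, target_shares):
    """Compute a weight per respondent so that the weighted distribution
    of a grouping variable matches a set of benchmark (target) shares."""
    counts = Counter(sample)
    n = len(sample)
    # Each group's weight = benchmark share / observed share in the sample.
    weights = {g: target_shares[g] / (counts[g] / n) for g in counts}
    return [weights[g] for g in sample]

# Hypothetical example: graduates are over-represented in the achieved
# sample (50%) relative to an assumed 2015 BSA benchmark of 30%.
sample = ["graduate"] * 5 + ["non-graduate"] * 5
target = {"graduate": 0.30, "non-graduate": 0.70}
w = poststratify(sample, target)

# The weighted share of graduates now matches the 30% benchmark.
grad_share = sum(wi for wi, g in zip(w, sample) if g == "graduate") / sum(w)
```

Down-weighting over-represented groups in this way is what lets a panel with imperfect response rates still mirror the original random-probability sample.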

The headline finding from the report of this exercise is clear. Once Don’t Knows are left to one side, the survey estimates 53% would have voted to Remain in the EU if the referendum had been held while the survey was being conducted, while 47% would have voted to Leave. This figure is in between the estimates being produced by internet and phone polls at this time – internet polls were saying that Remain would win 50% of the vote, Leave 50%, while phone polls were calling it Remain 55%, Leave 45%. There have, of course, been signs of some movement towards Leave since then.

This estimate takes into account the possibility that the outcome might be affected by differential turnout. However, to do so, the survey relies not on respondents’ report of their probability of voting (as most polls do) but rather the propensity of those in different demographic groups to vote in 2015 (as measured by the 2015 BSA, in which the overall level of turnout was only a little higher than the official figure). The effect of this procedure was to add one point to Remain and deduct one point from Leave, primarily because university graduates (who are more likely to vote for Remain) were more likely to turn out on that occasion (and indeed in elections and referendums in general) than were those with few, if any, educational qualifications. Thus at Remain 52%, Leave 48%, the estimate of support for the two sides without taking into account possible differences in turnout still lies in between the figures being produced by the two kinds of polling.
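The logic of the turnout adjustment can be sketched in a few lines. All the group sizes, vote shares, and turnout propensities below are hypothetical; they are chosen only to show how weighting by past turnout moves the headline figure in Remain's favour when the higher-turnout group leans Remain:

```python
def turnout_adjusted_shares(groups):
    """groups: list of (remain_share, leave_share, group_size, turnout_prob).
    Returns (remain, leave) shares among likely voters, weighting each
    demographic group by its past propensity to turn out."""
    remain = sum(r * n * t for r, l, n, t in groups)
    leave = sum(l * n * t for r, l, n, t in groups)
    total = remain + leave
    return remain / total, leave / total

# Hypothetical figures: graduates lean Remain and turned out at a
# higher rate in 2015 than non-graduates.
groups = [
    (0.65, 0.35, 400, 0.85),  # graduates
    (0.45, 0.55, 600, 0.65),  # non-graduates
]
r, l = turnout_adjusted_shares(groups)

# For comparison: the unadjusted Remain share, ignoring turnout.
unadjusted_remain = (0.65 * 400 + 0.45 * 600) / 1000  # = 0.53
```

With these invented inputs the adjusted Remain share comes out about a point above the unadjusted 53%, mirroring the direction of the one-point shift the survey reports.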

There is one other point to note about this exercise. It has been suggested, not least on the basis of data collected by the British Election Study, that those who say they will vote to Remain are more difficult to contact than those who are inclined to vote to Leave, and that thus polls are at risk of suffering from an ‘availability bias’ in favour of Leave. This survey, in contrast, found the very opposite pattern. Those who were interviewed towards the end of the fieldwork period were more likely to say they would vote to Leave, not least because fewer of them were graduates (in part because those who did not have access to the internet and thus were interviewed by phone are less likely to be graduates). It thus may be unwise to assume that polls conducted over a much shorter fieldwork period are necessarily at risk of finding too few Remain voters.

On Thursday we will, of course, find out the truth about the relative accuracy of the internet and phone polls. But using a method that tries to overcome some of the deficiencies in the polls that were revealed on the occasion of last year’s general election, this survey suggests that the truth may lie in between whatever final figures the two methods eventually produce. Should that prove to be what happens, the case for bringing together the advantages of the internet in terms of speed and cost and the strengths of random probability sampling in terms of quality will have been strengthened significantly.

By John Curtice

John Curtice is Senior Research Fellow at NatCen and at 'UK in a Changing Europe', Professor of Politics at Strathclyde University, and Chief Commentator on the What UK Thinks: EU website.
