The Divergence Between Phone and Internet Polls: Which Should We Believe?

Posted on 25 May 2016 by John Curtice

The divergence between the estimates of the relative strength of Remain and Leave in internet polls and those obtained by phone polls has been the most marked feature of polling in the EU referendum to date. Typically, internet polls have suggested that the race between the two sides is very close, whereas those conducted by phone have usually put Remain well, if not necessarily comfortably, ahead. The position has left many scratching their heads trying to work out which (if either) is right.

Today we publish a paper that outlines some of the theories as to why the divergence has arisen, and considers the empirical evidence from a number of attempts to compare systematically the findings of phone and internet polls.

There are two main kinds of possible reason why internet and phone polls are producing different estimates of the relative strength of Remain and Leave. The first set of reasons concerns the circumstances in which respondents are invited to say how they might vote in the EU referendum. Respondents to a phone poll have to declare their views to an interviewer, whereas those participating in an internet poll have the anonymity that comes from responding to a screen. Meanwhile, in a phone poll a respondent can always say 'Don't Know' in response to a question, even if it is not offered to them as a possible answer. In an internet poll, 'Don't Know' either has to be offered explicitly as a possible answer or not allowed at all.

It has been suggested that those who support remaining in the EU are more reticent than those who wish to leave, and consequently are more likely to say 'Don't Know' if they are invited to do so. Given that most internet polls do offer Don't Know as an answer, it is suggested that this serves to depress the level of support registered for Remain in such polls. Alternatively, however, it is argued that backing Leave might be regarded in some quarters as a socially unacceptable view, and that consequently some respondents to a phone poll who hold that view may be reluctant to express it.

The second main possible reason why the two kinds of polls have diverged is that the samples of people that they manage to interview differ significantly. Telephone polls are typically conducted by ringing landline and mobile numbers at random and, if the phone is answered, securing an interview with someone at the other end of the line. Interviewers are usually given a quota of the kinds of people that they should interview (in terms of their sex, age, etc.), and thus if there is more than one person at the end of the line willing to be interviewed, the interviewer will try to interview whoever best helps them complete their quota. The approach relies heavily (though not completely) on the statistical theory that if a thousand people are surveyed at random, most of the time the estimated proportion of people stating a particular view will be reasonably close to the true proportion in the population as a whole.
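
To put a figure on 'reasonably close': for a simple random sample, the standard margin-of-error formula quantifies how far an estimated proportion is likely to stray from the true one. The snippet below is a minimal illustration in Python; the 50% Remain share is invented rather than taken from any actual poll.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Invented example: a sample of 1,000 with an estimated 50% Remain
# share has a margin of error of roughly +/-3 points.
print(f"+/-{margin_of_error(0.50, 1000):.1%}")  # +/-3.1%
```

In practice quota controls and non-response mean a phone poll is not a strict simple random sample, so this calculation somewhat understates the real uncertainty.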

Internet polls, in contrast, are typically conducted among respondents who have previously joined a panel of people who have agreed to complete polls from time to time. When they sign up for that task, they tell the polling company quite a lot about themselves. This means the company can draw from the panel a sample whose demographic and other relevant characteristics are in line with those of the population as a whole. This, it is anticipated, will ensure that the views expressed by the sample are representative too.
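
In code terms, the matching step might look something like the sketch below. Everything here is invented for illustration – the panel records, the quota cells and the target counts – and real pollsters balance on many more characteristics than sex and age.

```python
import random

random.seed(1)

# A toy panel of 10,000 members with self-reported characteristics.
panel = [
    {"id": i,
     "sex": random.choice(["F", "M"]),
     "age_band": random.choice(["18-34", "35-54", "55+"])}
    for i in range(10_000)
]

# Invented target counts per demographic cell, summing to 1,000.
targets = {("F", "18-34"): 140, ("F", "35-54"): 170, ("F", "55+"): 200,
           ("M", "18-34"): 140, ("M", "35-54"): 170, ("M", "55+"): 180}

# Draw each cell's quota at random from the matching panellists.
sample = []
for (sex, age_band), quota in targets.items():
    cell = [r for r in panel
            if r["sex"] == sex and r["age_band"] == age_band]
    sample.extend(random.sample(cell, quota))

print(len(sample))  # 1000 respondents matching the target profile
```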

Such different approaches to sampling would certainly appear to create plenty of scope for achieving samples that are rather different from each other. For example, internet polls are often thought to be at risk of over-representing the politically committed, whose views may not be typical of those of the population as a whole. Meanwhile, phone polls are reliant on those who are available and willing to respond to their calls over a relatively short period of time, and it may be asked whether such people are indeed typical of the general population. In this referendum in particular it has been suggested that the samples obtained by phone polls contain more graduates, many of whom are relatively relaxed about immigration and are thus more likely to be in favour of Remain – though quite why this should be the case is perhaps not so clear.

The paper assesses the empirical evidence provided by a number of attempts made during the referendum to compare the results obtained by phone and internet polls and to understand why they diverge. It comes to the view that the explanation probably lies primarily in differences in the character of the samples that the two kinds of poll achieve, rather than in differences in the way in which they administer their questions. It looks in particular at the claim that phone polls contain more people who are educationally well qualified than internet polls do, and suggests that the evidence available to date on this subject is both too limited and too inconsistent for us to come to a clear judgement. But given the importance of educational background in helping us to identify who is more likely to support Remain and who is more likely to want to leave, pollsters should be paying much more attention to how many graduates and non-graduates they have in their samples than appears to have been the case so far.
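
To see why the graduate share matters, consider a purely illustrative weighting calculation in Python. Every share below is invented; the point is simply the mechanics of how an over-representation of (more Remain-leaning) graduates inflates the headline Remain figure until the sample is weighted back to the population's education profile.

```python
# All figures invented for illustration only.
sample_share = {"graduate": 0.38, "non_graduate": 0.62}      # achieved sample
population_share = {"graduate": 0.27, "non_graduate": 0.73}  # assumed true profile
remain_by_group = {"graduate": 0.60, "non_graduate": 0.45}   # assumed Remain support

# Post-stratification weight for each group: population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

unweighted = sum(sample_share[g] * remain_by_group[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * remain_by_group[g]
               for g in sample_share)

# Weighting the over-represented graduates down trims the Remain
# estimate by about 1.5 points in this made-up example.
print(f"Remain unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```

On this invented arithmetic, a sample with too many graduates overstates Remain by around a point and a half – exactly the kind of discrepancy that could separate phone and internet polls.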

By John Curtice

John Curtice is Senior Research Fellow at NatCen, Professor of Politics at Strathclyde University, and Chief Commentator on the What UK Thinks: EU website.
