The US Presidential Election: The end of Political Polling?
“The distinction between one kind of poll and another is important, but it is also often exaggerated. Polls drive polls. Good polls drive polls and bad polls drive polls, and when bad polls drive good polls they’re not so good anymore.” - Nate Silver
On the eve of this week’s Presidential Election, pollsters and critics alike seemed certain of a Clinton victory. Nate Silver, the man who correctly predicted the result in all 50 states in 2012, and his website FiveThirtyEight gave Clinton a 71% chance of victory. Other media organisations gave her similar odds, ranging from 70% to 95%, of reaching 270 electoral votes.
However, as we now know, this was not to be, with Trump defying all expectations. For most people, understandably, why the polls were wrong is probably not the most pressing issue right now. The political implications of the election are not for us to discuss, but as a market research agency we are interested in digesting the impact of this latest failure of political polling. Polling has played an important role in politics for many years, and for confidence in it to continue, the issues underlying the industry need to be aired.
The US election and its polling failures will be talked about for weeks to come, with investigations into what went wrong already widely documented. This article instead examines the overarching pattern of failure that has emerged. After recent polling errors, from the UK election in 2015 to this year’s EU referendum, the polling industry is under increased scrutiny.
To understand this general pattern of failed predictions, it is necessary to go back to the roots of political polling. Its modern origins lie in the 1936 US Presidential election, the first to use scientific sampling. George Gallup contended that sheer size does not matter: a sample of one million random respondents would be no more accurate a gauge of national opinion than a scientifically selected sample of 2,000 respondents that was representative of the population.
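Gallup’s point can be made concrete with the standard margin-of-error formula: for a simple random sample, sampling error depends on the sample size, not on the size of the population being surveyed. A minimal sketch, assuming a 95% confidence level and a worst-case 50/50 split:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Population size does not appear in the formula. A well-drawn sample of
# 2,000 is accurate to roughly +/-2.2 points; 500x more respondents only
# shrinks that to about +/-0.1 -- diminishing returns, as Gallup argued.
print(round(margin_of_error(2_000) * 100, 1))      # ~2.2
print(round(margin_of_error(1_000_000) * 100, 1))  # ~0.1
```

The catch, of course, is the word "representative": the formula only holds if the sample is genuinely random, which is exactly what has become hard to achieve.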
This scientific method was cheap, quick, and accurate, and it gave pollsters decades of reliable polling. However, as recent events show, this is seemingly no longer the case, with pollsters looking all but helpless in their ability to predict political events. In 2015 Cliff Zukin, a professor of public policy and political science at Rutgers University and a past president of the American Association for Public Opinion Research, claimed: “Election polling is in near crisis, and we pollsters know it”. Polling, I believe, is no longer near a crisis – it is now undoubtedly in one.
Trump’s new home for the next four years
What has caused this crisis in polling?
The consensus is that there are two general trends behind the increased unreliability of polling. It is a combination of the growth of mobile technology and the decline in people willing to answer surveys.
Developments in mobile technology have changed the way pollsters try to reach representative samples. An estimated forty per cent of American adults no longer use landline phones, and in 1991 the Telephone Consumer Protection Act banned autodialling to mobile phones, limiting pollsters’ ability to call respondents. Polling firms estimate that it used to take around 4,000 calls to collect 2,000 responses; now it takes around 30,000, many of which go to old numbers that no longer work.
This development emerged in conjunction with a decline in response rates. Mark Blumenthal of Pollster.com recalled how, in the 1980s, when the response rate at the firm where he was working had fallen to about sixty per cent, people in his office said, “What will happen when it’s only twenty? We won’t be able to be in business!”
A typical response rate is now in the single digits, with pollsters struggling to reach even 10%. This difficulty in reaching representative samples via mobile technology, combined with declining response rates, means that polling is now more time-consuming and more expensive than ever. Zukin concludes that this has led to compromises in the quality of sampling and interviewing.
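The arithmetic behind those figures is simple: the number of dial attempts needed scales inversely with the response rate. A quick sketch, using the response rates implied above (the 7% figure is illustrative of "single digits", not a quoted statistic):

```python
import math

def calls_needed(target_responses, response_rate):
    """Dial attempts required to complete a target number of interviews."""
    return math.ceil(target_responses / response_rate)

# At the ~50% response rates implied by "4,000 calls for 2,000 responses",
# fieldwork was cheap; at today's single-digit rates, the same 2,000-person
# sample takes tens of thousands of dials.
print(calls_needed(2_000, 0.50))  # 4000
print(calls_needed(2_000, 0.07))  # 28572
```

Every extra dial costs interviewer time and money, which is exactly the pressure Zukin argues is eroding sampling quality.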
Nate Silver observed last year:
“The problem is simple but daunting. The foundation of opinion research has historically been the ability to draw a random sample of the population. That’s become much harder to do.”
With representative samples becoming harder to obtain, and in light of recent events, should we expect the downfall of the polling industry? Probably not. It is worth noting that in the recent cases where pollsters got it wrong, there were some who got it right, or who cast doubt on the polls’ accuracy.
During the EU referendum, some online polling came closest to predicting a “Leave” vote. These polls showed a close race with an advantage for “Leave”. YouGov reported, several days before the vote, a “Leave” victory. In addition, TNS and Opinium had final polls showing Remain 49%, Leave 51%. These were little reported in the media.
Looking at the US election, data journalist Mona Chalabi, examining three key swing states, concluded that Trump would win those states, and the election. This was in August – three months before the vote (watch the video discussing this here: https://twitter.com/richardosman/status/796354294534565889). The Trafalgar Group had polls showing Trump in the lead in Florida, Michigan, and Georgia on the Monday before the election. Other, smaller, partisan Republican polling firms had final-days polls pointing towards a Trump victory.
The overall theme of these examples is a lack of media attention in reporting them. Furthermore, there is a question over whether the media truly understands what it is reporting on. A prime example comes from the BBC’s live coverage of the second Presidential Debate, where a scientific poll on who won the debate was compared with one conducted online on a right-wing conservative website with no sample controls.
So perhaps this is where part of the fault lies: not with the polls, but with those reporting them. The changes noted above have led to a decline in high-quality polls and, more generally, to a weaker understanding of what makes a poll accurate and whether the conclusions drawn from it are justified. The examples above give credence to this conclusion.
Whilst the news media may have a problem in how they report polling results, a deeper problem is undoubtedly emerging. Some got it right, but many got it wrong. If pollsters are to survive, what needs to change? Dan Wagner, the chief analytics officer on the 2012 Obama campaign, remarked of political polling that “It’s a little crazy to me that people are still using the same tools that were used in the nineteen-thirties, …”
The End of Polling as we know it?
As many pollsters have claimed, changes in technology, demographics and response rates have proved problematic for the industry. Consequently, pollsters are attempting to alter their methods and samples in the wake of these developments. Automated phone calls have been introduced to make interviewing faster and cheaper. The internet has provided another avenue of cheap and effective polling, and played a role in the polls that correctly called the EU referendum.
However, both are still limited by the issues of representative sampling. Automated phone calls suffer even lower response rates than traditional calls. Online polling requires layers of weighting and relies on respondents coming to you. And, of course, not everyone has access to the internet, and certain demographics (typically younger and more left-leaning) are more active in using it.
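The “layers of weighting” mentioned above usually mean post-stratification: each respondent is weighted so that the sample’s demographic mix matches known population targets. A toy sketch, where the age categories and population shares are invented purely for illustration:

```python
from collections import Counter

def poststratify(respondents, population_share):
    """Weight for each demographic cell = population share / sample share."""
    counts = Counter(r["age_group"] for r in respondents)
    n = len(respondents)
    return {cell: population_share[cell] / (count / n)
            for cell, count in counts.items()}

# An online panel that over-represents the young: 70% under-35 in the
# sample versus an (invented) 40% share in the population.
sample = [{"age_group": "under_35"}] * 7 + [{"age_group": "35_plus"}] * 3
weights = poststratify(sample, {"under_35": 0.40, "35_plus": 0.60})
print(weights)  # under-35s down-weighted (~0.57), over-35s up-weighted (2.0)
```

The weakness is visible in the design: weighting can only correct for imbalances the pollster knows to measure, and heavily up-weighted cells amplify the noise from a handful of respondents.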
The answer, some pollsters feel, is a combination of the two: using telephone and internet polling together to get the best of both worlds and reach a representative sample. The trick, as Jill Lepore says, is to find the right combination. As recent events show, it has yet to be found.
A few quick side notes.
- As you read the title you might wonder why SwissPeaks is keen to write about this topic. The reasons are two-fold: i) as a research agency, we have been around for a few years now and our work lets us dive into many varied research topics, and political polling as a collection platform fascinates us in terms of its underlying mechanics; and ii) as the author of this piece, my academic background is in politics and political research, so this is a topic I personally cannot resist.
- I appreciate this blog is quite lengthy. This is a discussion that will run on for weeks and months, and this is just my small summary of an increasingly controversial subject.
- This blog is just part of our ongoing discussion here at SwissPeaks on political polling. We are due to talk more about polls and the wider world of research in a talk developed for students on the “Political Research in Practice” module at Keele University, which I studied during my time there. Before I started work at SwissPeaks, the staple research diet I was used to was polls; now I am beginning to wonder whether there are better ways for those involved in politics to source key information.
I hope you found this article interesting, and if you wish to continue discussing this topic please do email me at firstname.lastname@example.org
Alex Browne, Marketing Administrator, SwissPeaks