Why did polls fail to predict Trump's election?

News

  • Author: Carlos Alberto Gomez Grajales
  • Date: 08 June 2017
  • Copyright: Image appears courtesy of Getty Images

It is now universally acknowledged that we pollsters had a very bad year in 2016. The well-publicized fiascos we experienced in Colombia, with Brexit and in the U.S. presidential election made quite a compelling case for a dramatic story of polling failure. I have had the chance to comment on the topic personally [1], suggesting a deeper review of the statistics behind our polls, but a few days ago a far more thorough and interesting analysis of the 2016 U.S. election was published.

The American Association for Public Opinion Research (AAPOR) has presented its 2016 review, an extensive report that examines polling during last year's long primary and general election campaigns, which ended with Donald Trump becoming president [2]. You might imagine this report was commissioned the moment the official results started coming in and pollsters everywhere realized the PR disaster they would soon face, but in reality the report had been planned since May 2016, before the general election campaign had even started [3]. As such, its main objectives were to measure the overall accuracy of the polls and to investigate whether certain data collection procedures were more accurate than others (e.g. is online polling worse than phone surveys?) [3]. After the results, the report also became an opportunity to understand what had gone wrong and how pollsters could prepare for such scenarios in the future.

One of the report's first contributions is an accurate depiction of the polling situation in 2016. National polls were fairly accurate overall: they ended up predicting that Clinton would win the popular vote by about 3 percentage points, and she ultimately won it by 2.1 percentage points [2]. The real problem occurred at the state level, particularly in the Upper Midwest (Pennsylvania, Michigan, Wisconsin, etc.).

As to why the polls were unable to accurately detect Trump's vote share, the report describes two main suspects, which are worth discussing in detail. The first factor behind the understatement of Trump's support was the sizable chunk of the electorate that made up its mind only in the last few days. Considering that both candidates had historically low favorability ratings, it is understandable that voters had a hard time deciding, leaving their choice to the final days. This effect was particularly relevant in some battleground states, as exit poll data confirm. In Michigan, Wisconsin, Pennsylvania and Florida, all states that Clinton narrowly lost, 11 to 15 percent of voters reported deciding whom to vote for during the last week of the campaign. These late deciders broke for Trump by nearly 30 points in Wisconsin, by 17 points in Pennsylvania and Florida, and by 11 points in Michigan [2].
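To see how much damage such a late-breaking bloc can do to an otherwise sound poll, consider a rough back-of-the-envelope calculation. The short Python sketch below is purely illustrative: the shares and margins are hypothetical, only loosely inspired by the Wisconsin figures above, and are not taken from the AAPOR report. It simply combines the margin among voters who had already decided with the margin among late deciders, weighted by their respective shares of the electorate.

# Illustrative only: how late deciders who break unevenly can erase a poll lead.
# All numbers are hypothetical, not figures from the AAPOR report.

def final_margin(decided_margin_pts, late_share, late_break_pts):
    """Election-day margin (Clinton minus Trump, in percentage points) when a
    fraction `late_share` of the electorate decides in the final week and breaks
    by `late_break_pts` points (negative values mean a break toward Trump)."""
    return (1.0 - late_share) * decided_margin_pts + late_share * late_break_pts

# A 5-point lead among decided voters, with 14% of voters deciding late and
# breaking toward Trump by 30 points, shrinks to roughly a tie on election day.
print(final_margin(decided_margin_pts=5.0, late_share=0.14, late_break_pts=-30.0))

Even a well-conducted poll taken before those voters had made up their minds would have missed a shift of that size.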

Some may confuse the late-decider effect with the “Shy Trump” theory, which holds that pro-Trump voters misreported their vote intention because supporting Trump wasn't considered socially acceptable. In reality, the report found no strong evidence of such an effect. Callback studies, which recontact the same people before and after the election, show that only 11% of respondents switched their answers, a historically common figure. I personally have never believed in any of the “Shy” theories, and the fact that both the British Polling Council and the AAPOR have failed to find significant evidence of their existence makes me disregard such notions even more [1] [2].

The second culprit seemed to be unrepresentative samples [2]. It is true that non-response and coverage issues may be addressed by weighting, but one group that was notably underrepresented, particularly in state-level polls, was non-college graduates. The AAPOR report found that few pollsters weight by education, an adjustment that has usually been unnecessary, since voters with less formal education have tended to vote much like those with more. In this particular election, however, given Trump's strong appeal among voters without a college degree, weighting by education turned out to be a crucial factor. Two pollsters reported re-running their estimates after reweighting their data to account for differences in education; that adjustment alone drastically improved their polls' accuracy.
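To make the idea concrete, here is a minimal post-stratification sketch in Python. It is not any pollster's actual procedure, and both the poll counts and the population shares below are made up; it simply shows how bringing an overrepresented education group back in line with the electorate can move the estimated margin.

# Minimal, illustrative post-stratification by education (hypothetical numbers).
import pandas as pd

# Hypothetical raw poll in which college graduates are overrepresented.
poll = pd.DataFrame({
    "education":   ["college", "college", "non_college", "non_college"],
    "candidate":   ["Clinton", "Trump",   "Clinton",     "Trump"],
    "respondents": [330,        270,       180,           220],
})

# Assumed education mix of the actual electorate (illustrative).
population_share = {"college": 0.40, "non_college": 0.60}

# Weight for each group = its share of the electorate / its share of the sample.
sample_share = poll.groupby("education")["respondents"].sum() / poll["respondents"].sum()
poll["weight"] = poll["education"].map(population_share) / poll["education"].map(sample_share)

def margin(df, weights=None):
    """Clinton-minus-Trump margin in percentage points, optionally weighted."""
    w = df["respondents"] * (weights if weights is not None else 1)
    share = w.groupby(df["candidate"]).sum() / w.sum()
    return 100 * (share["Clinton"] - share["Trump"])

print(f"Unweighted margin:         {margin(poll):+.1f} pts")
print(f"Education-weighted margin: {margin(poll, poll['weight']):+.1f} pts")

In this toy example the unweighted poll shows Clinton ahead by 2 points, while the education-weighted estimate puts Trump ahead by 2 points: a miniature version of the kind of correction the two pollsters described.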

There are many other interesting topics discussed in the report, ranging from the impact ex-FBI director James Comey had on the election to the effect different data collection methods had on the accuracy of vote share estimates (spoiler alert: not much!). I would strongly advise any reader interested in political polls and elections to take a look at the report, and I would all but require any researcher working with surveys to give it a good read. As scientists, we get better by making mistakes, so we should really take advantage of everything we do wrong, which is a lot.

REFERENCES:
[1] Grajales, Carlos Alberto Gómez. Why do polls keep failing everywhere? StatisticsViews.com (Nov, 2016)
http://www.statisticsviews.com/details/feature/10094931/Why-do-polls-keep-failing-everywhere.html
[2] Ad Hoc Committee on 2016 Election Polling. An Evaluation of 2016 Election Polls in the U.S. American Association for Public Opinion Research (May, 2017)
https://www.aapor.org/Education-Resources/Reports/An-Evaluation-of-2016-Election-Polls-in-the-U-S.aspx
[3] Desilver, Drew. Q&A: Political polls and the 2016 election. Pew Research Center (May, 2017)
http://www.pewresearch.org/fact-tank/2017/05/04/qa-political-polls-and-the-2016-election/
