Short Cuts

How To Read Election Polls Without Losing Your Mind

If 2016 taught us anything, it's that pre-election polls can be wildly unreliable. Here's how to make sense of the data-heavy madness.

by Tess Garcia

Confused about the latest activist lingo? Missed yesterday’s viral news story? Bustle’s “Short Cuts” does a deep dive for you, sharing all the necessary details to keep you in the loop.

On the morning of Nov. 9, 2016, Americans woke up in shock. Election polls had misled them, and Donald Trump, not Hillary Clinton, would be president of the United States. As the 2020 election nears, plenty of Americans have lost faith in the polling process altogether. A December 2018 survey from The Hill and HarrisX found that 52% of registered voters are skeptical of polls they see in the news, and 29% don't believe most of them. And the truth is, no poll will ever guarantee an election outcome — polls only tell us how a group of people feels about a candidate at a specific point in time. But there’s still plenty to be gleaned from reading them — if you know how.

What Types Of Polls Exist?

There are seven common types of presidential election polls:

  1. Baseline (or benchmark) polls take place at the start of a campaign to gauge voters’ initial sentiments toward a candidate.
  2. Brushfire polls are conducted to determine changes in voters’ feelings during a race.
  3. Exit polls are physical surveys given to voters exiting polling locations on Election Day. These polls allow media outlets to “call” an election before voting closes, although they’re not always an accurate indicator of the final outcome.
  4. Push polls are worded to encourage particular responses and aim to influence public thinking. They shouldn’t be trusted for objective information.
  5. Public opinion polls are fairly self-explanatory, surveying any number of respondents for their approval or disapproval of a candidate.
  6. Straw polls serve as an impromptu vote on candidates, painting a picture of a race ahead of Election Day.
  7. Tracking polls are concise polls taken daily among the same group of voters at key moments in the election cycle.

How Are Polls Taken?

With the exception of exit polls, very few polls are still taken in person. Before the dawn of cell phones and the Internet, phone polling was the most popular method, but response rates have since plummeted. Today most are conducted online.

Online polling has pros and cons. One major advantage is convenience: participants can answer questions whenever and wherever they like. That at-your-leisure approach also makes online polls inexpensive to conduct. Yet, per the Pew Research Center, there’s currently no way to draw a random sample of the U.S. population online, making it difficult to ensure that a group of respondents is representative of voters.

To correct for that, pollsters usually take one of two approaches. The first, and less common, is the probability-based online panel, which recruits participants offline via phone or mail before the online survey. It allows for a random sample but is often prohibitively expensive. The second, the opt-in poll, recruits respondents online, often through website advertisements. Pollsters must then screen out invalid participants, like bots and children.

Which Polls Are The Best?

According to the Pew Research Center’s "A Field Guide to Polling: Election 2020 Edition," a great phone poll should be funded and administered by a nonpartisan source, select a random, probability-based sample of respondents, call both cell phones and landlines, use live interviewers, and publicize its questionnaire and methodology. The criteria for online polls are largely the same. Opt-in online polls, specifically, should provide evidence that their respondents represent all kinds of Americans in proportions that mirror the general population. The most accurate online polls adjust their surveys to represent the voter population based on variables like race, age, sex, education level, and party affiliation.
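To see what that kind of adjustment looks like in practice, here is a minimal sketch in Python. Every number below is invented for illustration, and real pollsters weight on many variables at once, not just one — this is only the basic idea:

```python
# Toy sketch of poll weighting (post-stratification) on one variable.
# All numbers are invented for illustration, not real polling data.

# Assumed share of each education group in the target voter population.
population_share = {"college_grad": 0.35, "non_grad": 0.65}

# Raw responses: (education group, supports candidate A?)
responses = [
    ("college_grad", True), ("college_grad", True), ("college_grad", False),
    ("college_grad", True), ("college_grad", False), ("college_grad", True),
    ("non_grad", False), ("non_grad", True), ("non_grad", False),
    ("non_grad", False),
]

# College grads are 60% of this sample but only 35% of the population.
n = len(responses)
sample_share = {g: sum(1 for grp, _ in responses if grp == g) / n
                for g in population_share}

# Each respondent's weight scales their group to its population share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(s for _, s in responses) / n
weighted = sum(weights[g] * s for g, s in responses) / n

print(f"unweighted support: {unweighted:.0%}")
print(f"weighted support:   {weighted:.0%}")
```

In this made-up sample, the raw number overstates support for the candidate because the overrepresented group happens to favor them; the weighted number pulls it back toward what the full population would say.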

Above all, a quality opt-in online poll should discuss how researchers are working to overcome problems in the sampling process. Look for proof that they've considered their shortcomings and taken steps to correct them. If that level of transparency isn’t present, the poll's probably not worth your time.

Based on these criteria, it's worth checking out polling from Monmouth University and the Marist Poll, both of which are transparent about their methodologies. For more information, check out FiveThirtyEight’s ratings of popular election pollsters.

What Went Wrong With 2016 Polling?

A study from the American Association for Public Opinion Research (AAPOR) reports that polls underestimated support for Trump in key states like Pennsylvania, Michigan, and Wisconsin, where Clinton was projected to have more support. There are a few explanations for this, one being the role of undecided voters: in Pennsylvania and Wisconsin, 13% of voters made their final decision in the last week of the race. The AAPOR’s evidence also shows that some state polling failed to correct for preventable errors, like an overrepresentation of college graduates in sample groups. (Per The New York Times, people with more formal education are more likely to participate in polls, so any statewide poll that didn’t correct for the overrepresentation of college grads could have overestimated support for Clinton.)

Can We Correct Polling For 2020?

Pollsters stress that 2016’s mistakes are fixable. Now, less than four months from Election Day, some aspects of state polling have improved. According to The New York Times, fewer voters are saying they’re undecided, and 46% of state pollsters who've published a survey since March 1 appear to weight responses by education, up from about 20% of battleground-state pollsters in 2016.

However, The New York Times also notes that 2020 has seen a surge in online-only polls. Plus, a new practice called "recalled vote weighting" is gaining favor. In this method, respondents are asked if they voted for Clinton or Trump in 2016. Pollsters then adjust their samples to reflect the actual results of the election. It’s risky, since people are more likely to say they voted for the winner, which could bias results toward the Democratic Party.
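Mechanically, the recalled-vote adjustment is just another weighting step. Here is a minimal Python sketch with invented respondent counts; the target shares roughly approximate the 2016 national popular vote, purely for illustration:

```python
# Toy sketch of recalled-vote weighting. Respondent counts are invented;
# target shares roughly match the 2016 national popular vote.
target_share = {"clinton": 0.482, "trump": 0.461, "other": 0.057}

# What respondents *say* they voted for in 2016. Because people tend to
# recall voting for the winner, Trump voters are overrepresented here.
recalled_counts = {"clinton": 440, "trump": 500, "other": 60}

n = sum(recalled_counts.values())  # 1,000 respondents in this toy sample
weights = {grp: target_share[grp] / (recalled_counts[grp] / n)
           for grp in recalled_counts}

# Recalled Trump voters get weighted down and recalled Clinton voters up,
# which is how over-recall of the winner can tilt results the other way.
for grp, w in sorted(weights.items()):
    print(f"{grp}: weight {w:.2f}")
```

If more people remember voting for the winner than actually did, this step shrinks the winner's voters in the sample, and the candidate they now favor can end up underestimated — the exact risk described above.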