Ten Mistakes Reporters Make When Covering Polls

Rick Dunham, the Hearst Washington Bureau Chief, recently put together a highly useful list of ten mistakes reporters make when using polls. The list covers everything from misunderstanding basic statistics to question bias to improperly comparing similar, but not identical, questions. I highly recommend reading it, as it should help prevent common mistakes in poll coverage.

Rick Dunham's

TEN MISTAKES REPORTERS MAKE WHEN COVERING POLLS

  1. Mixing apples and oranges. The results of one poll can't be reliably compared to those of another. For comparison purposes, it's best to use the results of the same poll over time. It's also dangerous to compare questions that are worded slightly differently or responses that don't track (personal favorability vs. job approval, for example).
  2. Ignoring the margin of error. How many times have TV news readers noted that a candidate is up or down - or has "surged into the lead" - in the most recent poll when the results are within the margin of error? A change from one poll to another is not statistically significant if it is within the margin of error. Reporters also sometimes forget that the margin of error goes both ways. If a poll's margin of error is 3 percentage points, each candidate's true support is +/- 3 points from the poll result, so a lead of up to 6 percentage points could still fall within the margin of error (see the first sketch after this list). Of course, the bigger a lead within the margin of error, the greater the likelihood that the candidate ahead is truly ahead.
  3. Forgetting that polls are a snapshot in time. They are often a few days old when they are released, and some surveys (example: Pew) may rely on weeks-old data. Things can change, particularly in the final days of an election campaign. Cases in point: George H.W. Bush's New Hampshire comeback over Bob Dole in 1988, and Newt Gingrich's surge in South Carolina in 2012.
  4. Forgetting that even a good poll can have a bad day. Remember the fine print in polls: they promise to be within a margin of error with 95% reliability. That means one out of 20 surveys might be inaccurate. One clue to an outlier poll is the percentage of interviewees who are Democrats, Republicans, and Independents. If the percentages appear unusual (more Republicans than Democrats, or a 10%+ Democratic edge), the results may well tilt disproportionately one way or the other.
  5. Trusting the "likely voter" screen. It can be very difficult to predict who will turn out in a party primary, particularly where independents or all voters can vote in either party's contest. Case in point: New Hampshire in 2000, where John McCain did far better than "likely GOP primary voter" screens suggested. Pollsters also use differing methodologies to determine who is a "likely voter."
  6. Ignoring small sample size. The smaller the number of respondents, the more likely the poll is to be inaccurate. This is particularly true when a candidate or party is eager to get a poll into the media. Be especially wary of very small samples (300 respondents or fewer) with large margins of error, as the first sketch below illustrates.
  7. Relying on unreliable "quickie" polls. Polls conducted immediately after a given event are notoriously unreliable. Quickie polls after an event such as a presidential speech are also skewed in the President's favor, because the viewing audience is more likely to include supporters than critics or the apathetic majority. A poll conducted over a period of days tends to be more reliable. Still, "daily tracking" numbers, despite their higher margin of error, can be useful indicators of changes in the electorate.
  8. Accepting partisan polls at face value. Candidates, parties, companies, unions, and interest groups commission surveys all the time, and they have an agenda. They may release only certain questions from the poll, or ask the pivotal question after a series of "push" questions to produce biased results. Ask a lot of questions, and demand the entire poll when you suspect the people who commissioned it have an agenda.
  9. "Averaging" a group of polls. With all due respect to Real Clear Politics, it is not scientifically valid to "average" a group of polls. Individual polls may be flawed, and averaging gives more weight to "outlier" polls. However, the RCP method does have some value to give you a clue as to what may be happening.
  10. Writing stories based solely on polls. Polls should inform and confirm your stories; they should not be your only reporting. You can insert poll numbers to back up your independent reporting, but they should not be the story.
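
To make the arithmetic behind items 2, 4, and 6 concrete, here is a minimal Python sketch of the standard margin-of-error formula for a simple random sample. The sample sizes and candidate percentages are hypothetical, chosen purely for illustration.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion p estimated from a simple random
    sample of size n, at 95% confidence (z = 1.96). p = 0.5 is the worst
    case, which is what pollsters conventionally report."""
    return z * math.sqrt(p * (1 - p) / n)

# Item 6: smaller samples mean bigger margins of error.
for n in (1000, 600, 300):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} points")
# n = 1000: +/- 3.1 points
# n =  600: +/- 4.0 points
# n =  300: +/- 5.7 points

# Item 2: the margin applies to EACH candidate's number. With a
# 3-point margin of error, a hypothetical 48%-45% race is a statistical
# tie: the gap must exceed roughly twice the margin of error before the
# lead is meaningful.
moe = margin_of_error(1000) * 100
print(f"48-45 lead = 3 points; needs to exceed ~{2 * moe:.1f} points")

# Item 4: z = 1.96 corresponds to 95% confidence, so even a perfectly
# conducted poll will land outside its stated margin of error about
# one time in twenty.
```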
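
On item 9, a toy example (with made-up numbers, not real polls) shows why a simple unweighted average lets a single outlier drag the result, while a median-style summary shrugs it off:

```python
# Five hypothetical readings for one candidate: four cluster
# around 44%, one outlier reads 52%.
polls = [44, 43, 45, 44, 52]

mean = sum(polls) / len(polls)            # 45.6: pulled up by the outlier
median = sorted(polls)[len(polls) // 2]   # 44: robust to the outlier

print(f"mean = {mean:.1f}, median = {median}")
```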
