The Value of Polls

Polls get far too much attention in the press, especially several months before an election. Nonetheless, the misconceptions about the polling process never cease to amaze me. I’ve addressed polling in the past, but I’m going to update my comments in this post.

Who you ask, what you ask, how you ask, and how you tabulate the results are critical. Polling is a science, but it’s also a social science. Instead of measuring chemical reactions in a lab, you are measuring attitudes. People are much more complicated than chemicals: they don’t always tell the truth, and election polls must also factor in each respondent’s likelihood to vote. For these reasons, even the best professional pollsters get different answers when they are polling on the same issue.

Of course, another reason poll results can vary widely is that some pollsters are biased and seek certain results as a means of influencing behavior. If polls show that most Americans support candidate A, then some supporters of candidate B might give up or blame the candidate, and undecided voters—wanting to get behind a winner—might tend to support candidate A as well. I think the actual behavior of most voters is not affected by polls, but this isn’t the point. Even if poll results only influence the activity of 2-3% of voters, they can easily swing an election.

Leading questions and poor wording are common problems. Consider the question, “Do you think Hillary Clinton should be held accountable for using a private email server during her tenure as Secretary of State?” Even if respondents share a common understanding of the scandal, “held accountable” might mean a reprimand to some and jail time to others. The results from questions such as these are simply not valid.

Even when questions are clear and worded correctly, some errors are difficult to overcome. Election pollsters try to survey a small group of voters that precisely represents the population of actual voters. A common misconception is that you cannot produce a reliable poll with only several hundred respondents. Statistically, you can. The real problem is how you select the respondents. Phone polls are common, but they miss many voters who don’t have landlines or don’t answer calls from unknown numbers, and some respondents simply don’t answer truthfully. Pollsters try to account for these problems by adjusting results, but they are really guessing.
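The arithmetic behind that statistical claim is simple. A quick sketch, assuming a simple random sample (which real polls only approximate) and the worst-case 50/50 split:

```python
import math

def margin_of_error(n, z=1.96):
    """Rough 95% margin of error for a simple random sample of size n,
    using the worst case p = 0.5. Illustrative only; real pollsters
    also weight results for demographics and turnout."""
    return z * math.sqrt(0.25 / n)

# Even ~1,000 respondents yields roughly a 3-point margin,
# and ~400 respondents is still only about 5 points:
print(round(margin_of_error(1000) * 100, 1))  # about 3.1
print(round(margin_of_error(400) * 100, 1))   # about 4.9
```

The catch, as noted above, is that this math only holds if the sample truly resembles the electorate; no formula rescues a biased sample.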

Some errors are difficult to calculate from the outset. For example, should polls include candidates other than Clinton and Trump? The knee-jerk answer is yes because they will be on the ballot, but history—which can be wrong—tells us that most third-party supporters end up voting for one of the two major candidates. Gary Johnson might poll at 10% in some states, but how many of his supporters will switch to Clinton or Trump in November? Nobody really knows. This issue alone is enough to weaken the results.

If you really like to follow polls, I suggest tracking averages at Real Clear Politics. By incorporating lots of polls into a composite, they probably average out some of the error on both sides. This is still just a best guess, but it’s better than relying on a single poll. At the end of the day, however, I suggest that you treat all polls with serious caution. They can and will be deceiving.
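There is a statistical reason averaging helps: if the polls were truly independent, averaging k of them would behave like one poll with k times the respondents, shrinking the random error by a factor of the square root of k. A sketch with hypothetical numbers (real polls share methods and biases, so they are never fully independent):

```python
import math

def standard_error(n, p=0.5):
    """Standard error of a poll proportion with n respondents.
    Illustrative only; ignores weighting and design effects."""
    return math.sqrt(p * (1 - p) / n)

single = standard_error(1000)                      # one poll of 1,000
composite = standard_error(1000) / math.sqrt(10)   # average of 10 such polls

# Random error drops from ~1.6 points to ~0.5 points:
print(round(single * 100, 2))     # about 1.58
print(round(composite * 100, 2))  # about 0.5
```

Note that averaging only cancels *random* error; if most pollsters share the same systematic bias, the composite inherits it, which is why the post still counsels caution.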
