How did the polls get it wrong?

The pundits and polls predicted that Hillary Clinton would win the presidency of the United States. They were, obviously enough, wrong. As would be expected, the pundits and pollsters are trying to work out how they got it wrong. While punditry and polling are generally not philosophical, the assessment of polling is part of critical thinking, and critical thinking is part of philosophy. As such, it is worth considering this matter from a philosophical perspective.

One easy way to reconcile the predictions and the results is to point out the obvious fact that likelihood is not certainty. While there was considerable support for the claim that Hillary would probably win, this entailed that she could still lose. Which she did. To use the obvious analogy, when it is predicted that a sports team will win, it is obviously possible for it to lose. In one sense, the prediction would be wrong: the predicted outcome did not occur. In another sense, a prediction put in terms of probability could still be right: the predictor could get the probability right, yet the actual outcome could be the unlikely one. People who are familiar with games that explicitly involve probabilities, like Dungeons & Dragons, are well aware of this. For example, it could be true that there is a 90% chance of not getting killed by a fireball, but it would shock no experienced player if it killed their character. There is, of course, the question of whether the estimated probabilities were accurate; unlike in a game, we do not get to see the actual mechanics of reality. But I now turn to the matter of polls.
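To make this concrete, here is a quick sketch (in Python, with numbers made up to match the fireball example) of how an outcome that is improbable on every single try still turns up regularly over many tries:

    import random

    random.seed(42)  # fixed seed so the sketch is repeatable

    # Made-up numbers: a 90% chance of surviving the fireball, checked over many trials.
    trials = 100_000
    deaths = sum(1 for _ in range(trials) if random.random() < 0.10)

    # Survival was "probable" on every single trial, yet death still shows up about 10% of the time.
    print(f"Died in {deaths:,} of {trials:,} trials ({deaths / trials:.1%})")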

As noted above, the polls indicated that more people said they would vote for Clinton than for Trump, and thus her victory was predicted. A critical look at polling shows that things could have gone wrong in many ways. I will start broadly and then move on to more particular matters.

Polling involves what philosophers call an inductive generalization. It is a simple inductive argument that looks like this:

  • Premise: X% of observed Ys are F.
  • Conclusion: X% of all Ys are F.

In a specific argument, the Y is whatever population the argument is about; in this case it would be American voters. The observed Ys (known as the sample) would be the voters who responded to the poll. The F is whatever feature the argument is concerned with. In the election, this would be voting for a specific candidate. Naturally, a poll can address many candidates at once.

Being an inductive argument, it is assessed in terms of strength and weakness. A strong inductive argument is one such that if the premises were true, then the conclusion would probably be true. A weak one is such that even if the premises were true, they would not make the conclusion probable. This is a matter of logical support; whether the premises are actually true is another matter. All inductive arguments involve a logical leap from what has been observed to what has not been observed. When teaching this, I use the analogy of trying to jump a chasm in the dark: no matter how careful a person is, they might not make it. Likewise, no matter how good an inductive argument is, true premises do not guarantee a true conclusion. Because of this, a poll can always get things wrong; this is the nature of induction, and this unavoidable possibility is known as the problem of induction. Now to some more specific matters.

In the case of an inductive generalization, the strength of the argument depends on the quality of the sample, that is, how well it represents the whole population from which it is drawn. Without getting into statistics, there are two main concerns about the sample. The first is whether the sample is large enough to warrant confidence in the conclusion. If the sample is not adequate in size, accepting the conclusion is to fall victim to the classic fallacy of hasty generalization. To use a simple example, a person who sees two white squirrels at Ohio State and infers that all Ohio squirrels are white would fall victim to a hasty generalization. In general, the professionally conducted polls were large enough, so they most likely did not fail in regard to sample size.
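As a rough illustration (and not a description of the pollsters' actual methods), the following Python sketch simulates polling a population in which 48% of voters, a figure picked purely for illustration, back a candidate. Small samples produce estimates that swing wildly; larger samples settle close to the true value:

    import random
    import statistics

    random.seed(0)

    TRUE_SUPPORT = 0.48  # hypothetical share of voters backing the candidate (made up for illustration)

    def poll(sample_size: int) -> float:
        """Estimate support from one simulated random sample of voters."""
        return sum(random.random() < TRUE_SUPPORT for _ in range(sample_size)) / sample_size

    # Repeat each poll many times to see how much the estimates wobble around the true value.
    for n in (20, 200, 2000):
        estimates = [poll(n) for _ in range(1000)]
        print(f"sample size {n:5d}: estimates swing by roughly +/- {statistics.stdev(estimates):.3f}")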

The second is whether the sample resembles the population. Roughly put, a good sample recreates the breakdown of the population in miniature (in terms of the characteristics relevant to the generalization). In the case of the election polls, the samples would need to match the population in terms of the qualities that impact voting behavior. These would include age, gender, religion, income and so on. A sample taken in a way that makes it unlikely to resemble the population results in what is known as a biased generalization, which is also a fallacy. As an example, if a person wanted to know what all Americans thought about gun control and they polled only NRA members, they would commit this fallacy. It must be noted that whether a sample is biased is relative to its purpose: if someone wanted to know what NRA members thought about gun control, polling NRA members would be exactly the right approach.

Biased samples are avoided in various ways, but the most common approaches are to use a random sample (one in which any member of the population has the same chance of being selected for the sample as any other) and a stratified sample (taking samples from the various relevant groups within the population).
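Here is a rough sketch of the two approaches in Python, using a made-up population divided by region (the numbers are purely illustrative):

    import random

    random.seed(1)

    # A hypothetical, simplified population: each voter has a region and a candidate preference.
    population = (
        [{"region": "urban", "vote": "Clinton"}] * 300
        + [{"region": "urban", "vote": "Trump"}] * 200
        + [{"region": "rural", "vote": "Clinton"}] * 150
        + [{"region": "rural", "vote": "Trump"}] * 350
    )

    def simple_random_sample(pop, k):
        """Every member of the population has the same chance of being chosen."""
        return random.sample(pop, k)

    def stratified_sample(pop, k):
        """Sample each region in (rounded) proportion to its share of the population."""
        sample = []
        for region in {v["region"] for v in pop}:
            stratum = [v for v in pop if v["region"] == region]
            sample.extend(random.sample(stratum, round(k * len(stratum) / len(pop))))
        return sample

    for name, sampler in (("random", simple_random_sample), ("stratified", stratified_sample)):
        s = sampler(population, 100)
        clinton_share = sum(v["vote"] == "Clinton" for v in s) / len(s)
        print(f"{name:10s} sample of {len(s)}: {clinton_share:.0%} for Clinton")

The stratified sample guarantees that each region appears in proportion to its size; the simple random sample merely makes that likely.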

The professional pollsters presumably took steps to ensure their samples resembled the overall population, hopefully using random samples, stratified samples and other methods. However, things can still go wrong. In regard to a random sample, there are obviously practical factors that preclude a truly random sample. Also, even a genuinely random sample can fail to resemble the population. For example, imagine you have a mix of 50 plain M&Ms and 50 peanut M&Ms. If you pulled out 25 at random, it would not be shocking to end up with noticeably more plain than peanut (or vice versa) in your sample. So, these random samples could have gotten things wrong.
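A quick Python sketch of the M&M example: drawing 25 at random from a 50/50 bag quite often yields a noticeably lopsided sample, even though nothing has gone wrong with the randomness:

    import random

    random.seed(7)

    # The bag from the example: 50 plain and 50 peanut M&Ms.
    bag = ["plain"] * 50 + ["peanut"] * 50

    # Draw 25 at random many times and see how often the sample is noticeably lopsided.
    draws = 10_000
    plain_counts = [sum(piece == "plain" for piece in random.sample(bag, 25)) for _ in range(draws)]
    lopsided = sum(1 for c in plain_counts if c <= 10 or c >= 15) / draws

    print(f"Average plain M&Ms per draw of 25: {sum(plain_counts) / draws:.1f}")
    print(f"Draws with 15 or more of one kind: {lopsided:.0%}")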

In terms of a stratified sample, there are all the usual problems of pulling out the sample members for each stratum as well as the problem of identifying all the strata that are relevant. It could be the case that the polls did not get the divisions in American voters right and this biased the sample, thus throwing off the results.

Polls involving people also obviously require that people participate, that they honestly answer the questions, and that they stick to that answer. One concern that has been raised is that since the polls are conducted by the media and people who supported Trump tend to hate and distrust the media, it could be that many Trump supporters refused to participate in the polls, thus skewing the results in Hillary’s favor. A second concern is that people sometimes lie on polls—often because they think they should give the answer they believe the pollster wants. A third concern is that people give an honest answer at the time, then change their minds later. All of these could help explain the disparity between the polls and the results.

Conspiracy theorists could also claim that the media was lying about its results in order to help Hillary, presumably reasoning that if voters thought Trump was going to lose they would either vote for Hillary to be on the winning side or simply stay home because of a lack of hope. As with all conspiracy theories, the challenge lies in presenting evidence for this.

And that is how the polls might have gone wrong in predicting Hillary’s victory.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter
