Have you heard of President Alfred Landon? Probably not, which is odd, because in 1936 nearly everyone, including Alfred Landon himself, was sure you would have. That certainty was the product of the most extensive political poll ever conducted up to that time. As you may have guessed, the poll had some fatal flaws.
The President Who Wasn't
The most infamous example of flawed polling happened in 1936. President Roosevelt was facing reelection, and the governor of Kansas, Alfred Landon, was hot on his heels. Literary Digest launched a campaign of its own: it would conduct the most extensive survey ever done in order to predict the winner. Using telephone directories, club membership lists, and magazine subscription lists, the magazine sent out ten million mock ballots. The results showed Landon as the comfortable favorite, with 57% of the vote. Everyone expected Landon to win. It was just a matter of waiting for the official votes to come in.
Roosevelt crushed Landon, who limped out of the election with 38% of the vote. What had happened? First, Literary Digest's sample suffered from selection bias. Landon drew a disproportionate share of his support from the rich and upper middle class, while Roosevelt had the support of the poor, and in those years, with the country still climbing out of the Great Depression, the poor far outnumbered the rich and middle class. People struggling to cover basic expenses didn't subscribe to many magazines, join prominent clubs, or own telephones. By building its mailing list from those very sources, Literary Digest had selected for Landon supporters. The survey is held up as a famous example of selection bias, and rightly so.
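The mechanism is easy to demonstrate numerically. The toy simulation below uses made-up support and telephone-ownership rates (not the real 1936 figures) to show how polling only telephone owners can flip the predicted winner:

```python
import random

random.seed(0)

# Hypothetical population: 65% support candidate A (the "poor" majority),
# but only 25% of A's supporters own a telephone, versus 80% of B's.
population = []
for _ in range(100_000):
    supports_a = random.random() < 0.65
    phone_rate = 0.25 if supports_a else 0.80
    has_phone = random.random() < phone_rate
    population.append((supports_a, has_phone))

# True support for A across everyone.
true_share = sum(s for s, _ in population) / len(population)

# A "poll" drawn only from the telephone directory.
phone_owners = [s for s, p in population if p]
polled_share = sum(phone_owners) / len(phone_owners)

print(f"true support for A:  {true_share:.1%}")
print(f"phone-poll estimate: {polled_share:.1%}")
```

With these rates, candidate A wins comfortably in the full population, yet the phone-only poll shows A losing badly: the sampling frame, not the electorate, decides the forecast.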
The Nonresponse Bias
But that wasn't all. The survey was also an example of nonresponse bias. Ten million ballots were sent out; only about two and a half million came back. What about the huge majority who never mailed their ballots in? Was their silence pure chance? Most modern surveyors think not. It's hard to prove that a specific group of the population bows out of certain types of survey (proving it would itself require a survey they'd answer), but mail surveys have become notorious for being skewed by exactly this bias.
One study deliberately measured nonresponse bias. Researchers wanted to see whether it could change the results of a survey on seemingly innocuous information, so they sent out a mail survey about drinking habits and then followed up with in-person interviews of the nonrespondents. The results shifted once people were interviewed face to face: those who didn't drink at all tended to ignore the mail survey but would spend a few minutes talking to a person who showed up at their door. We like to assume that nonrespondents have no common factor and therefore don't skew the results. Nonresponse bias proves us wrong, even on something this run-of-the-mill. Now think about the groups of people who would refuse to answer surveys about sex, delicate politics, or their own jobs.
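A small simulation of the drinking-habits scenario makes the effect concrete. The abstention and response rates here are invented for illustration, not taken from the actual study:

```python
import random

random.seed(1)

# Hypothetical rates: 30% of people abstain from alcohol, and abstainers
# answer the mail survey far less often than drinkers do.
N = 100_000
drinkers = [random.random() < 0.70 for _ in range(N)]
responded = [random.random() < (0.50 if d else 0.20) for d in drinkers]

# Ground truth versus what the mail survey alone would report.
true_rate = sum(drinkers) / N
mail_rate = sum(d for d, r in zip(drinkers, responded) if r) / sum(responded)

print(f"true share who drink: {true_rate:.1%}")
print(f"mail-survey estimate: {mail_rate:.1%}")
```

Because nondrinkers reply less often, the mail survey substantially overstates the share of drinkers. Following up with the nonrespondents in person, as the researchers did, is what pulls the estimate back toward the truth.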
Perhaps the people who supported Roosevelt felt politics didn't matter to them until election day. Perhaps they were reluctant to speak up after hearing from the wealthiest citizens that Landon was going to win. Whatever their motivation, some people simply don't step forward when pollsters come asking, and those people might be more influential than we believe. The next time you hear about a survey, consider who might have decided not to answer.