Survey Says: We’re Flying Through A Fog
The media is in survey-reporting overdrive. Where the races stand for this fall and which party is leading or catching up. And why not? It’s got to be better than talking about the issues.
But there is trouble in the land of polls. It’s getting more and more difficult to get accurate results. Layer that on top of the fact that most reporters, commentators and pundits know very little about how polling works and the result is that consumers of news are getting terrible information. So, for the closing days of this election, here is a basic primer on political polling and its problems.
Polling is simply determining what a population looks like by measuring a subset, or sample, of the population. For elections, pollsters have two big problems: 1) getting the right sample and 2) getting the truth out of the respondents. The honesty problem is a tough nut to crack. Most pollsters assume the liars cancel each other out on ballot questions and use subtle language tricks to try to get accurate responses to issue questions.
The challenge with getting the right sample is incredibly vexing and getting more difficult all the time. For any survey to be valid (political or anything else), the results must be drawn from a random sample, otherwise you end up with biased results. You could hardly trust a survey taken just among the first dozen people you see in the White House cafeteria.
Because some groups are harder to reach than others and differ in their willingness to participate, no pollster can get a truly random sample. Instead, pollsters have to build a sample that imitates randomness and representativeness.
Most polls sample from 600 to 900 people. Believe it or not (and you should believe it), a sample of 600 people does a pretty good job of representing as many as 10,000,000 people – as long as it is a random sample. And there's the rub: it is really, really tough to get the sample right.
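The claim that 600 respondents can stand in for 10,000,000 people follows from the standard margin-of-error formula, which depends almost entirely on the sample size, not the population size. A quick sketch (using the usual 95 percent z-score of 1.96 and the worst case of a 50/50 split):

```python
import math

# 95% margin of error for a proportion, with the finite-population
# correction included to show how little population size matters
# once the population is large relative to the sample.
def margin_of_error(n, population, p=0.5, z=1.96):
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

for pop in (10_000, 100_000, 10_000_000):
    moe = margin_of_error(600, pop)
    print(f"n=600, population={pop:>10,}: +/- {moe * 100:.1f} points")
```

Whether the population is ten thousand or ten million, a clean random sample of 600 yields roughly a plus-or-minus 4 point margin of error. The catch, as the article says, is the "clean random" part.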
The demise of telephone land lines has been a disaster for political polling. In the olden days, if you kept calling and calling eventually you would reach the people you needed. With cell phones, the problem is not so much reaching people, but that call screening is much easier, people live in one place but have a phone with an area code from another, and when you reach someone, they won’t stay on the phone for more than a few minutes. One of the top congressional media consultants confided to me that surveys conducted on cell phones simply cannot last more than 6-8 questions.
The cell phone problem is compounded by the general difficulty in reaching younger voters and lower income voters. Retired voters are relatively easy. While older voters turn out more, younger and low income voters do vote and need to be accounted for, which leads to the problem of figuring out who will turn out.
Every pollster has to put together a turnout model. They assign quotas by age, gender, party and ethnicity to try to match projected turnout – all in the hope of building a perfect imitation random sample. In the end, every pollster has to play some hunches about each party’s turnout effort and how motivated (or not) different sub-groups will be. Leading up to and on election day, even very good pollsters will look foolish just because a hunch or two on turnout fails.
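To see why a turnout hunch can swing the headline number, here is a minimal sketch of the re-weighting idea, with entirely hypothetical numbers: each demographic group's responses are weighted so the sample's mix matches the pollster's projected turnout, and a different turnout model yields a different topline.

```python
# Hypothetical shares of who actually answered the phone...
sample_share  = {"18-34": 0.10, "35-64": 0.50, "65+": 0.40}
# ...versus the pollster's hunch about who will actually vote.
turnout_model = {"18-34": 0.20, "35-64": 0.55, "65+": 0.25}

# Weight per group: projected turnout share / raw sample share.
weights = {g: turnout_model[g] / sample_share[g] for g in sample_share}

# Hypothetical support for Smith within each group.
support_for_smith = {"18-34": 0.55, "35-64": 0.45, "65+": 0.38}

raw      = sum(sample_share[g] * support_for_smith[g] for g in sample_share)
weighted = sum(turnout_model[g] * support_for_smith[g] for g in sample_share)
print(f"raw sample: {raw:.1%}, turnout-weighted: {weighted:.1%}")
```

With these made-up numbers, re-weighting moves Smith from 43.2 to 45.2 percent – a two-point swing driven entirely by the turnout assumption, before a single additional voter is interviewed.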
The media know polls can be unreliable, but they don’t really understand why. Pollsters know, which is why they hedge their projections with the infamous “margin of error.” Reporters and commentators attempt to sound intelligent by invoking the margin of error. But the way they report it betrays their ignorance. How many times have you heard a reporter say: “Smith leads Jones 42 to 40 percent with a margin of error of plus or minus 5 percent, so the candidates are statistically tied.” Wrong! Smith is ahead and that’s it. They are not “statistically tied.”
The margin of error is what is known as a confidence interval. What it means is that 95 percent of the time, the true result (what you would get if you could ask every voter in the district) falls within plus or minus the margin of error of the poll’s estimate. So, if Smith has 42 percent and Jones has 40 with a margin of error of plus or minus 5 percent, then there is a 95 percent chance that Smith has anywhere from 37 to 47 percent, and Jones anywhere from 35 to 45 percent — but not an equal chance. The best estimate for Smith is 42 and Jones is 40.
The estimate is essentially the high point of a bell curve (really a normal curve).
Because only a sample was interviewed, it is still an estimate. The number is not exact. So if you imagine a bell curve, the best estimate in our poll is for Smith to have 42 percent. He has less of a chance to have 41 or 43, less still for 40 or 44, and so on. The chance that Smith has less than 37 percent is really, really small. When two candidates are within the margin of error, it does not mean they are tied. It just means the pollster is not quite so certain about the outcome.
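The bell-curve picture above can be made concrete. A minimal sketch, assuming the sampling distribution is normal and that the standard error is the margin of error divided by 1.96 (the 95 percent z-score), using the Smith/Jones numbers:

```python
from statistics import NormalDist

# Poll estimate for Smith and the stated margin of error.
estimate, moe = 0.42, 0.05
std_err = moe / 1.96  # about 2.55 points

# The "bell curve": a normal distribution centered on the estimate.
dist = NormalDist(mu=estimate, sigma=std_err)
print(f"P(Smith's true support < 37%): {dist.cdf(0.37):.1%}")  # ~2.5%
print(f"P(Smith's true support < 40%): {dist.cdf(0.40):.1%}")
```

The curve peaks at 42, and outcomes get less likely the further they sit from the estimate – exactly the “really, really small” chance the article describes for Smith falling below 37.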
Putting together the problems that pollsters face in tracking down voters, getting truthful answers and interviewing the right imitation random sample leaves us with a lot of uncertainty heading into Election Day. There are going to be some surprises, as there always are. But this election has more mystery than most. Unfortunately, being truthful about uncertainty is not very profitable. So, as the days count down to Election Day, take the polls with a grain of salt – even better, take the polls with an entire shaker of salt.