
Why Public Polling Was Wrong But Campaign Reads Were So Right

Christopher Bedford, Former Editor in Chief, The Daily Caller News Foundation

MANCHESTER, N.H. — In the two weeks that led up to Monday’s opening salvos in Iowa, pundits and observers saw something interesting unfold. Sens. Ted Cruz and Marco Rubio had an extra hop in their step, even while the public media polls remained about the same, with Donald Trump winning almost every one conducted. The secret to these teams’ newfound enthusiasm was simple: They knew something we didn’t.

That something, often referred to as “internal polling,” is better described as analytic reads: the more in-depth cousin of what we get in the newspapers.

0ptimus Consulting, the firm Rubio employs for data and analytics, was one of the teams able to tell their candidate what to expect from caucus-goers while Trump was still on the campaign trail mocking internal polls as a waste of money. In emails and phone calls with The Daily Caller, 0ptimus cofounder Brian Stobie pulled back the curtain and explained how his team, working with Rubio’s pollsters, was able to spot, and capitalize on, the Florida senator’s momentum while the media, for the most part, did not.

Timing

Newspapers pull out of polling early. The last public poll conducted before the Iowa Caucuses, from The Des Moines Register, was taken three days before the voting, essentially losing track of half of the final week of campaigning. That’s important because the final week is when most Iowa (and New Hampshire) voters make up their minds. And so the final week was when the technically advanced candidates began to pick up on Rubio’s surge.

“Folks being out of the field is partly a defensive posture taken by public polling companies in the wake of past polling inaccuracies,” Stobie told The Daily Caller. “They leave the field specifically so that people can’t say they were wrong. They can say to that accusation that they were out of the field for the last X days, and that that is when the difference between their reads and the results happened.”

But while the public polling companies stepped aside, the campaigns went to work. Rubio’s team was in the field taking reads until Sunday evening, the day before the contest. “We saw it,” Stobie told TheDC. “I think the Cruz campaign changing ads to target us serves as evidence they saw it too. In other words, it was seeable if you had been in field, and if you had the right technique.”

Technique

Here’s an obvious issue with public polling: trust. All polls and reads rely on it to some extent; the question is to what extent. So even once a public polling firm is on the phone with a potential voter, it takes a whole series of questions to ascertain that person’s party, past voting history and interest in this election. All of this must be done before any questions about opinions can be asked, and all of it rests on trusting the person at the other end of the line.

“I am pretty critical of how public polling approaches the challenge of polling these days,” Stobie told TheDC. “They do it by asking people on phone calls questions whose answers they try to use to help them figure out if that person really plans on turning out or not. So, for example, they might ask how interested you are in an election, and if you say something to the effect of ‘very interested,’ you are asked subsequent questions about your opinions. If, however, you say ‘interested’ or ‘somewhat interested’ you are not included in the sample. The problem is we have lots of proprietary data — and there is a body of evidence from public research, as well — that shows these questions are not good for screening people in or out of samples.”
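To make the contrast concrete, here is a minimal sketch of the kind of question-based screen Stobie is criticizing. The respondents, field names and cutoff are invented for illustration, not drawn from any actual poll.

```python
# Hypothetical question-based likely-voter screen (illustrative only).
respondents = [
    {"name": "A", "interest": "very interested", "candidate": "Rubio"},
    {"name": "B", "interest": "somewhat interested", "candidate": "Trump"},
    {"name": "C", "interest": "very interested", "candidate": "Cruz"},
]

# Only people who self-report as "very interested" make it into the sample;
# everyone else is screened out, regardless of how they behave on caucus night.
sample = [r for r in respondents if r["interest"] == "very interested"]
print(len(sample), "of", len(respondents), "respondents kept")
```

Everything downstream, including the top-line horse-race number, rests on how honest and accurate those self-reported answers were.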

But what’s the workaround? Well, data.

Data

The trick to getting around the questions, and all the trust they entail, is buying your very own voter file. Why ask someone’s party affiliation, voting history and basic interest in the election if you already know before you even make the call?

“When you get a phone call from a survey we are behind, I never ask about your excitement, if you plan to turn out, or any other questions designed to determine if you will turn out,” Stobie said. “I am calling you because I already know you will turn out, or have a strong chance of doing so.”

“We know the absolute best indicator of future turnout is past turnout,” Stobie explained. “And unlike questions asked on the survey that you can fudge or give an aspirational answer to, you can’t lie to a voter file: you either showed up or you did not. So we take these files and dial… [the people] we know have shown up in past similar elections, and ask their opinions.”

“Now if that is all we did, we would be in trouble,” he continued. “For example, in Iowa, turnout far exceeded past turnout. If all we had done was call into those who had turned out in the past, we would have missed a lot of these people who turned out. So we call into the people we know have voted in the past, and we call into a group of people we call the expansion universe. These are individuals we model, using a number of techniques, to have the best chance of turning out even though they have not in the past.”
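Here is a rough sketch of how the two universes Stobie describes might be assembled in code. The file fields, turnout scores and the 0.6 cutoff are hypothetical stand-ins for the campaigns’ proprietary data and models.

```python
# Illustrative voter-file sampling: past voters plus a modeled "expansion universe".
voter_file = [
    {"id": 1, "voted_past_caucus": True,  "modeled_turnout_prob": 0.95},
    {"id": 2, "voted_past_caucus": False, "modeled_turnout_prob": 0.70},
    {"id": 3, "voted_past_caucus": False, "modeled_turnout_prob": 0.10},
]

# Core universe: people whose file shows they turned out to similar elections before.
core = [v for v in voter_file if v["voted_past_caucus"]]

# Expansion universe: no caucus history, but a turnout model scores them as likely
# first-time participants (a simple probability cutoff stands in for the real models).
expansion = [v for v in voter_file
             if not v["voted_past_caucus"] and v["modeled_turnout_prob"] >= 0.6]

# These are the people who get the actual opinion questions.
call_list = core + expansion
```

No screening questions are needed on the call itself; the file and the model have already decided who belongs in the sample.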

But why don’t public polling companies buy their own voter files? The simple answer is cash: it’s expensive as heck. Analytics firms like 0ptimus spend hundreds of thousands of dollars on their data, and a voter file can run $30,000 for a single state. For a public polling company working with a newspaper or a college to track a race, that’s a lot of additional cost without quick revenue.

Are there any public polls worth trusting, then? Sure. But to get an idea, it’s important to look at the method. Reporters and pundits often treat all polls as equal, even though one might be based on thousands of contacts and huge troves of data, and another on calls to a few hundred Iowans.

“Understanding how they let people into their sample tells you a lot about the quality of the poll’s output,” Stobie said. “If you see a question-based sample, take the results with a grain of salt.”

Action

But all the polls and analytic reads in the world won’t do a thing without a strong candidate and a campaign prepared to listen to, and then act on, intel coming from the field. It wasn’t just President Barack Obama’s stronger data and analytics that gave him an edge over Hillary Clinton, Sen. John McCain and Mitt Romney in head-to-head contests; it was the Obama team’s quick capitalization on that intel.

“We spend 20 percent of our time figuring out what is going on accurately, and 80 percent on what to do because of it,” Stobie told TheDC. “That’s where it gets really fun.”

So what’s next?

Exciting stuff.

“The future of polling is results reported in scenarios,” Stobie concluded. “People miss a lot in reducing opinion reads to a single top-line result. In reality, what we are seeing is the opinion profile of the electorate assuming a certain turnout profile of that electorate.”

“Think of it this way,” he continued. “If I tell you that a state is 50/50 between two candidates, but that all of the supporters of candidate A come from the northern part of the state, and all of the supporters of candidate B come from the southern part of the state, and I ask you what you think will really happen, you will say you expect a near-tie. But what if I tell you that a snowstorm is going to hit the northern part of the state the day before the election and snow all the way through the election? Now you are going to guess that the candidate in the southern part of the state will win.”

“What you’ve done, without knowing it, is use a turnout scenario to adjust the opinion read you saw in the poll,” Stobie said. “That is the future of polling, and of course not just for weather events. Putting opinion reads through multiple lenses of turnout scenarios is the next basic step polling has to take. Further, a step beyond that is applying different opinion scenarios… to adjust for late-breaking movement. That is what we are working on and where we think polling is going. It’s going to be a lot less thrilling for general consumption because we won’t be able to say definitively ‘the race is going to end 45/55,’ but it’s going to be a lot more meaningful in terms of functional use.”
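Stobie’s snowstorm example can be run as a toy calculation. The regional splits and the storm’s effect below are invented numbers, used only to show how a turnout scenario re-weights the same 50/50 opinion read.

```python
# Toy turnout-scenario adjustment of a 50/50 opinion read (all figures invented).
regions = {
    "north": {"share_of_voters": 0.5, "candidate_A": 1.0, "candidate_B": 0.0},
    "south": {"share_of_voters": 0.5, "candidate_A": 0.0, "candidate_B": 1.0},
}

def topline(turnout_rate_by_region):
    """Weight each region's opinion read by how many of its voters actually show up."""
    weights = {r: regions[r]["share_of_voters"] * turnout_rate_by_region[r]
               for r in regions}
    total = sum(weights.values())
    share_a = sum(weights[r] * regions[r]["candidate_A"] for r in regions) / total
    return share_a, 1 - share_a

print(topline({"north": 1.0, "south": 1.0}))  # normal turnout: (0.5, 0.5)
print(topline({"north": 0.4, "south": 1.0}))  # storm cuts northern turnout: B leads ~71-29
```

Under normal turnout the read stays 50/50; if the storm knocks northern turnout down to 40 percent of normal, the same opinion data implies roughly a 71–29 win for the southern candidate.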

Follow Bedford on Twitter