Opinion

Is Nate Silver’s value at risk?

Sean M. Davis, COO, Media Trackers

Nassim Taleb famously wrote two books — Fooled by Randomness and The Black Swan — about how incredibly complex prediction models can spectacularly fail if just a few underlying assumptions are incorrect. One of Taleb’s targets was a financial model referred to as “Value at Risk,” or VaR. This model attempted to quantify, using historical measures of volatility as a proxy for risk, the maximum amount of money a firm or portfolio could lose over a certain period of time. Many commentators and analysts now believe that a foolish over-reliance on risk-management models like VaR was partly responsible for the 2008 financial crisis.

One of Taleb’s main points is that humans are desperate to view the world as far more rational and predictable than it actually is. If you doubt that assertion, spend a few minutes talking to an insurance actuary. Or take the sub-prime mortgage crash, for example. Bond traders, investment banks, and credit rating agencies swore up and down that a security filled with sub-prime mortgages — that is, home loans made to individuals with less-than-stellar credit — was somehow AAA-rated because there was no possible way all of the loans would go bad at once. And why did Wall Street believe such an assumption was warranted? Because sub-prime home loans had never before all gone bad at the same time. In short: it wouldn’t happen because it hadn’t ever happened.

So what does this have to do with Nate Silver?

Silver stormed onto the scene in 2008 when, according to his acolytes, he correctly predicted how 49 of 50 states would vote in the presidential election (he missed Indiana). Do not remind his disciples that of the four close states — those with margins of 2.5% or less — Silver correctly forecast only three. And definitely do not remind them that the state polls on their own correctly forecast all but two states (Indiana and North Carolina).

Silver’s key insight was that if you used a simple simulation method known as Monte Carlo, you could take a poll’s topline numbers and its margin of error and come up with a probability forecast based on the poll. The effect of this method was to show that a 50-49 lead in a poll with 1,000 respondents wasn’t really a dead heat at all — in fact, the candidate with 50% would be expected to win two-thirds of the time if the poll’s sample accurately reflected the true voting population.
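To see how simple the calculation is, here is a minimal sketch in Python — my own illustration of the general technique, not anything Silver published, and the function name is mine. It draws each candidate's share from the sampling distribution implied by the poll's size, treats the two draws as independent for simplicity, and counts how often the leader stays ahead. With a 50-49 poll of 1,000 respondents, the leader wins roughly two-thirds of the simulated draws.

```python
import random

def poll_win_probability(leader_share, trailer_share, sample_size, trials=100_000):
    """Crude Monte Carlo on a single poll: how often does the leader stay
    ahead if we resample within the poll's sampling error?"""
    # Normal approximation to each share's sampling error; treating the
    # two shares as independent is a simplifying assumption.
    se_leader = (leader_share * (1 - leader_share) / sample_size) ** 0.5
    se_trailer = (trailer_share * (1 - trailer_share) / sample_size) ** 0.5
    wins = 0
    for _ in range(trials):
        sim_leader = random.gauss(leader_share, se_leader)
        sim_trailer = random.gauss(trailer_share, se_trailer)
        if sim_leader > sim_trailer:
            wins += 1
    return wins / trials

# A 50-49 "dead heat" with 1,000 respondents:
print(poll_win_probability(0.50, 0.49, 1000))  # roughly 0.67
```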

To a political world unfamiliar with mathematical methods that are normally taught in an introductory statistics course, Silver’s prophecy was nothing short of miraculous.

But was it? To find out, I spent a few hours re-building Nate Silver’s basic Monte Carlo poll simulation model from the ground up. It is a simplified version, lacking fancy pollster weights and economic assumptions and state-by-state covariance factors, but it contains the same foundation of state poll data that supports Nate Silver’s famous FiveThirtyEight model. That is, they are both built upon the same assumption that state polls, on average, are correct.
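The Electoral College version simply repeats that idea across states. The sketch below is only an illustration of the approach described above (the actual model was built in Excel, and the state inputs and 3-point polling error here are placeholder assumptions): for each state, draw a simulated margin around its poll average, award the state's electoral votes to the simulated winner, and count how often a candidate clears 270. Like the simplified model described above, it ignores any covariance between state polling errors.

```python
import random

# Placeholder inputs for illustration: state -> (electoral votes,
# poll-average margin for Candidate A, in percentage points).
STATE_POLLS = {
    "Ohio": (18, 2.0),
    "Florida": (29, -1.0),
    "Virginia": (13, 0.5),
}

SAFE_ELECTORAL_VOTES = 243   # assumed votes from states not being simulated
VOTES_TO_WIN = 270
POLL_ERROR_SD = 3.0          # assumed standard error of a state poll average

def simulate_election(trials=50_000):
    """Monte Carlo over state polls: how often does Candidate A reach 270
    electoral votes if each state's true margin equals its poll average
    plus normally distributed polling error (independent across states)?"""
    wins = 0
    for _ in range(trials):
        total = SAFE_ELECTORAL_VOTES
        for electoral_votes, margin in STATE_POLLS.values():
            if random.gauss(margin, POLL_ERROR_SD) > 0:
                total += electoral_votes
        if total >= VOTES_TO_WIN:
            wins += 1
    return wins / trials

print(f"Candidate A win probability: {simulate_election():.1%}")
```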

After running the simulation every day for several weeks, I noticed something odd: the winning probabilities it produced for Obama and Romney were nearly identical to those reported by FiveThirtyEight. Day after day, night after night. For example, based on the polls included in RealClearPolitics’ various state averages as of Tuesday night, the Sean Davis model suggested that Obama had a 73.0% chance of winning the Electoral College. In contrast, Silver’s FiveThirtyEight model as of Tuesday night forecast that Obama had a 77.4% chance of winning the Electoral College.

So what gives? If it’s possible to recreate Silver’s model using just Microsoft Excel, a cheap Monte Carlo plug-in, and poll results that are widely available, then what real predictive value does Silver’s model have?

The answer is: not all that much beyond what is already contained in state polls. Why? Because the FiveThirtyEight model is a complete slave to state polls. When state polls are accurate, FiveThirtyEight looks amazing. But when state polls are incorrect, FiveThirtyEight does quite poorly. That’s why my very simple model and Silver’s very fancy model produce remarkably similar results — they rely on the same data. Garbage in, garbage out.

So what happens if state polls are incorrect?

In 2008, the RealClearPolitics polling average was incorrect in two states — Indiana and North Carolina. Silver botched Indiana but correctly called North Carolina.

In 2010, it was much worse. State polling averages were wrong in Alaska (they said Joe Miller would be elected), wrong in Colorado (they said Ken Buck would be elected), and embarrassingly wrong in Nevada (they said Harry Reid would be involuntarily retired). FiveThirtyEight incorrectly forecast the winner in each of those states, perfectly reflecting the inaccurate information contained in the state polls.

Thus, of the five major state races in which polls were wrong over the last four years, Silver only got one right. I’m no baseball scout, but batting .200 when it counts won’t get you into the big leagues, let alone the All-Star game.

Silver has made a big deal this election cycle about how state polls are generally more accurate at forecasting the winner of the Electoral College and the popular vote than are national polls. That may well be true, although a Monte Carlo simulation of the final week’s worth of Florida polls in 2000 suggests otherwise.

But assuming it is true, how much data from the era of modern polling do we actually have? Maybe three presidential elections’ worth, going back to 2000. Or, if you want to be really generous, maybe eight if you go back to 1980.

Wall Street had exponentially more data when it incorrectly bet on the housing market than we have today when it comes to presidential election polling data. Although we pundits may think a handful of elections qualifies as a robust data set, political polling data simply pales in comparison to the wealth of data we have on the stock market, economic output, or life expectancy, going back to the example of the insurance actuary. But is the science settled on how Stock X will perform tomorrow, or what precise economic growth we’ll see next quarter, or exactly how long each of us will live? Of course not.

This takes us back to Nassim Taleb’s key insight: despite our best efforts, we humans are just not that good at predicting the future. The main assumption underlying Nate Silver’s Obama bet this year is that the state polls will be correct. Maybe they will be, even though three states were wrong in 2010, two states were wrong in 2008, one state was wrong in 2004 (WI), and a very important state in 2000 was incorrectly called by most pollsters.

Nate Silver’s model could very well forecast every state correctly next week, assuming the polls accurately reflect the true voting population. But if they’re wrong, it’ll be Nate Silver whose value is at risk. If that happens, I have a great title for his next book: “The Snake and the Oil.”

Sean M. Davis is the COO of Media Trackers. He received an MBA in finance from The Wharton School in 2010 and previously served as CFO of The Daily Caller.