Ezra Klein has an illuminating conversation with himself at Wonkblog today on a topic familiar to all political junkies: the accuracy of presidential election forecasting models. He asked three leading political scientists to build one for him (a fairly simple model using just three variables), and it turns out it’s far from infallible:
The question is what happens when you add contemporary context back in. The model, for instance, assumes that voters will have the same reaction to slow economic growth in 2012 that they would have had in 1996 or 1964. But the past four years have seen the worst financial crisis since the Great Depression. Voters might be much less willing to forgive slow growth. Or, since many place the bulk of the blame for the crisis on George W. Bush, perhaps they’ll grade Obama on a kind of curve. The model can’t tell us.
And, sadly, neither can the past. Since 1948, there have been only 16 presidential elections. Which is another limit of models like this one: a relatively thin data set spread over a relatively long time. It would be nice to have more examples of presidential elections conducted during once-in-a-generation crises, in the Internet era, with serious third parties, with African American incumbents, with Mormon challengers, etc. And as Nate Silver, a statistician and blogger at the New York Times, points out, these models often do much worse when tested against new elections that are not in the original sample.
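Silver’s point about out-of-sample performance is easy to see in miniature. Here is a minimal sketch in Python using entirely synthetic numbers (it is not the actual three-variable model the political scientists built, and the predictor names are hypothetical): fit a regression to 16 fake “elections,” then compare the comfortable in-sample fit to the leave-one-out error, which is the honest analogue of testing against elections outside the original sample.

```python
import numpy as np

# Illustrative only: 16 synthetic "elections" with three made-up predictors
# (think growth, approval, incumbency) and a noisy vote share.
rng = np.random.default_rng(0)
n, k = 16, 3
X = rng.normal(size=(n, k))
true_beta = np.array([2.0, 1.0, 0.5])
y = 50 + X @ true_beta + rng.normal(scale=2.0, size=n)  # vote share, in points

def ols_fit(X, y):
    """Ordinary least squares with an intercept, via numpy's lstsq."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# In-sample fit: looks reassuring, as it usually does with few points.
coef = ols_fit(X, y)
in_sample_rmse = np.sqrt(np.mean((y - predict(coef, X)) ** 2))

# Leave-one-out: refit 16 times, each time predicting the held-out election.
loo_errors = []
for i in range(n):
    mask = np.arange(n) != i
    c = ols_fit(X[mask], y[mask])
    loo_errors.append(y[i] - predict(c, X[i:i + 1])[0])
out_of_sample_rmse = np.sqrt(np.mean(np.square(loo_errors)))

print(f"In-sample RMSE:     {in_sample_rmse:.2f} points")
print(f"Leave-one-out RMSE: {out_of_sample_rmse:.2f} points")
```

The out-of-sample number comes out worse than the in-sample one, which is exactly the worry with a model built on a data set this thin.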
Ezra gets into the whole topic by stipulating that the “tornado of idiocy that seems to accompany modern presidential campaigns” probably has little or nothing to do with the likely outcome. He seems to come out pretty much where I am on the subject of “what matters” in elections: somewhere in the vast middle between those who think it’s a remorseless process of objective indicators that make actual campaigns irrelevant, and those who think key moments in a campaign frequently have “game-changing” implications.