At some point over the weekend, I’ll try to find time to read all the stuff pollsters are saying about why they were so spectacularly wrong about the UK elections. Many of them are probably asleep with their cell phones silenced after a long, brutal night and morning. Others may just be awakening with hangovers from many sorrows thoroughly drowned.
But in the meantime, FiveThirtyEight’s Nate Silver, who is not a pollster but is a major interpreter of polls, and whose own projections rely heavily on them, weighed in last night with a warning that this may not be an aberration:
[I]f the polls have a poor election, it won’t be the first time. In fact, it’s become harder to find an election in which the polls did all that well.
Consider what are probably the four highest-profile elections of the past year, at least from the standpoint of the U.S. and U.K. media:
* The final polls showed a close result in the Scottish independence referendum, with the “no” side projected to win by just 2 to 3 percentage points. In fact, “no” won by almost 11 percentage points.
* Although polls correctly implied that Republicans were favored to win the Senate in the 2014 U.S. midterms, they nevertheless significantly underestimated the GOP’s performance. Republicans’ margins over Democrats were about 4 points better than the polls in the average Senate race.
* Pre-election polls badly underestimated Likud’s performance in the Israeli legislative elections earlier this year, projecting the party to win about 22 seats in the Knesset when it in fact won 30. (Exit polls on election night weren’t very good either.)
At least the polls got the 2012 U.S. presidential election right? Well, sort of. They correctly predicted President Obama to be re-elected. But Obama beat the final polling averages by about 3 points nationwide. Had the error run in the other direction, Mitt Romney would have won the popular vote and perhaps the Electoral College.
Why the series of misses? Nate mentions a few possibilities:
Voters are becoming harder to contact, especially on landline telephones. Online polls have become commonplace, but some eschew probability sampling, historically the bedrock of polling methodology. And in the U.S., some pollsters have been caught withholding results when they differ from other surveys, “herding” toward a false consensus about a race instead of behaving independently.
All these are big and legitimate concerns. But probably the bigger problem is that such issues will be seized upon by anti-data zealots and “game-change” journalists (think of them as the old-fart baseball scouts in Moneyball who knew a good player when they saw one) seeking to discredit any objective measurement of public opinion, or any analysis based upon it. After all, polls are “wrong,” right? So let’s just wing it with our instincts, prejudices, snail’s-eye observations from the campaign trail (or bar), insider opinions, and of course, first-person anecdotal takes on the mood of the electorate.
That’ll be better, won’t it?