Yesterday Nate Silver tweeted:
Retweeting individual poll results is probably disinformative on balance.
Which I gladly retweeted. And then came the new Pew survey. I’m sorry, I mean THE NEW PEW SURVEY. The collective reaction of political commentators can be summarized with the frontpage of the Huffington Post:
Let me say why this constant attention to the latest poll—if not quite living up to the deliberate hyperbole of this post’s title—is pathological nevertheless.
1) Uncertainty, uncertainty, uncertainty. Poll results vary for random reasons—that is, because of sampling error. Everyone “knows” this, but to see headlines like HuffPo’s, you’d think that Pew was The One Blessed Holy Gospel Account of American Public Opinion. However, we don’t actually know the true proportions of Obama and Romney supporters in the public. There is no way to know definitively which poll is “the truth.” We have some sense of which polls have smaller or larger house effects in this cycle—i.e., which ones are farther from or closer to the average of the polls—but, as Drew Linzer pointed out, the estimates of house effects also come with substantial uncertainty themselves. So not only do we not know Obama’s or Romney’s true share of the vote at this moment, we don’t know with much confidence how much any pollster might be systematically overestimating or underestimating either candidate’s share. For that reason, if you just want to know where the horserace stands, look at the average of the polls and ignore the individual polls.
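The arithmetic behind “look at the average” is straightforward. Here is a minimal sketch using the standard margin-of-error formula for a proportion; the sample sizes and the 50/50 race are illustrative assumptions, not figures from any actual poll:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A single poll of 1,000 respondents in a hypothetical 50/50 race:
single = margin_of_error(0.5, 1000)      # ~ +/- 3.1 points

# Averaging k independent polls of the same size behaves roughly like
# one poll with k times the sample, shrinking the error by sqrt(k):
k = 10
pooled = margin_of_error(0.5, 1000 * k)  # ~ +/- 1.0 point

print(f"single poll: +/-{single:.1%}, average of {k} polls: +/-{pooled:.1%}")
```

In practice polls are not fully independent—house effects and shared methods correlate their errors—so the averaging gain is smaller than this idealized sketch, but the direction of the effect is the same.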
2) The polls that are the most “surprising” are often the least diagnostic. One possible reason to pay attention to Pew is that it appears to have a Democratic lean, so if it finds Romney on top, that must be newsworthy. Let’s assume Pew has a Democratic lean. If so, it’s possible that the Pew result is significant in this way, but I think the opposite is more likely: results like this one are outliers. Indeed, this Pew survey is out of line with most other polls conducted since the debate. So rather than being more diagnostic, it is less. (Note that this is not a slam on Pew, which is a very scrupulous pollster. The kinds of random variation I’m talking about have nothing to do with the quality of a pollster.)
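A quick simulation makes the outlier point concrete. Suppose twenty perfectly unbiased polls of 1,000 respondents each survey the same race (the true share, poll count, and sample size here are purely illustrative assumptions). Sampling error alone guarantees that some of them will look “surprising”:

```python
import random

random.seed(42)
TRUE_SHARE = 0.50   # hypothetical true candidate share, for illustration only
N = 1000            # respondents per poll
POLLS = 20          # number of polls in the field

# Simulate 20 unbiased polls: each respondent independently supports
# the candidate with probability TRUE_SHARE.
results = [sum(random.random() < TRUE_SHARE for _ in range(N)) / N
           for _ in range(POLLS)]

# The single most "newsworthy" poll is just the largest random deviation.
most_surprising = max(results, key=lambda r: abs(r - TRUE_SHARE))
print(f"true share: {TRUE_SHARE:.1%}, most extreme poll: {most_surprising:.1%}")
```

The headline-grabbing poll in a batch like this is, by construction, the one farthest from the truth—which is exactly why the most surprising result is often the least diagnostic.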
3) Tweeting every individual poll encourages the amateur Talmudists. Once people’s attention is drawn to some individual poll, they start picking it apart. How many Democrats and Republicans were in the sample? What percent of blacks supported Obama? Did it sample registered voters or likely voters? How could Obama be ahead by this much in such-and-such state but behind by that much in some other state? IS THE POLL SKEWED OR UNSKEWED? And so on and on and on. Such questions might occasionally illuminate flaws in a poll, even though the party composition of the sample is not some gold standard for a poll’s quality. But the problem is this: the kinds of “lessons” people learn by delving into a poll are even more tenuous, because the uncertainty that applies to the toplines applies in greater measure to the cross-tabs. What is the margin of error on any nationally representative poll’s estimate of the views of men, women, whites, blacks, Jews, left-handed college professors, Southerners, etc., etc.? It is large. Yes, everyone “knows” this too. But that doesn’t stop the Talmudists from weighing in on a poll’s quality using criteria that are sometimes misguided and with a degree of certainty that is almost always unwarranted.
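A back-of-the-envelope sketch shows just how large the cross-tab uncertainty gets. The total sample size and the subgroup’s share of the sample below are assumed for illustration, not taken from any particular survey:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

total_n = 1500                   # respondents in the full (hypothetical) sample
topline = moe(0.5, total_n)      # ~ +/- 2.5 points

# A cross-tab rests on only a slice of the sample. If, say, 12% of
# respondents fall in a given subgroup, its estimate rests on ~180 people:
subgroup_n = int(total_n * 0.12)
crosstab = moe(0.5, subgroup_n)  # ~ +/- 7.3 points

print(f"topline: +/-{topline:.1%}, cross-tab: +/-{crosstab:.1%}")
```

A swing of several points in a subgroup number between two polls is therefore entirely consistent with no real change in opinion at all.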
4) Tweeting every individual poll unnecessarily reduces our confidence in polls. The polls this year are well-behaved. Let me repeat: they are well-behaved. Go read Drew Linzer’s post at the link. This is far from the impression that people have, however. Why? When commentators fixate on every individual poll—and especially on the polls that are far from the average and therefore surprising—they give the misleading impression that the polls are “all over the place” or “crazy” or “inexplicable.” And this makes some voters suspicious. They think that polls suck. They think that pollsters are manipulating the results. After all, why are they getting such different answers? Here’s one example from someone replying to my earlier tweet:
Aug. Pew poll was 6 pts out of whack pro-Obama. This one 6 pts. out too. Like ump making up for blown call.
Of course, that’s not what Pew was doing. But this comment manifests the skepticism that pollsters confront. Although pollsters shouldn’t be treated like oracles, they don’t deserve these conspiracy theories either.
I’m not suggesting that everyone who tweets a poll result is unaware of these points. But I am also well aware of the incentives of news organizations to find the newest, shiniest thing and shout about it.
So even though I don’t expect this little blog post to make any difference, we’d truly have a much better conversation about this election if people would stop fixating on every single poll.
[Cross-posted at The Monkey Cage]