Poll Spin Madness

With the polls coming at us more rapidly every day, each greeted with a blare of trumpets from one camp or the other, it’s helpful to keep a few interpretive rules in mind, particularly once the spinning starts. One is to stay focused on poll averages rather than individual results, and to avoid the temptation of excluding numbers (or ignoring pollsters) you don’t like. Another is to look at the trends as reflected in a given firm’s results over time, which will probably be more reliable than the absolute numbers. Still another is to be alert for sketchy or unconventional methodologies.

But the rule that’s probably easiest to forget is to pay attention to sample sizes. Here’s a pertinent warning from just last night by Nate Silver about the “battleground state” subsamples we are beginning to see regularly a day or so after national polls come out:

Monday’s Washington Post poll had Mr. Obama performing better in what it termed swing states than in the country as a whole; the Gallup poll showed just the opposite.

This data is largely useless. A typical national poll might interview 1,000 people, of which perhaps 250 or 300 will live in swing states, depending on exactly how it defines them.

The margin of error on a 250- or 300-person subsample is enormous: about plus or minus six percentage points. (The swing state sample from the Gallup poll was somewhat larger, but still small as compared to the 3,000 or so voters that it interviews for each instance of its national tracking poll.)

In contrast, in the state polls, there are often thousands of people interviewed in battleground states on a given day. (There were about 2,500 on Monday, for example, despite its being a relatively light day for state polling.)

There is just no reason at all to care about what 250 or 300 people say when you can look at what 2,500 or 3,000 do instead….
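Silver’s “plus or minus six percentage points” figure is what the standard margin-of-error formula gives for samples that small. A quick sketch (my own illustration, not from the article), assuming the worst-case proportion p = 0.5, a 95% confidence level, and simple random sampling with no design effects:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random sample.

    p=0.5 is the worst case (largest error); z=1.96 is the 95% critical value.
    Real polls have design effects that make the true error somewhat larger.
    """
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (250, 300, 1000, 3000):
    print(f"n = {n:>4}: ±{margin_of_error(n):.1f} points")
# n = 250 or 300 gives roughly ±6 points, as Silver says;
# n = 3000 (Gallup's tracking poll) gives under ±2 points.
```

The formula makes the article’s point directly: error shrinks only with the square root of sample size, so a 250-person subsample is noisier than a 1,000-person national poll by a factor of two, not four.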

And then there’s the whole thorny issue of how you define “battleground states” to begin with, and how you keep Florida’s big numbers from skewing everything.

In any event, poll findings involving small subsamples can be seductive, giving pollsters second-day coverage of their surveys and leading partisans to see big swings in key demographics that seem to explain everything about national trends. But in many cases, you might as well just be making up a nice fantasy to soothe yourself to sleep.

Ed Kilgore

Ed Kilgore is a contributing writer to the Washington Monthly. He is managing editor for The Democratic Strategist and a senior fellow at the Progressive Policy Institute. Find him on Twitter: @ed_kilgore.