One lingering issue left over from the Eric Cantor defeat is how, exactly, Cantor pollster John McLaughlin wound up being so catastrophically wrong in his pre-election survey showing the House majority leader heading toward a landslide win.
At The Upshot, political scientist Lynn Vavreck doesn’t exactly answer the question (other than defending “the science of polling” from McLaughlin’s “error of judgment”). But she does come close to saying that the poll couldn’t have been more wrong had the pollster been tossing out numbers randomly. This reinforces the suspicion that McLaughlin’s poll had purposes other than accurately predicting the outcome (say, trying to influence the outcome by pre-spinning it).
But Vavreck says something else that should be of interest to those Democrats who are trying very hard to bend traditional turnout patterns this November:
It’s getting harder for pollsters to identify the right set of people from which to draw a sample as campaigns in low-turnout elections become better at mobilizing voters, many of whom pollsters hadn’t anticipated voting. It’s a bigger challenge in primaries and midterm elections as far fewer people participate in these contests relative to presidential races. Using lists of registered voters and their turnout histories may have been a way for pollsters to gain efficiency in identifying potential primary voters in the past, but that efficiency is now looking like a liability.
If the DSCC’s Bannock Street Project does indeed represent a high-water mark in the ability of campaigns to mobilize voters in a midterm election, then you can expect some inaccurate polls this fall, especially from pollsters who use “likely voter” screens based on past voting history or some arbitrary benchmark (usually called “weighting”) for various groups’ share of the electorate. And a good thing to watch for is whether those kinds of polls diverge from the ones that depend more on voters’ own expressions of their interest in voting.