It’s great when education reporters use available public polling data to help readers understand what others think about key issues, but it can be problematic when they rely on one poll’s results that don’t match what other polls have found.
That’s what happened in this recent AP story, which has been published in various forms by a number of news outlets. It says:
“A recent Gallup Poll found 55 percent of those questioned opposed linking teacher evaluations to their students’ test scores. Among those with children in public schools, opposition was stronger, at 63 percent.”
But that’s not quite the entire story.
The reality is that results vary from poll to poll, sometimes slightly and sometimes widely (as is the case with the PDK/Gallup poll cited above, according to this recent MinnPost article).
Education Next found 51 percent in favor on a similar question about “basing part of the salaries of teachers on how much their students learn,” and 50 percent among parents.
So what is a reporter to do? Reporters may not always want to dig into the differing results, much less explain them. However, USC’s Morgan Polikoff suggests a helpful alternative:
“The public’s views on teacher tenure are complicated. A recent PDK/Gallup poll found XX% of voters are opposed to linking teacher evaluations to student test scores, but previous national polls using different wordings have found that XX and YY.”
Another approach would be to contextualize one poll result with others, contemporary or historical, so that readers know whether a result is higher or lower than it has been in the past, or whether it matches what other polls have found on the same topic.
Just looking at the recent EdNext and PDK/Gallup results, there are several other places where the two polls diverge: Education Next finds “a clear majority opposed to parental opt out,” while PDK/Gallup finds the public more or less evenly split. PDK finds nearly two-thirds in favor of charter schools, while EdNext finds only about half the public so inclined. EdNext finds an evenly divided public on Common Core, while PDK reports a majority opposed.
Where do the differences come from, anyway? “These differences almost certainly arise from differences in the way the question and response options are worded,” according to this recent Education Next story penned by Paul E. Peterson and Martin R. West (Why Do Two Good Polls Get Different Results?). “Finding a neutral question to capture public sentiment accurately is easier said than done.”
Polikoff thinks that question wording plays a role, but so might technical issues, as well as the number of questions and the order in which they are asked.
At a certain point, someone’s going to have to start rating education polls on how well they match each other, their technical attributes, and so on, as FiveThirtyEight already does for political pollsters.
In the meantime, reporters are going to have to address poll results that may or may not match what others have found in the past. PDK/Gallup plans to release an additional set of results this month, following up on those issued in August.