Joanna Adams sent me some questions regarding the way journalists report on elections. I will post her questions and my replies. I don’t think my replies are particularly brilliant. But I think this might be of interest to you, if only to see the questions that this journalist was asking.


1. There is a “standard journalistic manner” for using statistics in news stories: fact, source, and conclusion. In what ways do journalists get statistics wrong, and misinterpret the information?

2. What are the dangers of journalists overstating or understating statistics, particularly if they come from less credible sources?

3. The media relies very heavily on political polling to carry the news narrative – is there a more scientific or accurate way to get a representation of what the public wants?

4. Why do journalists prefer a horse-race style of reporting on elections, as opposed to writing about policy?

5. What are some of the biggest misconceptions reporters have when writing about a poll’s margin of error?

6. Should statistics, or numbered results, be used in a headline?

7. There is a lot of pressure to crown a victor in political reporting, and oftentimes the media gets it wrong. How does this impact the public’s perception of journalists, and what can reporters do to avoid “getting it wrong?”

8. Are journalists to blame for reporting misleading information, or are they victims of the data presented to them?

9. If, as you write, “your conclusion is only as good as your data,” how does this impact cyclical quantitative inaccuracies?

My reply:

1. Overall I think journalists use statistics in a reasonable way. My impression is that journalists are often too accepting of low-stakes stories but tend to be properly skeptical about claims with serious policy implications.

2. One problem with journalists being fooled by bad statistics is that it encourages some scientists to make sensational claims in order to get favorable press.

3. I think that polling is an excellent way of learning about public opinion, as long as you realize that public opinion can change.

4. One rational reason for journalists to prefer horse-race reporting is that most newspaper readers already have a strong preference for one party or the other. The majority of readers don’t have much motivation to hear the facts about their candidate, but they’re still very interested in who will win.

5. The biggest misconception regarding the margin of error is to forget that a poll is a snapshot, and opinion can change.
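For readers unfamiliar with the mechanics: the margin of error usually quoted with a poll is the textbook sampling-error formula for a proportion, which captures only the uncertainty from random sampling, not the possibility that opinion shifts after the poll is taken. A minimal sketch (the numbers here are hypothetical, not from any poll discussed in this post):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, candidate at 50%
moe = margin_of_error(0.5, 1000)
print(f"+/- {100 * moe:.1f} percentage points")  # roughly +/- 3.1 points
```

Note that this figure says nothing about nonresponse, question wording, or opinion change between the poll and the election, which is exactly the snapshot point above.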

6. I would be happy to see numbers in a headline. I prefer a number such as 65% to a vague word such as “most.”

7. I don’t understand this question. Who is the victor here?

8. I think sometimes journalists get fooled, other times they are willing accomplices to misinformation. The tricky thing is that some pollsters are honest while others appear to be cynical manipulators of the truth.

9. I don’t understand this question. What is a cyclical quantitative inaccuracy?

[Cross-posted at The Monkey Cage]


Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University.