Fun With Statistics

I was rummaging through some old files last night and came across something I wrote several years ago about the top ten mistakes that infest day-to-day reporting of numerical and statistical information. Now that I have a blog, I can share these with the world. Here they are:

  1. What's the real income? Money comparisons over time should always be reported in real, inflation-adjusted terms or else they're worthless. In nearly all cases, they should be reported in per capita terms as well.
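
The adjustment in item 1 is simple arithmetic. Here is a minimal sketch in Python; all of the figures (incomes, price index levels, populations) are made up purely for illustration, not real data:

```python
# A minimal sketch of adjusting a money comparison for inflation and
# population growth. Every number below is hypothetical.

nominal_income_1990 = 4.9e12          # total income, current 1990 dollars
nominal_income_2000 = 8.4e12          # total income, current 2000 dollars
cpi_1990, cpi_2000 = 130.7, 172.2     # illustrative price index levels
pop_1990, pop_2000 = 250e6, 282e6     # illustrative populations

# Deflate the 2000 figure into 1990 dollars, then divide by population.
real_income_2000 = nominal_income_2000 * (cpi_1990 / cpi_2000)
real_per_capita_1990 = nominal_income_1990 / pop_1990
real_per_capita_2000 = real_income_2000 / pop_2000

growth = real_per_capita_2000 / real_per_capita_1990 - 1
print(f"Real per-capita growth: {growth:.1%}")
```

The headline nominal comparison (8.4 vs. 4.9 trillion) would suggest roughly 70% growth; deflating and dividing by population shrinks it to about 15%.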

  2. What's the survey error? Statistical sampling error in opinion polls is trivial compared to the error from other sources. Things such as question wording, question order, interviewer bias, and non-response rates, not to mention Bayesian reasons for suspecting that even the standard mathematical confidence interval is misleading, give most polls an accuracy of probably no more than ±15%. Example: a couple of years ago a poll asked respondents if they had voted in the last election. 72% said yes, even though actual voter turnout in that election had been only 51%. Most polls and studies are careful to document the statistical sampling error, but who cares about a 3% sampling error when there might be 21 points of error from other causes?
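
To see how small the advertised sampling margin is next to the turnout gap above, here is a quick sketch using the textbook normal-approximation margin for a proportion (sample size of 1,000 is my assumption for a typical national poll):

```python
import math

# Textbook sampling margin of error for a proportion p at sample size n,
# using the normal approximation (z = 1.96 for 95% confidence).
def sampling_moe(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                       # assumed size of a typical national poll
moe = sampling_moe(0.5, n)     # worst case is p = 0.5
print(f"Sampling margin of error: ±{moe:.1%}")   # about ±3.1%

# The turnout example from the text: 72% claimed to have voted, 51% did.
non_sampling_gap = 0.72 - 0.51
print(f"Gap from over-reporting alone: {non_sampling_gap:.0%}")
```

The fine print honestly reports the ±3 points; the 21-point over-reporting gap never appears in the fine print at all.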

  3. Does A really cause B, or might there be another explanation? If A and B are correlated, A might indeed cause B, but it's also possible that it's just a coincidence or, even more likely, that some third factor is causing both A and B. This problem is especially rampant in social science studies, where virtually everything is related to everything else and even well-designed multivariate analysis is extremely difficult.

  4. Is it the first study? Even putting aside other errors, a 95% confidence level means that about 1 in 20 studies will turn up a spurious result by chance alone. We only believe that smoking causes cancer because there have been hundreds of confirming studies. Always be cautious about accepting the first study on any subject.
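
The 1-in-20 point is easy to check by simulation. This sketch fakes 1,000 studies of a treatment with no real effect at all (under the null hypothesis, p-values are uniform on [0, 1]) and counts how many clear the usual 5% bar:

```python
import random

# Simulate 1,000 studies of an effect that does not exist and count how
# many look "significant" at the conventional 0.05 threshold anyway.
random.seed(1)
threshold = 0.05
# When there is no real effect, p-values are uniformly distributed.
p_values = [random.random() for _ in range(1_000)]
false_positives = sum(p < threshold for p in p_values)
print(f"{false_positives} of 1,000 null studies looked 'significant'")
```

Roughly fifty of the thousand null studies come up "significant," which is exactly why a single unreplicated study deserves skepticism.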

  5. Maybe it really was just a freak chance. "That can't be a coincidence" is usually the result of not understanding how many rare things are nonetheless likely to happen once or twice in a population of 300 million. In a large country, there will always be some cities, or some groups, or some people, that are way above average for, say, cancer. The flip side of this is that something that seems dangerous might not really be. 100 kidnappings a year might seem like a lot, but in reality those are odds of one in three million. That's less likely than the odds of two people randomly picking out the same word from an encyclopedia.
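
Both halves of item 5 reduce to quick arithmetic. The sketch below checks the kidnapping odds with the article's round numbers, then runs a toy simulation (all figures invented) showing that when 1,000 identical towns share the same disease rate, some towns still land far above average by luck alone:

```python
import random

# The article's round numbers: 100 cases in a population of 300 million.
print(f"1 in {300_000_000 // 100:,}")   # 1 in 3,000,000

# Toy cluster illustration: 1,000 hypothetical towns of 1,000 people,
# each person with the same 1-in-100 chance of a disease (expected: 10
# cases per town). A few towns will still look alarmingly "above average."
random.seed(0)
expected = 10
counts = [sum(random.random() < 0.01 for _ in range(1_000))
          for _ in range(1_000)]
print(f"Expected cases per town: {expected}, worst town: {max(counts)}")
```

The worst town comes in at roughly double the expected count, with no cause beyond chance, which is what a reporter would write the scary headline about.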

  6. Compared to what? A 5% rise might be good or might be bad depending on whether everything else is growing at 0% or 10%. Which is it?

  7. Is there contradictory data? Two types of publication bias are involved here: researchers often don't publish null results, and newspapers don't bother reporting them when they are published.

  8. Statistically speaking, why did the headline number go up (or down)? Did everyone's income go up 5%, or was it just that Bill Gates' income went up 1000%? Distribution is as important as central tendency. Check mean vs. median. The value of statistics is to summarize a large mass of data, but it's important not to summarize too much.
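
The mean-vs-median point is easy to demonstrate with made-up incomes and Python's standard statistics module:

```python
import statistics

# Five invented incomes clustered around $50,000.
incomes = [40_000, 45_000, 50_000, 55_000, 60_000]
print(round(statistics.mean(incomes)), statistics.median(incomes))
# Mean and median agree: 50000 and 50000.

# Now add one Bill Gates-sized income.
incomes.append(1_000_000_000)
print(round(statistics.mean(incomes)), statistics.median(incomes))
# The mean explodes past $160 million; the median barely moves.
```

One outlier drags "average income" into the stratosphere while the typical person's income is essentially unchanged, which is why the headline should say which summary it used.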

  9. Was the sample large and unbiased? For example, the original gay gene study used only about 40 people, simply because that was all the data they had. What's worse, even if you do have a large sample, it's still difficult to ensure that it's unbiased. Chapter 29 of Dana Milbank's book Smashmouth is a pretty good down-and-dirty introduction to the delicate and tricky decisions that election pollsters have to make under deadline pressure to get accurate results.

  10. Does all the data point a little too cleanly to a single cause? Life is messy. A single report can often produce masses of data, and it should probably be viewed with suspicion if it claims that every bit of that data can be explained by a single cause, especially if it's a cause the researcher is already known to favor.

Of course, I wrote this before the John Lott fiasco, so perhaps I should add #11: is the person reporting the numbers a dishonest hack?
