This discussion arose in the context of statistics teaching:
April Galyardt writes:
I’m teaching my first graduate class this semester. It’s intro stats for graduate students in the college of education. Most of the students are first-year PhD students, though a number are master’s students who are primarily in-service teachers. The difficulties of teaching an undergraduate intro stats course are still present, in that mathematical preparation and math phobia vary widely across the class.
I’ve been enjoying the class and the students, but I’d like your take on an issue I’ve been thinking about. How do I balance teaching the standard methods, like hypothesis testing, that these future researchers have to know because they are so standard, with discussing the problems with those methods (e.g., the p-value as a measure of sample size, the decline effect, and, not least, multiple testing and other common mistakes)? It feels a bit like saying “OK, here’s what everybody does, but really it’s broken,” and then there’s not enough time to talk about other ideas.
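The “p-value as a measure of sample size” point can be made concrete with a quick numerical sketch (not from the original post): hold the observed effect fixed at a modest 0.1 standard deviations and let only the sample size grow, and the p-value sweeps from clearly “nonsignificant” to tiny. The helper function here is a plain one-sample z-test, chosen just for illustration.

```python
import math

def z_test_p(effect_sd: float, n: int) -> float:
    """Two-sided p-value for a one-sample z-test where the observed
    mean sits `effect_sd` standard deviations away from the null."""
    z = effect_sd * math.sqrt(n)
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

# Same observed effect (0.1 sd), increasing n: only n moves the p-value.
for n in (25, 100, 400, 1600, 6400):
    print(f"n = {n:5d}   p = {z_test_p(0.1, n):.6f}")
```

With n = 100 the effect is “not significant” (p ≈ 0.32); with n = 400 the identical effect is “significant” (p ≈ 0.046). Nothing about the underlying phenomenon changed, which is the classroom point.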
My reply: One approach is to teach the classical methods in settings where they are appropriate. I think some methods are just about never appropriate (for example, so-called exact tests), but in chapters 2-5 of my book with Jennifer, we give lots of applied examples of basic statistical methods. One way to discuss the problems of a method is to show an example where the method makes sense and an example where it doesn’t.
But I imagine the same sort of thing must arise in political science courses all the time. Do any of you have the experience of having to teach something that you think is misleading or wrong? What do you think of the suggested strategy, “show an example where the method makes sense and an example where it doesn’t”?
[Cross-posted at The Monkey Cage]