SKIN IN THE GAME….Back in the 1970s, RAND did a massive healthcare study that tried to determine whether copays affected health outcomes. Several thousand people were randomly assigned to groups that either got free healthcare or else had to shoulder varying amounts of copay, and they were tracked over five years. Long story short, they concluded that people used less healthcare if they had to pay for it, and that this didn’t affect health outcomes. Hooray for copays!
As I’ve become more familiar with the arguments about national healthcare over the past few years, I’ve been startled to learn just how much impact this study has had. Even though it’s one study conducted three decades ago, it’s widely considered a “gold standard” among both liberals and conservatives. Everyone cites it. It’s almost totemic in its influence, partly because it’s genuinely considered to have been very well designed and partly because it’s the only one of its kind ever done. Among conservatives, especially, it’s widely viewed as proof that healthcare costs can be reduced without adverse effect simply by forcing patients to “put some skin in the game.”
But one study is still one study, and the reason you don’t normally rely on a single study is that there might be hidden, nonobvious biases that skewed the results. Followup studies with different methodologies could unmask these problems, but no followup to the RAND study was ever done. It’s just too expensive.
But guess what? It turns out that there might have been a simple but devastating flaw in the RAND data. What would happen if the people who were randomly assigned to the high-copay group simply left the experiment and returned to their regular insurance plans when they got seriously sick? Answer: It would make it look like the high-copay group made fewer claims. Not because the high copay made them think twice about getting care, but simply because they dropped out of the program entirely. And it appears that this is exactly what happened.
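For the wonkishly inclined, the mechanics of this kind of attrition bias are easy to demonstrate with a toy simulation. These numbers are entirely made up for illustration, not the actual RAND data: give both groups identical underlying healthcare needs, but let the seriously sick members of the copay group leave the study before their big claims get recorded, and the copay group’s observed average drops even though copays changed no one’s behavior.

```python
import random

random.seed(0)

def average_observed_claims(sick_members_drop_out, n=100_000):
    """Simulate one insurance group's recorded annual claims.

    Both groups have the same underlying distribution of illness;
    'sick_members_drop_out' models seriously ill people exiting the
    study (and vanishing from the data) before filing big claims.
    All dollar figures are hypothetical.
    """
    observed = []
    for _ in range(n):
        seriously_sick = random.random() < 0.05   # 5% get seriously ill
        claims = 20_000 if seriously_sick else 500
        if seriously_sick and sick_members_drop_out:
            continue  # they leave the study; their claims never appear
        observed.append(claims)
    return sum(observed) / len(observed)

free_care = average_observed_claims(sick_members_drop_out=False)
copay = average_observed_claims(sick_members_drop_out=True)

# Identical underlying behavior, yet the copay group's recorded
# average is far lower -- purely an artifact of who stayed in the data.
print(f"free-care group average claims: ${free_care:,.0f}")
print(f"copay group average claims:     ${copay:,.0f}")
```

The point of the sketch: the copay group’s recorded average collapses to the healthy members’ claims, which looks like copays deterred care when in fact the expensive patients were simply missing from the data.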
Ezra Klein has a bit more detail about this, along with some links if you’re interested in reading more. As far as I know, the RAND researchers haven’t responded yet, so this should be considered a tentative criticism, not necessarily a knockout blow. But it’s probably going to provoke quite an interesting wonkfest among healthcare geeks. If it turns out the RAND study was faulty, a whole lot of subsequent bloviating about healthcare is going to turn out to be misguided too.