Long time readers of the blog know of my skepticism for “pay for performance”. That’s not because I take issue with the defining principle; it’s because I think that we are nowhere near a good way of defining “quality” or linking it to actual outcomes.
That doesn’t stop the politicians. They love to talk about how we will “pay for quality, not quantity”, as if that is something new. Why haven’t we done that before? We haven’t because it’s hard.
When it comes to this kind of work, we often act like the drunk searching for his car keys under the street light. We pick metrics that we can easily measure, not those that might actually matter. When we do that, actual outcomes don’t improve. Case in point, “The Long-Term Effect of Premier Pay for Performance on Patient Outcomes”, over at the New England Journal of Medicine:
BACKGROUND: Pay for performance has become a central strategy in the drive to improve health care. We assessed the long-term effect of the Medicare Premier Hospital Quality Incentive Demonstration (HQID) on patient outcomes.
METHODS: We used Medicare data to compare outcomes between the 252 hospitals participating in the Premier HQID and 3363 control hospitals participating in public reporting alone. We examined 30-day mortality among more than 6 million patients who had acute myocardial infarction, congestive heart failure, or pneumonia or who underwent coronary-artery bypass grafting (CABG) between 2003 and 2009.
What’s the Premier HQID? Well, way back in 2003, CMS invited a bunch of hospitals to participate in a demonstration project for quality, and 252 agreed to participate. Those hospitals agreed to turn in data on 33 measures, for medical conditions like heart attacks, congestive heart failure, and pneumonia, as well as for procedures like CABGs, knee replacements, and hip replacements. Indicators were assigned to these conditions and procedures, and they were then used to measure “quality”. Those hospitals that did well could get 1-2% Medicare bonuses, and later those that did poorly might suffer a 1-2% Medicare penalty. The real question, though, is whether hospitals that worked to achieve “quality” by these metrics actually made a difference in outcomes that matter. Did they?
They didn’t. Thirty-day mortality was no better at the participating hospitals than at the controls. The researchers even did an analysis of just those hospitals that started out doing “poorly” according to the metrics. You’d think that those hospitals would have the most to gain by focusing on “quality” metrics. The results were pretty much exactly the same: no improvement.
The authors’ conclusion:
We found no evidence that the largest hospital-based pay-for-performance program led to a decrease in 30-day mortality. Expectations of improved outcomes for programs modeled after Premier HQID should therefore remain modest.
[Cross-posted at The Incidental Economist]