A hallmark of leading business management and public policy design today is an increased reliance on measuring results. If you don’t track your performance, you can’t tell whether you’re improving, and you have no reliable way to know if your improvement strategies are having the desired effects. Resistance to measurement can often reflect reluctance to face up to the need for sometimes difficult, but vitally important, institutional change.
Measuring outcomes badly or incompletely, however, can bring risks and pitfalls of its own. Getting measurement wrong, whether because it is too narrow or too loosely connected to the outcome you really care about, can lead to disappointment or worse. This problem can be exacerbated when rewards or punishments are connected to performance on the measures you are using.
In the early days of measuring hospital performance, hospitals were sometimes ranked according to the mortality rates of their patients. Hospital administrators quickly realized that an effective way to lower their mortality rates was simply to reduce the number of severely ill patients they admitted and increase the number in less dire condition. This was hardly the response sought from publicizing hospital mortality rates. Policy makers quickly realized that the smart thing to do was to adjust the mortality rates for the severity of patients’ conditions. A hospital that admits the most challenging late-stage cancer patients with complicating additional conditions will of course have a much higher mortality rate than a hospital that treats mostly individuals with early-stage cancers and few accompanying problems. The right comparisons are between the survival rates of hospitals whose patients are about equally ill, or between hospitals’ risk-adjusted survival rates.
An analogous problem arises in higher education. Most Americans would agree that the nation would benefit if more of our young people graduated from college.
Some have suggested that federal and state policy should provide financial incentives for colleges that improve their graduation rates. But if we reward colleges for improving their graduation rates, college administrators may respond by simply reducing admissions of students who face significant academic challenges. If all colleges were to follow such a policy, they all might wind up with higher graduation rates, but the total number of students graduating would be smaller and many young people would be denied an opportunity for economic and social mobility.
What we need instead is to adjust the graduation rate for the types of students entering a particular college. Of course, a college where the vast majority of students drop out, like a hospital where most admitted patients die, is in need of fundamental improvement; modest gains in its graduation rate wouldn’t be enough to make the college viable.
Another risk in rewarding colleges for graduating more students is that such a policy may induce them to lower their standards. Most colleges have strong internal checks and balances to guard against that response, but nonetheless there is a real risk of erosion over time. This is a genuinely challenging problem, but measures such as changes in the average earnings of a college’s graduates and in their admission rates to graduate school can be used to monitor and address this risk.
We clearly need policies that make colleges, as well as students, take responsibility for dropouts. To do this we will need to measure more, and to develop better measures. We will also need to be smart and innovative in determining where and how best to apply pressure for accountability. But as we go about this important work, we need to observe the same core principle that guides health care itself: “First, do no harm.”