The Lown Institute Hospitals Index rankings are based on three categories of data: patient outcomes, civic leadership, and value of care. These were weighted at 50, 30, and 20 percent respectively in the final rankings. The three categories comprise seven sub-components, each of which includes more detailed measurements. The detailed measurements were rolled up into their respective components and categories to obtain a final score for
each hospital. 
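
To make the arithmetic concrete, here is a minimal sketch of the category weighting in Python. The field names and the assumption that each category has already been scored on a common scale are illustrative, not the Lown Institute's actual implementation.

```python
# Illustrative only: assumes each category has already been scored on a
# common scale (e.g., 0-100); the Lown Institute's actual pipeline differs.
CATEGORY_WEIGHTS = {
    "patient_outcomes": 0.50,
    "civic_leadership": 0.30,
    "value_of_care": 0.20,
}

def final_score(category_scores):
    """Weighted combination of the three category scores."""
    return sum(CATEGORY_WEIGHTS[name] * score
               for name, score in category_scores.items())

# Example: 0.5*80 + 0.3*60 + 0.2*70 = 72.0
print(final_score({"patient_outcomes": 80, "civic_leadership": 60, "value_of_care": 70}))
```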


We defined our set of hospitals based on the ability to calculate mortality rates using the Medicare Provider Analysis and Review administrative claims data set. Hospitals with fewer than 50 admissions per year were excluded. Hospitals specializing in cancer care and orthopedic or cardiac procedures were also excluded. Information on hospital characteristics was obtained from the American Hospital Association annual survey and from Medicare. We excluded hospitals that were not acute care or were located outside the 50 states and Washington, D.C. We also excluded federal hospitals, such as hospitals in the Veterans Health Administration, and hospitals where a majority of patients are covered by Medicare Advantage (Kaiser Permanente, for example). This left 3,359 hospitals, 542 of which are for-profit, 2,188 nonprofit, and 629 public.
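
As a rough illustration of these exclusion rules (not the actual code), a filter over a hospital-level table might look like the sketch below; the column names are hypothetical.

```python
import pandas as pd

# Hypothetical column names; the actual AHA and Medicare fields differ.
US_TERRITORIES = {"PR", "VI", "GU", "AS", "MP"}  # outside the 50 states and D.C.

def define_cohort(hospitals: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusion rules described above to a hospital-level table."""
    keep = (
        (hospitals["annual_admissions"] >= 50)
        & (hospitals["care_type"] == "acute")
        & ~hospitals["specialty"].isin(["cancer", "orthopedic", "cardiac"])
        & ~hospitals["state"].isin(US_TERRITORIES)
        & ~hospitals["is_federal"]
        & (hospitals["medicare_advantage_share"] <= 0.5)
    )
    return hospitals[keep]
```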

Patient Outcomes

The patient outcomes category is made up of three components: clinical outcomes, patient safety, and patient satisfaction, weighted in a ratio of 5:2:1. This weighting ensured that clinical outcomes had the greatest impact on the final patient outcomes score. Clinical outcomes combines risk-standardized rates of mortality and readmission, weighted 4:1. Mortality includes rates of in-hospital, 30-day, 90-day, and one-year mortality, weighted in a ratio of 4:4:2:1 to balance the effects of hospital-based care with post-discharge care and coordination in the community. For the readmission component, we gave equal weight to the 30-day readmission rate, a standard measure of quality, and the seven-day readmission rate, because published data suggest that the hospital-attributable component of readmissions wanes by the seven-day mark.
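
The nesting of these weights can be expressed as repeated weighted averages. The sketch below assumes each input has already been converted to a score in which higher is better (the underlying measures are risk-standardized rates in which lower is better); the field names are hypothetical.

```python
def weighted_mean(values, weights):
    """Average the values using ratio-style weights (e.g., 4:4:2:1)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def patient_outcomes_score(h):
    # Mortality: in-hospital, 30-day, 90-day, and one-year, weighted 4:4:2:1.
    mortality = weighted_mean(
        [h["mort_inhosp"], h["mort_30d"], h["mort_90d"], h["mort_1yr"]],
        [4, 4, 2, 1],
    )
    # Readmission: 30-day and seven-day rates, weighted equally.
    readmission = weighted_mean([h["readmit_30d"], h["readmit_7d"]], [1, 1])
    # Clinical outcomes: mortality and readmission, weighted 4:1.
    clinical = weighted_mean([mortality, readmission], [4, 1])
    # Category score: clinical outcomes, safety, satisfaction, weighted 5:2:1.
    return weighted_mean([clinical, h["safety"], h["satisfaction"]], [5, 2, 1])
```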

Mortality and readmission rates were calculated after adjusting for patient risk using the Risk Stratification Index (RSI), a Lown Institute–specific version of a machine-learning algorithm in the public domain that has been validated on multiple national, state-based, and hospital-based data sets using billions of insurance claims. The RSI has been shown to predict outcomes with greater discriminatory accuracy than other publicly available risk-adjustment tools.

For patient safety, we used well-established indicators such as rates of pressure ulcers, accidental punctures, and central intravenous line infections, provided by the Centers for Medicare and Medicaid Services (CMS) on its Hospital Compare website. This included the CMS composite measure (PSI 90) from 2017, which comprises 11 different measures of patient safety. It also included measurements of hospital-acquired infections. Like the CMS, we excluded critical access hospitals since almost all were missing data. The CMS was also the source of our patient satisfaction ranking. The CMS relies on the annual Hospital Consumer Assessment of Healthcare Providers and Systems survey to give a rating of patient experience across 11 variables. (For more detail and a listing of the 11 measures used, please see the Lown Institute’s methodology white paper, available on the Lown Institute’s website, and the CMS Hospital Compare website.)

Civic Leadership 

Our second category, civic leadership, comprises three components: community benefit, inclusivity, and pay equity. 

Community Benefit

For nonprofit hospitals, we used the Community Benefit Insight (CBI) data set generated from Internal Revenue Service 990 forms. We looked at the subset of community benefit spending we deemed to be meaningful: charity care (free or discounted care provided on the basis of the patient’s financial situation); subsidized health services, such as free clinics; community health improvement activities, such as free immunizations; contributions to community organizations; and community-building activities, such as setting up farmers’ markets and providing housing for homeless patients. We did not use several categories of community benefit reported on 990 forms, including the following: shortfall from Medicaid and other government means-tested insurance programs (the difference between the amount Medicaid or other programs pay and the costs to hospitals for caring for such patients); health professional training (which is already largely subsidized by the federal government); and research (also heavily subsidized by the federal government). The final score for community benefit is the ratio of this community benefit spending to total hospital expenses, which were gathered from IRS data and CMS’s Healthcare Cost Report Information System (HCRIS).
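
In pseudocode terms, the community benefit score for a nonprofit is the share of total expenses devoted to the categories we counted. The category keys below are shorthand for the 990/CBI line items, not their official names.

```python
# Shorthand keys for the "meaningful" community benefit categories above.
MEANINGFUL_CATEGORIES = (
    "charity_care",
    "subsidized_health_services",
    "community_health_improvement",
    "contributions_to_community_groups",
    "community_building",
)

def community_benefit_ratio(spending: dict, total_expenses: float) -> float:
    """Meaningful community benefit spending as a share of total expenses."""
    meaningful = sum(spending.get(category, 0.0) for category in MEANINGFUL_CATEGORIES)
    return meaningful / total_expenses
```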

Of the categories of community benefit we deemed meaningful, charity care was the only one available across all hospital types via the HCRIS data set. It was therefore the central community benefit measure used. For nonprofits we also used other meaningful categories that were available via CBI and IRS data.

For hospitals with both HCRIS and IRS data, we weighted the two sources equally. Finally, we adjusted for the fact that hospitals in states that did not expand Medicaid are likely providing more charity care than hospitals in expansion states. We calculated the percentage of gross revenue derived from Medicaid and combined it with the community benefit score at a weighting of 1:2 (Medicaid revenue share to community benefit).
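
A sketch of that adjustment, under the assumptions that both inputs are expressed on the same scale and that the 1:2 weighting runs Medicaid share to community benefit:

```python
def adjusted_community_benefit(irs_score, hcris_score, medicaid_revenue_share):
    """Equal-weight the IRS- and HCRIS-based scores where both exist, then
    blend in the Medicaid revenue share at a 1:2 weighting."""
    available = [s for s in (irs_score, hcris_score) if s is not None]
    benefit = sum(available) / len(available)
    return (1 * medicaid_revenue_share + 2 * benefit) / 3
```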

Inclusivity

Inclusivity is a novel metric we developed to measure the degree to which a hospital’s patient population reflects the demographics of its catchment area. We defined the catchment area using the zip codes of the hospital’s patients, sorted by the number of patients each zip code supplied. We then defined the radius of the catchment area as the distance beyond which additional zip codes contributed an insignificant share of the total patient population. The median radius was 26.6 miles, with urban hospitals having far smaller radii than rural ones. We calculated the demographics using census data on income and education as proxies for social class, and self-reported race/ethnicity for race. For each variable, inclusivity is the ratio of that measure’s prevalence among the hospital’s patients to its prevalence among the people who could have come to the hospital from within its catchment area.

To calculate the denominator, we applied U.S. Census Bureau American Community Survey data on race, income, and education for people over the age of 65 in all zip codes that fell within the defined catchment area. We calculated each rate using the total population counts. We exponentially reduced the contribution from zip codes beyond the radius that accounted for 50 percent of a hospital’s patients. We created the numerator using the demographics of the zip codes of admitted patients, weighted by each zip code’s contribution to the total and without distance attenuation. We then divided the hospital score by the catchment area score to obtain the inclusivity ratio.
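
The sketch below illustrates the inclusivity calculation for a single demographic variable. The exponential decay rate and the exact form of the attenuation are assumptions, since the text does not specify them.

```python
import math

def inclusivity_ratio(zip_rows, attenuation_radius, decay=0.1):
    """
    zip_rows: one dict per catchment-area zip code, with
      patients    - number of the hospital's patients from that zip code
      population  - 65-and-over population of the zip code
      prevalence  - share of that population with the attribute of interest
                    (a race/ethnicity group, income band, or education level)
      distance    - miles from the hospital
    attenuation_radius: distance within which 50 percent of patients reside.
    """
    # Numerator: prevalence among the hospital's own patients, weighted by
    # each zip code's contribution to the patient total (no distance penalty).
    total_patients = sum(z["patients"] for z in zip_rows)
    patient_prev = sum(z["prevalence"] * z["patients"] for z in zip_rows) / total_patients

    # Denominator: prevalence in the catchment-area population, with an
    # exponential down-weighting of zip codes beyond the attenuation radius.
    def weight(z):
        excess = max(0.0, z["distance"] - attenuation_radius)
        return z["population"] * math.exp(-decay * excess)

    total_weight = sum(weight(z) for z in zip_rows)
    area_prev = sum(z["prevalence"] * weight(z) for z in zip_rows) / total_weight

    return patient_prev / area_prev
```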

Pay Equity

For pay equity, we obtained data on CEO compensation from three sources: for nonprofit hospitals, we used the IRS 990 forms; for for-profit, publicly traded hospital systems, we used Securities and Exchange Commission filings; and for public hospitals, we gleaned CEO pay from publicly available records. When CEO pay was unavailable, we imputed it using regression models fit to the known values.

We obtained average worker wages from two sources: HCRIS and the Bureau of Labor Statistics (BLS). HCRIS wage index information contains hourly wages for all employees. We included lower-wage staff, such as janitorial and medical records personnel, and excluded professional staff such as physicians and nurse practitioners, whose jobs require specialized degrees. For the 704 hospitals that had incomplete wage index information in HCRIS, we used BLS estimates of wages for health care employees within those metropolitan and non-metropolitan statistical areas. We estimated hourly wages for CEOs based on a 60-hour workweek and then calculated a ratio of CEO pay to average worker pay. For hospital systems, we distributed the system CEO salary among the constituent hospitals using the percentage of total revenue each hospital generated.
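
Concretely, the ratio works out as in the sketch below; the 52-week year used to turn a 60-hour workweek into annual hours is an assumption on our part.

```python
CEO_HOURS_PER_YEAR = 60 * 52  # assumes a 60-hour workweek over a 52-week year

def pay_equity_ratio(ceo_total_comp, avg_worker_hourly_wage):
    """Ratio of the CEO's implied hourly pay to the average worker's hourly wage."""
    ceo_hourly = ceo_total_comp / CEO_HOURS_PER_YEAR
    return ceo_hourly / avg_worker_hourly_wage

def allocate_system_ceo_pay(system_ceo_comp, hospital_revenues):
    """Split a system CEO's compensation across member hospitals by revenue share."""
    total_revenue = sum(hospital_revenues.values())
    return {name: system_ceo_comp * revenue / total_revenue
            for name, revenue in hospital_revenues.items()}
```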

We then combined a hospital’s community benefit, inclusivity, and pay equity scores with equal weighting to obtain the civic leadership score.

Value of Care

The value of care category is based on a single component measure: overuse. This includes rates of overuse for 13 low-value medical services: hysterectomy for benign disease; laminectomy and/or spinal fusion without radicular pain; arthroscopy for knee arthritis; vertebroplasty or kyphoplasty for osteoporotic vertebral fractures; carotid endarterectomy in asymptomatic patients (those with no history of stroke, transient ischemic attack, or focal neurological symptoms); carotid artery imaging for syncope; EEG for syncope; head imaging for syncope; EEG for headache; inferior vena cava filter placement; pulmonary artery catheter placement in nonsurgical conditions; coronary artery stenting for stable coronary angina; and renal artery angioplasty or stenting. We chose these based on a substantial literature on overuse. Some of these services have been shown in high-quality clinical trials to be ineffective and are always considered overuse. Others are considered overuse when performed on patients without certain symptoms or indications.

We used methods reported in the literature by reputable researchers to calculate rates of overuse. We searched the 100 percent Medicare claims data sets for instances when these services were used. Hospitals without the capacity to perform a specific service were excluded from the rating for that service, and hospitals without the capacity to perform any of the 13 services were excluded from the overuse ratings entirely. We counted the number of instances of overuse at each hospital for each service and adjusted the observed overuse rates to account for differences in volume. We then used a statistical method called principal components analysis to reduce the data to a single variable, the overuse score.
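
As an illustration of that last step, the sketch below extracts the first principal component of the standardized overuse rates with NumPy. This is a generic principal components calculation, not necessarily the exact implementation used for the index.

```python
import numpy as np

def overuse_scores(rates: np.ndarray) -> np.ndarray:
    """
    rates: an (n_hospitals, n_services) array of volume-adjusted overuse
           rates, restricted to hospitals with no missing services.
    Returns one score per hospital: its projection onto the first
    principal component of the standardized rates.
    """
    # Standardize each service so no single service dominates the component.
    z = (rates - rates.mean(axis=0)) / rates.std(axis=0)
    # The right singular vectors of the standardized matrix are the principal axes.
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[0]
```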


Vikas Saini is a physician and president of the Lown Institute, a nonpartisan health care think tank. Shannon Brownlee is a journalist and senior vice president of the institute.