The past several years have seen a dramatic increase in the percentage of patients cared for by hospitalists, yet an emerging body of literature examining the association between care given by hospitalists and performance on a number of process measures has shown mixed results. Hospitalists do not appear to provide higher quality of care for pneumonia,1, 2 while results in heart failure are mixed.3–5 Each of these studies was conducted at a single site and examined patient‐level effects. More recently, Vasilevskis et al6 assessed the association between the intensity of hospitalist use (measured as the percentage of patients admitted by hospitalists) and performance on process measures. In a cohort of 208 California hospitals, they found significant improvement in performance on process measures for acute myocardial infarction, heart failure, and pneumonia with increasing percentages of patients admitted by hospitalists.6
To date, no study has examined the association between the use of hospitalists and the publicly reported 30‐day mortality and readmission measures. Specifically, the Centers for Medicare and Medicaid Services (CMS) have developed and now publicly report risk‐standardized 30‐day mortality (RSMR) and readmission rates (RSRR) for Medicare patients hospitalized for 3 common and costly conditions: acute myocardial infarction (AMI), heart failure (HF), and pneumonia.7 Performance on these hospital‐based quality measures varies widely and varies by hospital volume, ownership status, teaching status, and nurse staffing levels.8–13 However, even accounting for these characteristics leaves much of the variation in outcomes unexplained. We hypothesized that the presence of hospitalists within a hospital would be associated with higher performance on 30‐day mortality and 30‐day readmission measures for AMI, HF, and pneumonia. We further hypothesized that for hospitals using hospitalists, there would be a positive correlation between increasing percentage of patients admitted by hospitalists and performance on outcome measures. To test these hypotheses, we conducted a national survey of hospitalist leaders, linking data from survey responses to data on publicly reported outcome measures for AMI, HF, and pneumonia.
MATERIALS AND METHODS
Of the 4289 hospitals in operation in 2008, 1945 had 25 or more AMI discharges. We identified hospitals using American Hospital Association (AHA) data, calling hospitals up to 6 times each until we reached our target sample size of 600. Using this methodology, we contacted 1558 of a possible 1920 hospitals with AHA data; of the 1558 called, 598 provided survey results.
Our survey was adapted from the survey developed by Vasilevskis et al.6 The entire survey can be found in the Appendix (see Supporting Information in the online version of this article). Our key questions were: 1) Does your hospital have at least 1 hospitalist program or group? 2) Approximately what percentage of all medical patients in your hospital are admitted by hospitalists? The latter question was intended as an approximation of the intensity of hospitalist use, and has been used in prior studies.6, 14 A more direct measure was not feasible given the complexity of obtaining admission data for such a large and diverse set of hospitals. Respondents were also asked about hospitalist care of AMI, HF, and pneumonia patients. Given the low likelihood of precise estimation of hospitalist participation in care for specific conditions, the response choices were divided into percentage quartiles: 0–25, 26–50, 51–75, and 76–100. Finally, participants were asked a number of questions regarding hospitalist organizational and clinical characteristics.
We obtained data regarding presence or absence of hospitalists and characteristics of the hospitalist services via phone‐ and fax‐administered survey (see Supporting Information, Appendix, in the online version of this article). Telephone and faxed surveys were administered between February 2010 and January 2011. Hospital telephone numbers were obtained from the 2008 AHA survey database and from a review of each hospital's website. Up to 6 attempts were made to obtain a completed survey from nonrespondents unless participation was specifically refused. Potential respondents were contacted in the following order: hospital medicine department leaders, hospital medicine clinical managers, vice president for medical affairs, chief medical officers, and other hospital executives with knowledge of the hospital medicine services. All respondents agreed with a question asking whether they had direct working knowledge of their hospital medicine services; contacts who said they did not have working knowledge of their hospital medicine services were asked to refer our surveyor to the appropriate person at their site. Absence of a hospitalist program was confirmed by contacting the Medical Staff Office.
Hospital Organizational and Patient‐Mix Characteristics
Hospital‐level organizational characteristics (eg, bed size, teaching status) and patient‐mix characteristics (eg, Medicare and Medicaid inpatient days) were obtained from the 2008 AHA survey database.
Outcome Performance Measures
The 30‐day risk‐standardized mortality and readmission rates (RSMR and RSRR) for 2008 for AMI, HF, and pneumonia were calculated for all admissions for people age 65 and over with traditional fee‐for‐service Medicare. Beneficiaries had to be enrolled for 12 months prior to their hospitalization for any of the 3 conditions, and had to have complete claims data available for that 12‐month period.7 These 6 outcome measures were constructed using hierarchical generalized linear models.15–20 Using the RSMR for AMI as an example, for each hospital, the measure is estimated by dividing the predicted number of deaths within 30 days of admission for AMI by the expected number of deaths within 30 days of admission for AMI. This ratio is then divided by the national unadjusted 30‐day mortality rate for AMI, which is obtained using data on deaths from the Medicare beneficiary denominator file. Each measure is adjusted for patient characteristics such as age, gender, and comorbidities. All 6 measures are endorsed by the National Quality Forum (NQF) and are reported publicly by CMS on the Hospital Compare web site.
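As an arithmetic illustration of the measure construction described above, the ratio logic can be sketched as follows. The counts here are hypothetical; the actual CMS measures are derived from hierarchical generalized linear models fit to patient‐level claims data.

```python
# Illustrative sketch of the risk-standardized rate (RSMR/RSRR) arithmetic
# described in the text. All numbers below are hypothetical, not study data.

def risk_standardized_rate(predicted, expected, national_rate):
    """Ratio of predicted to expected events, scaled by the national rate."""
    return (predicted / expected) * national_rate

# Hypothetical hospital: 22 predicted vs 25 expected 30-day AMI deaths,
# against an assumed national unadjusted 30-day AMI mortality rate of 16%.
rsmr = risk_standardized_rate(predicted=22, expected=25, national_rate=0.16)
print(f"{rsmr:.1%}")  # prints 14.1% -- fewer deaths than expected pulls the
                      # hospital's standardized rate below the national rate
```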
Comparison of hospital‐ and patient‐level characteristics between hospitals with and without hospitalists was performed using chi‐square tests and Student t tests.
The primary outcome variables are the RSMRs and RSRRs for AMI, HF, and pneumonia. Multivariable linear regression models were used to assess the relationship between hospitals with at least 1 hospitalist group and each dependent variable. Models were adjusted for variables previously reported to be associated with quality of care. Hospital‐level characteristics included core‐based statistical area, teaching status, number of beds, region, safety‐net status, nursing staff ratio (number of registered nurse FTEs/number of hospital FTEs), and presence or absence of cardiac catheterization and coronary bypass capability. Patient‐level characteristics included Medicare and Medicaid inpatient days as a percentage of total inpatient days and percentage of admissions by race (black vs non‐black). The presence of hospitalists was correlated with each of the hospital and patient‐level characteristics. Further analyses of the subset of hospitals that use hospitalists included construction of multivariable linear regression models to assess the relationship between the percentage of patients admitted by hospitalists and the dependent variables. Models were adjusted for the same patient‐ and hospital‐level characteristics.
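A hedged sketch of this hospital‐level modeling approach is below. The data are simulated and the variable names are hypothetical; the study's actual models included the full set of hospital‐ and patient‐level covariates described above.

```python
# Sketch of a multivariable linear regression of a hospital outcome (e.g.,
# the HF RSRR) on a hospitalist-program indicator plus a covariate, using
# ordinary least squares. Simulated data; variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 500
hospitalist = rng.integers(0, 2, n)      # 1 = has at least 1 hospitalist group
beds = rng.normal(300, 150, n)           # covariate: hospital size (beds)

# Simulate an RSRR-like outcome with a small hospitalist effect (-0.006)
rsrr = 0.25 - 0.006 * hospitalist + 0.00001 * beds + rng.normal(0, 0.015, n)

# Design matrix: intercept, hospitalist indicator, covariate(s)
X = np.column_stack([np.ones(n), hospitalist, beds])
coef, *_ = np.linalg.lstsq(X, rsrr, rcond=None)
print(f"adjusted hospitalist beta: {coef[1]:.4f}")  # near the simulated -0.006
```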
The institutional review boards at Yale University and University of California, San Francisco approved the study. All analyses were performed using Statistical Analysis Software (SAS) version 9.1 (SAS Institute, Inc, Cary, NC).
RESULTS

Characteristics of Participating Hospitals
Telephone, fax, and e‐mail surveys were attempted with 1558 hospitals; we received 598 completed surveys, for a response rate of 40%. There was no difference between responders and nonresponders on any of the 6 outcome variables, the number of Medicare or Medicaid inpatient days, or the percentage of admissions by race. Responders and nonresponders were also similar in size, ownership, safety‐net and teaching status, nursing staff ratio, presence of cardiac catheterization and coronary bypass capability, and core‐based statistical area. They differed only by region of the country: the NW Central and Pacific regions had larger overall proportions of respondents. All hospitals provided information about the presence or absence of hospitalist programs. The majority of respondents were hospitalist clinical or administrative managers (n = 220), followed by hospitalist leaders (n = 106), other executives (n = 58), vice presidents for medical affairs (n = 39), and chief medical officers (n = 15). Each respondent indicated a working knowledge of their site's hospitalist utilization and practice characteristics. Absence of hospitalist utilization was confirmed by contact with the Medical Staff Office.
Comparisons of Sites With Hospitalists and Those Without Hospitalists
Hospitals with and without hospitalists differed on a number of organizational characteristics (Table 1). Sites with hospitalists were more likely to be large, nonprofit teaching hospitals located in metropolitan regions, and more likely to have cardiac surgical services. There was no difference in safety‐net status or RN staffing ratio. Hospitals with hospitalists admitted lower percentages of black patients.
Table 1. Characteristics of Hospitals With and Without Hospitalist Programs

| Characteristic | Hospitalist Program (N = 429), N (%) | No Hospitalist Program (N = 169), N (%) | P Value |
|---|---|---|---|
| Core‐based statistical area | | | <0.0001 |
| Division | 94 (21.9%) | 53 (31.4%) | |
| Metro | 275 (64.1%) | 72 (42.6%) | |
| Micro | 52 (12.1%) | 38 (22.5%) | |
| Rural | 8 (1.9%) | 6 (3.6%) | |
| Ownership | | | |
| Public | 47 (11.0%) | 20 (11.8%) | |
| Nonprofit | 333 (77.6%) | 108 (63.9%) | |
| Private | 49 (11.4%) | 41 (24.3%) | |
| Teaching status | | | |
| COTH | 54 (12.6%) | 7 (4.1%) | |
| Teaching | 110 (25.6%) | 26 (15.4%) | |
| Other | 265 (61.8%) | 136 (80.5%) | |
| Cardiac capability | | | |
| CABG | 286 (66.7%) | 86 (50.9%) | |
| CATH | 79 (18.4%) | 36 (21.3%) | |
| Other | 64 (14.9%) | 47 (27.8%) | |
| Region | | | |
| New England | 35 (8.2%) | 3 (1.8%) | |
| Middle Atlantic | 60 (14.0%) | 29 (17.2%) | |
| South Atlantic | 78 (18.2%) | 23 (13.6%) | |
| NE Central | 60 (14.0%) | 35 (20.7%) | |
| SE Central | 31 (7.2%) | 10 (5.9%) | |
| NW Central | 38 (8.9%) | 23 (13.6%) | |
| SW Central | 41 (9.6%) | 21 (12.4%) | |
| Mountain | 22 (5.1%) | 3 (1.8%) | |
| Pacific | 64 (14.9%) | 22 (13.0%) | |
| Safety‐net status | | | |
| Yes | 72 (16.8%) | 32 (18.9%) | |
| No | 357 (83.2%) | 137 (81.1%) | |
| | Mean (SD) | Mean (SD) | |
| RN staffing ratio (n = 455) | 27.3 (17.0) | 26.1 (7.6) | 0.28 |
| Total beds | 315.0 (216.6) | 214.8 (136.0) | <0.0001 |
| % Medicare inpatient days | 47.2 (42) | 49.7 (41) | 0.19 |
| % Medicaid inpatient days | 18.5 (28) | 21.4 (46) | 0.16 |
| % Black | 7.6 (9.6) | 10.6 (17.4) | 0.03 |
Characteristics of Hospitalist Programs and Responsibilities
Of the 429 sites reporting use of hospitalists, the median percentage of patients admitted by hospitalists was 60%, with an interquartile range (IQR) of 35% to 80%. The median number of full‐time equivalent hospitalists per hospital was 8 with an IQR of 5 to 14. The IQR reflects the middle 50% of the distribution of responses, and is not affected by outliers or extreme values. Additional characteristics of hospitalist programs can be found in Table 2. The estimated percentage of patients with AMI, HF, and pneumonia cared for by hospitalists varied considerably, with fewer patients with AMI and more patients with pneumonia under hospitalist care. Overall, a majority of hospitalist groups provided the following services: care of critical care patients, emergency department admission screening, observation unit coverage, coverage for cardiac arrests and rapid response teams, quality improvement or utilization review activities, development of hospital practice guidelines, and participation in implementation of major hospital system projects (such as implementation of an electronic health record system).
Table 2. Characteristics of Hospitalist Programs and Responsibilities (N = 429)

| Characteristic | Value |
|---|---|
| Date program established | |
| No. of hospitalist FTEs, median (IQR) | 8 (5, 14) |
| Percent of medical patients admitted by hospitalists, median (IQR) | 60% (35, 80) |
| No. of hospitalist groups: Don't know | 6 (1.4%) |
| Employment of hospitalists (not mutually exclusive) | |
| Hospital system | 98 (22.8%) |
| Local physician practice group | 62 (14.5%) |
| Hospitalist physician practice group (local) | 83 (19.3%) |
| Hospitalist physician practice group (national/regional) | 36 (8.4%) |
| Any 24‐hr in‐house coverage by hospitalists | |
| No. of hospitalist international medical graduates, median (IQR) | 3 (1, 6) |
| No. of hospitalists <1 yr out of residency, median (IQR) | 1 (0, 2) |
| Percent of patients with AMI cared for by hospitalists: Don't know | 110 (25.6%) |
| Percent of patients with heart failure cared for by hospitalists: Don't know | 113 (26.3%) |
| Percent of patients with pneumonia cared for by hospitalists: Don't know | 105 (24.5%) |

Hospitalist provision of services:

| Service | Hospitalists Provide | Do Not Provide | Don't Know |
|---|---|---|---|
| Care of critical care patients | 346 (80.7%) | 80 (18.7%) | 3 (0.7%) |
| Emergency department admission screening | 281 (65.5%) | 143 (33.3%) | 5 (1.2%) |
| Observation unit coverage | 359 (83.7%) | 64 (14.9%) | 6 (1.4%) |
| Emergency department coverage | 145 (33.8%) | 280 (65.3%) | 4 (0.9%) |
| Coverage for cardiac arrests | 283 (66.0%) | 135 (31.5%) | 11 (2.6%) |
| Rapid response team coverage | 240 (55.9%) | 168 (39.2%) | 21 (4.9%) |
| Quality improvement or utilization review | 376 (87.7%) | 37 (8.6%) | 16 (3.7%) |
| Hospital practice guideline development | 339 (79.0%) | 55 (12.8%) | 35 (8.2%) |
| Implementation of major hospital system projects | 309 (72.0%) | 96 (22.4%) | 24 (5.6%) |
Relationship Between Hospitalist Utilization and Outcomes
Tables 3 and 4 show the comparisons between hospitals with and without hospitalists on each of the 6 outcome measures. In the bivariate analysis (Table 3), there was no statistically significant difference between groups on any of the outcome measures, with the exception of the risk‐standardized readmission rate for heart failure. Sites with hospitalists had a lower RSRR for HF than sites without hospitalists (24.7% vs 25.4%, P < 0.0001). These results were similar in the multivariable models (Table 4), in which the beta estimate (slope) for hospitals utilizing hospitalists did not differ significantly from zero on all measures except the RSRR for HF. For the subset of hospitals that used hospitalists, there was no statistically significant change in any of the 6 outcome measures with increasing percentage of patients admitted by hospitalists. Table 5 demonstrates that for each RSMR and RSRR, the slope did not consistently increase or decrease with incrementally higher percentages of patients admitted by hospitalists, and the confidence intervals for all estimates crossed zero.
Table 3. Outcome Measures for Hospitals With and Without Hospitalist Programs

| Outcome Measure | Hospitalist Program (N = 429), Mean % (SD) | No Hospitalist Program (N = 169), Mean % (SD) | P Value |
|---|---|---|---|
| MI RSMR | 16.0 (1.6) | 16.1 (1.5) | 0.56 |
| MI RSRR | 19.9 (0.88) | 20.0 (0.86) | 0.16 |
| HF RSMR | 11.3 (1.4) | 11.3 (1.4) | 0.77 |
| HF RSRR | 24.7 (1.6) | 25.4 (1.8) | <0.0001 |
| Pneumonia RSMR | 11.7 (1.7) | 12.0 (1.7) | 0.08 |
| Pneumonia RSRR | 18.2 (1.2) | 18.3 (1.1) | 0.28 |
Table 4. Adjusted Association Between Presence of Hospitalists and Outcome Measures

| Outcome Measure | Adjusted Beta Estimate (95% CI) for Hospitalist Program |
|---|---|
| MI RSMR | 0.001 (−0.002, 0.004) |
| MI RSRR | −0.001 (−0.002, 0.001) |
| HF RSMR | 0.0004 (−0.002, 0.003) |
| HF RSRR | −0.006 (−0.009, −0.003) |
| Pneumonia RSMR | −0.002 (−0.005, 0.001) |
| Pneumonia RSRR | 0.00001 (−0.002, 0.002) |
Table 5. Adjusted Association Between Percentage of Patients Admitted by Hospitalists and Outcome Measures

| Outcome Measure | % Admitted by Hospitalists | Adjusted Beta Estimate (95% CI) |
|---|---|---|
| MI RSMR | 0%–30% | −0.003 (−0.007, 0.002) |
| | 32%–48% | 0.001 (−0.005, 0.006) |
| | 70%–80% | 0.004 (−0.001, 0.009) |
| | ≥85% | −0.004 (−0.009, 0.001) |
| MI RSRR | 0%–30% | 0.001 (−0.002, 0.004) |
| | 32%–48% | 0.001 (−0.004, 0.004) |
| | 70%–80% | 0.001 (−0.002, 0.004) |
| | ≥85% | 0.001 (−0.002, 0.004) |
| HF RSMR | 0%–30% | −0.001 (−0.005, 0.003) |
| | 32%–48% | −0.002 (−0.007, 0.003) |
| | 70%–80% | −0.002 (−0.006, 0.002) |
| | ≥85% | 0.001 (−0.004, 0.005) |
| HF RSRR | 0%–30% | 0.002 (−0.004, 0.007) |
| | 32%–48% | 0.0003 (−0.005, 0.006) |
| | 70%–80% | −0.001 (−0.005, 0.004) |
| | ≥85% | −0.002 (−0.007, 0.003) |
| Pneumonia RSMR | 0%–30% | 0.001 (−0.004, 0.006) |
| | 32%–48% | 0.00001 (−0.006, 0.006) |
| | 70%–80% | 0.001 (−0.004, 0.006) |
| | ≥85% | −0.001 (−0.006, 0.005) |
| Pneumonia RSRR | 0%–30% | −0.0002 (−0.004, 0.003) |
| | 32%–48% | 0.004 (−0.0003, 0.008) |
| | 70%–80% | 0.001 (−0.003, 0.004) |
| | ≥85% | 0.002 (−0.002, 0.006) |
DISCUSSION

In this national survey of hospitals, we did not find a significant association between the use of hospitalists and hospitals' performance on 30‐day mortality or readmission measures for AMI, HF, or pneumonia. While hospitals that use hospitalists had a statistically significantly lower 30‐day risk‐standardized readmission rate for heart failure, the effect size was small. The survey response rate of 40% is comparable to other surveys of physicians and other healthcare personnel; moreover, there were no significant differences between responders and nonresponders, so the potential for response bias, while present, is small.
Contrary to the findings of a recent study,21 we did not find a higher readmission rate for any of the 3 conditions in hospitals with hospitalist programs. One advantage of our study is the use of more robust risk‐adjustment methods: we used NQF‐endorsed risk‐standardized measures of readmission, which capture readmissions to any hospital for common, high‐priority conditions in which the impact of care coordination and discontinuity of care is paramount. The models use administrative claims data but have been validated against medical record data. Another advantage is that our study focused on a time period when hospital readmissions were a standard quality benchmark and an increasing priority for hospitals, hospitalists, and community‐based care delivery systems. While our study cannot discern whether patients had primary care physicians, or the reason for admission to a hospitalist's care, our data do suggest that hospitalists continue to care for a large percentage of hospitalized patients. Moreover, an increasing proportion of patients admitted by hospitalists did not affect the risk for readmission, providing some reassurance against, though not disproof of, a direct association between use of hospitalist systems and higher risk for readmission.
Although hospitals with hospitalists did not have better mortality or readmission rates, an alternate framing is that, despite concerns that hospitalists negatively affect continuity of care, our data demonstrate no association between use of hospitalist services and higher readmission rates. It is possible that hospitals with hospitalists have greater ability to invest in hospital‐based systems of care,22 an association that may incorporate any hospitalist effect; however, our results were robust even after we tested whether adjustment for hospital factors (such as profit status and size) affected them.
It is also possible that secular trends in hospitals or hospitalist systems affected our results. A handful of single‐site studies carried out soon after the hospitalist model's earliest descriptions found a reduction in mortality and readmission rates with the implementation of a hospitalist program.23–25 Alternatively, the effect of hospitalists may have been diluted, as often occurs when an innovation spreads from early adopter sites to routine practice. Consistent with other recent multicenter studies,21, 26 our findings do not demonstrate an association between hospitalists and improved outcomes. Unlike other multicenter studies, however, we had access to disease‐specific risk‐adjustment methodologies, which may partially account for referral biases related to patient‐specific measures of acute or chronic illness severity.
Changes in the hospitalist effect over time have a number of explanations, some of which are relevant to our study. Recent evidence suggests that complex organizational characteristics, such as organizational values and goals, may contribute more to performance on 30‐day mortality for AMI than specific processes and protocols27; the intense focus on AMI as a quality improvement target is emblematic of a number of national initiatives that may have affected our results. Interestingly, hospitalist systems have changed over time as well. Early in the hospitalist movement, hospitalist systems were implemented largely at the behest of hospitals trying to reduce costs. In recent years, however, hospitalist systems are at least as frequently implemented because outpatient‐based physicians or surgeons request hospitalists; indeed, hospitalists have focused on the care of "uncovered" patients since the model's earliest description. In addition, some hospitals invest in hospitalist programs based on the perceived ability of hospitalists to improve quality and achieve better patient outcomes in an era in which payment is increasingly linked to quality‐of‐care metrics.
Our study has several limitations, six of which are noted here. First, while the hospitalist model has been widely embraced in adult medicine, in the absence of board certification there is no gold standard definition of a hospitalist. It is therefore possible that some respondents represented groups that were incorrectly identified as hospitalists. Second, the data for the primary independent variable of interest were based upon self‐report and are therefore subject to recall bias and potential misclassification. Respondents were not aware of our hypothesis, so any such bias should not have run in one particular direction. Third, the data for the outcome variables are from 2008. They may therefore not reflect organizational enhancements related to use of hospitalists that are in process and take years to yield downstream improvements on performance metrics. In addition, of the 429 hospitals with hospitalist programs, 46 initiated their programs after 2008. While national performance on the 6 outcome variables has been relatively static over time,7 any significant change in hospital performance on these metrics since 2008 could mean we overestimated or underestimated the effect of hospitalist programs on patient outcomes. Fourth, we were not able to adjust for additional hospital‐ or health system‐level characteristics that may be associated with hospitalist use or patient outcomes. Fifth, our regression models had significant collinearity, in that the presence of hospitalists was correlated with each of the covariates; this would tend to make our estimates conservative and could have contributed to our nonsignificant findings. Finally, outcomes for 2 of the 3 clinical conditions measured, acute myocardial infarction and heart failure, are ones for which hospitalists may less frequently provide care; outcome measures more relevant for hospitalists may be all‐condition, all‐cause, 30‐day mortality and readmission.
This work adds to the growing body of literature examining the impact of hospitalists on quality of care. To our knowledge, it is the first study to assess the association between hospitalist use and performance on outcome metrics at a national level. While our findings suggest that use of hospitalists alone may not lead to improved performance on outcome measures, a parallel body of research is emerging that implicates broader system and organizational factors as key to high performance on outcome measures. It is likely that multiple factors contribute to performance on outcome measures, including the type and mix of hospital personnel, patient care processes and workflow, and system‐level attributes. Comparative effectiveness and implementation research that assesses the contextual factors and interventions leading to successful system improvement and better performance is increasingly needed. A single factor, such as hospitalist use, is unlikely by itself to significantly affect 30‐day mortality or readmission; multifactorial interventions are likely required. In addition, hospitalist use is a complex intervention: structure, processes, training, experience, role in the hospital system, and other factors (including the quality of hospitalists or of the hospitalist program) vary across programs. Rather than focusing on the volume of care delivered by hospitalists, hospitals will likely need to support hospital medicine programs that have the time and expertise to devote to improving the quality and value of care delivered across the hospital system. This study highlights that interventions leading to improvement on core outcome measures are more complex than simply having a hospital medicine program.
The authors acknowledge Judy Maselli, MPH, Division of General Internal Medicine, Department of Medicine, University of California, San Francisco, for her assistance with statistical analyses and preparation of tables.
Disclosures: Work on this project was supported by the Robert Wood Johnson Clinical Scholars Program (K.G.); California Healthcare Foundation grant 15763 (A.D.A.); and a grant from the National Heart, Lung, and Blood Institute (NHLBI), study 1U01HL105270‐02 (H.M.K.). Dr Krumholz is the chair of the Cardiac Scientific Advisory Board for United Health and has a research grant with Medtronic through Yale University; Dr Auerbach has a grant through the National Heart, Lung, and Blood Institute (NHLBI). The authors have no other disclosures to report.