Brief Reports

Physician predictions of length of stay of patients admitted with heart failure

Abstract

Physicians' ability to predict length of stay is understudied, particularly for patients admitted with heart failure (HF). The objective of this prospective, observational cohort study was to measure the accuracy of inpatient physicians' predictions of length of stay at the time of admission for patients admitted with HF to an academic tertiary care hospital and to determine whether level of experience improves accuracy. The patients included 165 adults consecutively admitted with HF, about whom 415 predictions were made within 24 hours of admission. Mean and median lengths of stay were 10.9 and 8 days, respectively. The mean difference between predicted and actual length of stay was statistically significant for all groups: interns, −5.9 days (95% confidence interval [CI]: −8.2 to −3.6, P < 0.0001); residents, −4.3 days (95% CI: −6.0 to −2.7, P = 0.0001); attending cardiologists, −3.5 days (95% CI: −5.1 to −2.0, P < 0.0001). There were no differences in accuracy by level of experience (P = 0.61). Physicians, regardless of experience, underestimate length of stay of patients admitted with HF. Journal of Hospital Medicine 2016;11:642–645. © 2016 Society of Hospital Medicine

Heart failure is a frequent cause of hospital admission in the United States, with an estimated cost of $31 billion per year.[1] Discharging a patient with heart failure requires a multidisciplinary approach that includes anticipating a discharge date, scheduling follow‐up, reconciling medications, assessing home‐care or placement needs, and delivering patient education.[2, 3] Comprehensive transitional care interventions reduce readmissions and mortality.[2] Individually tailored and structured discharge plans decrease length of stay and readmissions.[3] The Centers for Medicare and Medicaid Services recently proposed that discharge planning begin within 24 hours of inpatient admission,[4] despite inadequate data surrounding the optimal time to begin discharge planning.[3] In addition to enabling transitional care, identifying patients vulnerable to extended hospitalization aids in risk stratification, as prolonged length of stay is associated with increased risk of readmission and mortality.[5, 6]

Physicians are not able to accurately prognosticate whether patients will experience short‐term outcomes such as readmissions or mortality.[7, 8] Likewise, physicians do not predict length of stay accurately for heterogeneous patient populations,[9, 10, 11] even on the morning prior to anticipated discharge.[12] Prediction accuracy for patients admitted with heart failure, however, has not been adequately studied. The objectives of this study were to measure the accuracy of inpatient physicians' early predictions of length of stay for patients admitted with heart failure and to determine whether level of experience improved accuracy.

METHODS

In this prospective, observational study, we measured physicians' predictions of length of stay for patients admitted to a heart failure teaching service at an academic tertiary care hospital. Three resident/intern teams rotate admitting responsibilities every 3 days, each supervised by 1 attending cardiologist. Patients admitted overnight may be admitted independently by the on‐call resident without intern collaboration.

All physicians staffing our center's heart failure teaching service between August 1, 2013 and November 19, 2013 were recruited, and consecutively admitted adult patients were included. Patients were excluded if they did not have any cardiac diagnosis or if still admitted at study completion in February 2014. Deceased patients' time of death was counted as discharge.

Interns, residents, and attending cardiologists were interviewed independently within 24 hours of admission and asked to predict length of stay. Interns and residents were interviewed prior to rounds, and attendings thereafter. Electronic medical records were reviewed to determine date and time of admission and discharge, demographics, clinical variables, and discharge diagnoses.

The primary outcome was accuracy of predictions of length of stay stratified by level of experience. Based on prior pilot data, at 80% power and a significance level (α) of 0.05, we estimated that predictions on 100 patients were needed to detect a 2‐day difference between actual and predicted length of stay.
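For illustration, a sample size estimate of this kind can be reproduced in R with the base power.t.test function. Because the pilot data are not reported here, the standard deviation of the paired differences (7 days below) is an assumed placeholder, not a value taken from the study.

```r
# Sketch of the sample-size estimate: detect a 2-day mean difference between
# predicted and actual length of stay with 80% power at alpha = 0.05.
# The 7-day standard deviation of the paired differences is an assumption
# standing in for the unreported pilot data.
power.t.test(delta = 2, sd = 7, sig.level = 0.05, power = 0.80,
             type = "paired", alternative = "two.sided")
# Returns n, the required number of predicted/actual pairs (about 100 here).
```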

Student t tests were used to compare the difference between predicted and actual length of stay for each level of training. Analysis of variance (ANOVA) was used to compare accuracy of prediction by training level. Generalized estimating equation (GEE) modeling was applied to compare predictions among interns, residents, and attending cardiologists, accounting for clustering by individual physician. GEE models were adjusted for study week in a sensitivity analysis to determine if predictions improved over time.
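A minimal R sketch of these analyses follows, assuming a long‐format data frame named preds with one row per prediction and hypothetical column names (predicted, actual, level, physician_id); the GEE step uses the geepack package.

```r
library(geepack)  # geeglm() for GEE models with clustering

# preds: one row per prediction (hypothetical structure)
#   predicted    - predicted length of stay, in days
#   actual       - actual length of stay, in days
#   level        - factor: "intern", "resident", "attending"
#   physician_id - identifier of the physician making the prediction
preds <- preds[order(preds$physician_id), ]     # geeglm expects contiguous clusters
preds$error <- preds$predicted - preds$actual   # negative values = underestimates

# One-sample (paired) t test of prediction error within a training level
t.test(preds$error[preds$level == "intern"])

# ANOVA comparing prediction error across training levels
summary(aov(error ~ level, data = preds))

# GEE model comparing levels while accounting for clustering by physician
gee_fit <- geeglm(error ~ level, id = physician_id, data = preds,
                  family = gaussian, corstr = "exchangeable")
summary(gee_fit)
```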

Analysis was performed using SAS 9.3 (SAS Institute Inc., Cary, NC) and R 2.14 (The R Foundation for Statistical Computing, Vienna, Austria). Institutional review board approval was granted, and physicians provided informed consent. All authors had access to primary data devoid of protected health information.

RESULTS

In total, 22 interns (<6 months experience), 25 residents (1–3 years experience), and 8 attending cardiologists (mean 19 ± 9.7 years experience) were studied. Predictions were performed on 171 consecutively admitted patients. Five patients had noncardiac diagnoses and 1 patient remained admitted, leaving 165 patients for analysis. Predictions were made by all 3 physician levels on 98 patients. There were 67 patients with incomplete predictions as a result of 63 intern, 13 attending, and 4 resident predictions that were unobtainable. Absent intern data predominantly resulted from night shift admissions. Remaining missing data were due to time‐sensitive physician tasks that interfered with physician interviews.

Patient characteristics are described in Table 1. Physicians provided 415 predictions on 165 patients, 157 (95%) of whom survived to hospital discharge. Mean and median lengths of stay were 10.9 and 8 days (interquartile range [IQR], 4 to 13). Mean intern (N = 102), resident (N = 161), and attending (N = 152) predictions were 5.4 days (95% confidence interval [CI]: 4.6 to 6.2), 6.6 days (95% CI: 5.8 to 7.4), and 7.2 days (95% CI: 6.4 to 7.9), respectively. Median intern, resident, and attending predictions were 5 days (IQR, 3 to 7), 5 days (IQR, 3 to 7), and 6 days (IQR, 4 to 10). Mean differences between predicted and actual length of stay for interns, residents, and attendings were −5.9 days (95% CI: −8.2 to −3.6), −4.3 days (95% CI: −6.0 to −2.7), and −3.5 days (95% CI: −5.1 to −2.0). The mean difference between predicted and actual length of stay was statistically significant for all groups (P < 0.0001). Median intern, resident, and attending differences between predicted and actual length of stay were −2 days (IQR, −7 to 0), −2 days (IQR, −7 to 0), and −1 day (IQR, −5 to 1), respectively. Predictions correlated poorly with actual length of stay (R² = 0.11).
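The accuracy summaries reported above could be computed along the following lines, again using the hypothetical preds data frame sketched in the Methods.

```r
# Median and IQR of actual length of stay
quantile(preds$actual, c(0.25, 0.5, 0.75))

# Median prediction error (predicted minus actual) by training level
tapply(preds$predicted - preds$actual, preds$level, median)

# Squared correlation between predicted and actual length of stay (R^2)
cor(preds$predicted, preds$actual)^2
```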

Table 1. Patient Characteristics
Patients, N = 165 (%)

Male 105 (63%)
Age 57 ± 16 years
White 99 (60%)
Black 52 (31%)
Asian, Hispanic, other, unknown 16 (9%)
HF classification
HF with a reduced EF (EF ≤40%) 106 (64%)
HF mixed/undefined (EF 41%–49%) 14 (8%)
HF with a preserved EF (EF ≥50%) 20 (12%)
Right heart failure only 5 (3%)
Heart transplant with cardiac complications 20 (12%)
Severity of illness on admission
NYHA class I 9 (5%)
NYHA class II 25 (15%)
NYHA class III 67 (41%)
NYHA class IV 32 (19%)
NYHA class unknown* 32 (19%)
Mean no. of home medications prior to admission 13 ± 6
On intravenous inotropes prior to admission 18 (11%)
On mechanical circulatory support prior to admission 15 (9%)
Status post heart transplant 20 (12%)
Invasive hemodynamic monitoring within 24 hours 94 (57%)
Type of admission
Admitted through emergency department 71 (43%)
Admitted from clinic 35 (21%)
Transferred from other acute care hospitals 56 (34%)
Admitted from skilled nursing or rehabilitation facility 3 (2%)
Social history
Lived alone prior to admission 32 (19%)
Prison/homeless/facility/unknown living situation 8 (5%)
Required assistance for IADLS/ADLS prior to admission 29 (17%)
Home health services initiated prior to admission 42 (25%)
Prior admission history
No known admissions in the prior year 70 (42%)
1 admission in the prior year 37 (22%)
2 admissions in the prior year 21 (13%)
3–10 admissions in the prior year 36 (22%)
Unknown readmission status 1 (1%)
Readmitted patients
Readmitted within 30 days 38 (23%)
Readmitted within 7 days 13 (8%)

NOTE: Patient characteristics are for all included patients. Percentages may not add up to 100% due to rounding. Abbreviations: ADLS, Activities of Daily Living; EF, ejection fraction; HF, heart failure; IADLS, Instrumental Activities of Daily Living; NYHA, New York Heart Association. *Patients with heart transplants were categorized as unknown if no NYHA class was documented.

Ninety‐eight patients (59%) received predictions from physicians at all 3 experience levels. Mean and median lengths of stay were 11.3 days and 7.5 days (IQR, 4 to 13). Concordant with the entire cohort, median intern, resident, and attending predictions for these patients were 5 days (IQR, 3 to 7), 5 days (IQR, 3 to 7), and 6 days (IQR, 4 to 10), respectively. Differences between predicted and actual length of stay were statistically significant for all groups: the mean difference for interns, residents, and attendings was −5.8 days (95% CI: −8.2 to −3.4, P < 0.0001), −4.6 days (95% CI: −7.1 to −2.0, P = 0.0001), and −4.3 days (95% CI: −6.5 to −2.1, P = 0.0003), respectively (Figure 1).

Figure 1

Actual length of stay versus physicians' predictions (n = 98). Mean LOS (days) of all patients for whom there was a prediction made by all 3 physicians on the team. Predictions were significantly less than actual LOS for interns, residents, and attending cardiologists (P < 0.0001, P = 0.0001, P = 0.0003, respectively). There were no significant differences among predictions made by interns, residents, and attending cardiologists (P = 0.61). Abbreviations: LOS, length of stay.

Prediction accuracy improved slightly as level of experience increased, but the differences were not statistically significant by ANOVA (P = 0.64) or by GEE modeling that accounted for clustering of predictions by physician (P = 0.61). Analyses adjusted for study week yielded similar results. Thus, experience did not improve accuracy.

DISCUSSION

We prospectively measured the accuracy of physicians' length of stay predictions for patients admitted with heart failure and compared accuracy by level of experience. All physicians underestimated length of stay, with mean differences ranging from 3.5 to 5.9 days. Most notably, level of experience did not improve accuracy. Although we anticipated that experience would improve prediction, our findings did not support this hypothesis. Future studies of the factors that influence length of stay predictions would help to explain these findings.

Our results are consistent with small, single‐center studies of different patient and physician cohorts. Hulter Asberg found that internists at a hospital were unable to predict whether a patient would remain admitted 10 days or more, with poor interobserver reliability.[9] Mak et al. demonstrated that emergency physicians underestimated length of stay by an average of 2 days when predicting length of stay on a broad spectrum of patients in an emergency department.[10] Physician predictions of length of stay have been found to be inaccurate in a center's oncologic intensive care unit population.[11] Sullivan et al. found that academic general medicine physicians predicted discharge with 27% sensitivity the morning prior to next‐day discharge, which improved significantly to 67% by the afternoon, concluding that physicians can provide meaningful discharge predictions the afternoon prior to next‐day discharge.[12] By focusing on patients with heart failure, a major driver of hospitalization and readmission, and comparing providers by level of experience, we augment this existing body of work.

In addition to identifying patients at risk for readmission and mortality,[5, 6] accurate discharge prediction may improve safety of weekend discharges and patient satisfaction. Heart failure patients discharged on weekends receive less complete discharge instructions,[13] suffer higher mortality, and are readmitted more frequently than those discharged on weekdays.[14] Early and accurate predictions may enhance interventions targeting patients with anticipated weekend discharges. Furthermore, inadequate communication regarding anticipated discharge timing is a source of patient dissatisfaction,[15] and accurate prediction of discharge, if shared with patients, may improve patient satisfaction.

Limitations of our study include that it was conducted at a single large academic tertiary care hospital, with predictions assessed on a teaching service. The severity of illness of this cohort may limit generalizability, and physicians may predict the prognosis of healthier patients more accurately. We recorded predictions at the time of admission and did not assess whether accuracy improved closer to discharge. We did not collect predictions from nonphysician team members. Our sample size and the absence of data regarding the causes of prolonged hospitalization precluded an analysis of variables associated with prediction inaccuracy.

CONCLUSIONS

Physicians do not accurately forecast heart failure patients' length of stay at the time of admission, and level of experience does not improve accuracy. Future studies are warranted to determine whether predictions closer to discharge, by an interdisciplinary team, or with assistance of risk‐prediction models are more accurate than physician predictions at admission, and whether early identification of patients at risk for prolonged hospitalization improves outcomes. Ultimately, early and accurate length of stay forecasts may improve risk stratification, patient satisfaction, and discharge planning, and reduce adverse outcomes related to at‐risk discharges.

Acknowledgements

The authors acknowledge Katherine R Courtright, MD, for her gracious assistance with statistical analysis.

Disclosure: Nothing to report

References
