Original Research

The readmission risk flag: Using the electronic health record to automatically identify patients at risk for 30‐day readmission



BACKGROUND: Identification of patients at high risk for readmission is a crucial step toward improving care and reducing readmissions. The adoption of electronic health records (EHRs) may prove important to strategies designed to risk stratify patients and introduce targeted interventions.


OBJECTIVE: To develop and implement an automated prediction model integrated into our health system's EHR that identifies on admission patients at high risk for readmission within 30 days of discharge.


DESIGN: Retrospective and prospective cohort.


SETTING: Healthcare system consisting of 3 hospitals.


PATIENTS: All adult patients admitted from August 2009 to September 2012.


INTERVENTION: An automated readmission risk flag integrated into the EHR.


MEASUREMENTS: Thirty‐day all‐cause and 7‐day unplanned healthcare system readmissions.


RESULTS: Using retrospective data, a single risk factor, ≥2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%), with a C statistic of 0.62. Sensitivity (39%), positive predictive value (30%), proportion of patients flagged (18%), and C statistic (0.61) during the 12‐month period after implementation of the risk flag were similar. There was no evidence for an effect of the intervention on 30‐day all‐cause and 7‐day unplanned readmission rates in the 12‐month period after implementation.


CONCLUSIONS: An automated prediction model was effectively integrated into an existing EHR and identified patients on admission who were at risk for readmission within 30 days of discharge. Journal of Hospital Medicine 2013;8:689–695. © 2013 Society of Hospital Medicine


Unplanned hospital readmissions are common, costly, and potentially avoidable. Approximately 20% of Medicare patients are readmitted within 30 days of discharge.[1] Readmission rates are estimated to be similarly high in other population subgroups,[2, 3, 4] with approximately 80% of patients[1, 5, 6] readmitted to the original discharging hospital. A recent systematic review suggested that 27% of readmissions may be preventable.[7]

Hospital readmissions have increasingly been viewed as a correctable marker of poor quality care and have been adopted by a number of organizations as quality indicators.[8, 9, 10] As a result, hospitals have important internal and external motivations to address readmissions. Identification of patients at high risk for readmissions may be an important first step toward preventing them. In particular, readmission risk assessment could be used to help providers target the delivery of resource‐intensive transitional care interventions[11, 12, 13, 14] to patients with the greatest needs. Such an approach is appealing because it allows hospitals to focus scarce resources where the impact may be greatest and provides a starting point for organizations struggling to develop robust models of transitional care delivery.

Electronic health records (EHRs) may prove to be an important component of strategies designed to risk stratify patients at the point of care. Algorithms integrated into the EHR that automatically generate risk predictions have the potential to (1) improve provider time efficiency by automating the prediction process, (2) improve consistency of data collection and risk score calculation, (3) increase adoption through improved usability, and (4) provide clinically important information in real‐time to all healthcare team members caring for a hospitalized patient.

We thus sought to derive a predictive model for 30‐day readmissions using data reliably present in our EHR at the time of admission, and integrate this predictive model into our hospital's EHR to create an automated prediction tool that identifies on admission patients at high risk for readmission within 30 days of discharge. In addition, we prospectively validated this model using the 12‐month period after implementation and examined the impact on readmissions.



METHODS

The University of Pennsylvania Health System (UPHS) includes 3 hospitals, with a combined capacity of over 1500 beds and 70,000 annual admissions. All hospitals currently utilize Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL) as their EHR. The study sample included all adult admissions to any of the 3 UPHS hospitals during the study period. Admissions to short procedure, rehabilitation, and hospice units were excluded. The study received expedited approval and a HIPAA waiver from the University of Pennsylvania institutional review board.

Development of Predictive Model

The UPHS Center for Evidence‐based Practice[15, 16] performed a systematic review to identify factors associated with hospital readmission within 30 days of discharge. We then examined the data available from our hospital EHR at the time of admission for those factors identified in the review. Using different threshold values and look‐back periods, we developed and tested 30 candidate prediction models using these variables alone and in combination (Table 1). Prediction models were evaluated using 24 months of historical data between August 1, 2009 and August 1, 2011.


An automated readmission risk flag was then integrated into the EHR. Patients classified as being at high risk for readmission with the automated prediction model were flagged in the EHR on admission (Figure 1A). The flag can be double‐clicked to display a separate screen with information relevant to discharge planning, including inpatient and emergency department (ED) visits in the prior 12 months, as well as information about the primary team, length of stay, and admitting problem associated with those admissions (Figure 1B). The prediction model was integrated into our EHR using Arden Syntax for Medical Logic Modules.[17] The readmission risk screen required presenting the provider with a new window and was thus developed in Microsoft .NET using C# and Windows Forms (Microsoft Corp., Redmond, WA).

Figure 1

(A) Screenshot of the electronic health record (EHR) with the readmission risk flag implemented and visible in the ninth column of the patient list. (B) A new screen with patient‐specific information relevant to discharge planning can be accessed within the EHR by double‐clicking a patient's risk flag.

The flag was visible on the patient lists of all providers who utilized the EHR. This included but was not limited to nurses, social workers, unit pharmacists, and physicians. At the time of implementation, educational events regarding the readmission risk flag were provided in forums targeting administrators, pharmacists, social workers, and housestaff. Information about the flag and recommendations for use were distributed through emails and broadcast screensaver messages disseminated throughout the inpatient units of the health system. Providers were asked to pay special attention to discharge planning for patients triggering the readmission risk flag, including medication reconciliation by pharmacists for these patients prior to discharge, and arrangement of available home services by social workers.

The risk flag was 1 of 4 classes of interventions developed and endorsed by the health system in its efforts to reduce readmissions. Besides risk stratification, the other classes were: interdisciplinary rounding, patient education, and discharge communication. None of the interventions alone was expected to decrease readmissions; rather, the expectation was that, once all 4 classes were implemented and performed routinely, they would work in concert to reduce readmissions.


The primary outcome was all‐cause hospital readmission within the healthcare system within 30 days of discharge. Although this outcome is commonly used both in the literature and as a quality metric, significant debate persists as to its appropriateness.[18] Many of the drivers of 30‐day readmissions may be outside the discharging hospital's control, and it has been argued that nearer‐term, nonelective readmission rates may provide a more meaningful quality metric.[18] Seven‐day unplanned readmissions were thus used as a secondary outcome measure for this study.

Sensitivity, specificity, positive and negative predictive values, C statistic, F score (the harmonic mean of positive predictive value and sensitivity),[19] and screen‐positive rate were calculated for each of the 30 prediction models evaluated using the historical data. The prediction model with the best balance of F score and screen‐positive rate was selected as the prediction model to be integrated into the EHR. Prospective validation of the selected prediction model was performed using the 12‐month period following implementation of the risk flag (September 2011 to September 2012).
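The test characteristics above can be computed directly from paired flag and outcome indicators. The following Python sketch (illustrative, not the authors' code) mirrors the metrics reported in Table 1; note that for a single binary rule, the C statistic reduces to the average of sensitivity and specificity.

```python
def rule_metrics(flagged, readmitted):
    """Test characteristics of a binary risk flag against observed 30-day readmission.

    `flagged` and `readmitted` are equal-length sequences of 0/1 indicators
    (illustrative inputs, one entry per discharge).
    """
    pairs = list(zip(flagged, readmitted))
    tp = sum(1 for f, r in pairs if f and r)          # flagged and readmitted
    fp = sum(1 for f, r in pairs if f and not r)      # flagged, not readmitted
    fn = sum(1 for f, r in pairs if not f and r)      # missed readmission
    tn = sum(1 for f, r in pairs if not f and not r)  # correctly unflagged
    sens = tp / (tp + fn)   # sensitivity (recall)
    spec = tn / (tn + fp)   # specificity
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    return {
        "sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
        "screen_pos": (tp + fp) / len(pairs),  # proportion of patients flagged
        "f": 2 * ppv * sens / (ppv + sens),    # harmonic mean of PPV and sensitivity
        "c_stat": (sens + spec) / 2,           # C statistic of a single binary rule
    }
```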

To assess the impact of the automated prediction model on monthly readmission rate, we used the 24‐month period immediately before and the 12‐month period immediately after implementation of the readmission risk flag. Segmented regression analysis was performed testing for changes in level and slope of readmission rates between preimplementation and postimplementation time periods. This quasiexperimental interrupted time series methodology[20] allows us to control for secular trends in readmission rates and to assess the preimplementation trend (secular trend), the difference in rates immediately before and after the implementation (immediate effect), and the postimplementation change over time (sustained effect). We used Cochrane‐Orcutt estimation[21] to correct for serial autocorrelation.
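As an illustration of the segmented regression described above, the following Python sketch fits level‐change and slope‐change terms to simulated monthly readmission rates, with a hand‐rolled Cochrane‐Orcutt‐style AR(1) correction. The data, month counts, and noise level are invented for illustration; the authors' analysis was performed in Stata.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(36.0)                    # 24 pre- + 12 post-implementation months
post = (months >= 24).astype(float)         # level-change (immediate effect) indicator
months_post = np.where(post > 0, months - 24.0, 0.0)  # slope-change (sustained effect)

# Simulated monthly 30-day readmission rates (%) with a small secular trend
y = 14.4 + 0.03 * months + rng.normal(0, 0.3, months.size)
X = np.column_stack([np.ones_like(months), months, post, months_post])

def ols(design, outcome):
    beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return beta

beta = ols(X, y)
for _ in range(10):                          # Cochrane-Orcutt iterations
    resid = y - X @ beta
    rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])  # AR(1) estimate
    y_star = y[1:] - rho * y[:-1]            # quasi-differenced outcome
    X_star = X[1:] - rho * X[:-1]            # quasi-differenced design matrix
    beta = ols(X_star, y_star)

# Coefficients: secular trend, immediate effect, sustained (slope) change
trend, immediate, sustained = beta[1], beta[2], beta[3]
```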

All analyses were performed using Stata 12.1 software (Stata Corp, College Station, TX).


RESULTS

Predictors of Readmission

Our systematic review of the literature identified several patient and healthcare utilization patterns predictive of 30‐day readmission risk. Utilization factors included length of stay, number of prior admissions, previous 30‐day readmissions, and previous ED visits. Patient characteristics included number of comorbidities, living alone, and payor. Evidence was inconsistent regarding threshold values for these variables.

Many variables readily available in our EHR were either found by the systematic review not to be reliably predictive of 30‐day readmission (including age and gender) or were not readily or reliably available on admission (including length of stay and payor). At the time of implementation, our EHR did not include vital sign or nursing assessment variables, so these were not considered for inclusion in our model.

Of the available variables, 3 were consistently and accurately available in the EHR at the time of patient admission: prior hospital admissions, prior emergency department visits, and prior 30‐day readmissions within UPHS. We then developed 30 candidate prediction models using combinations of these variables, including thresholds of ≥1 and ≥2 prior admissions, ED visits, and 30‐day readmissions in the 6 and 12 months preceding the index visit (Table 1).
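A candidate‐rule grid of this kind can be enumerated programmatically. The sketch below is hypothetical: the variable names are illustrative, and this particular grid yields 36 rules, whereas the authors evaluated 30, so their exact set of candidates differed.

```python
from itertools import combinations

# Illustrative variable names, count thresholds, and lookback windows
VARIABLES = ["inpatient_admits", "ed_visits", "readmits_30day"]
THRESHOLDS = [1, 2]    # flag if count >= threshold
LOOKBACKS = [6, 12]    # months before the index admission

def candidate_rules():
    """Enumerate single-variable and paired candidate flag rules."""
    rules = []
    for lookback in LOOKBACKS:
        # Single-variable rules
        for var in VARIABLES:
            for t in THRESHOLDS:
                rules.append({"lookback": lookback, "terms": [(var, t)]})
        # Paired rules: both conditions must hold
        for a, b in combinations(VARIABLES, 2):
            for ta in THRESHOLDS:
                for tb in THRESHOLDS:
                    rules.append({"lookback": lookback,
                                  "terms": [(a, ta), (b, tb)]})
    return rules

def rule_applies(rule, counts):
    """counts: mapping of variable name -> event count in the rule's lookback window."""
    return all(counts[var] >= t for var, t in rule["terms"])
```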

Development and Validation

We used 24 months of retrospective data, including 120,396 discharges with 17,337 thirty‐day readmissions (a 14.4% 30‐day all‐cause readmission rate), to test the candidate prediction models. A single risk factor, ≥2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%) (Table 1).
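The selected rule is simple enough to state in a few lines of code. The following sketch is an illustration, not the production Medical Logic Module; it flags a patient when ≥2 inpatient admissions fall in the 365 days before the index admission.

```python
from datetime import date, timedelta

def readmission_risk_flag(index_admission, prior_admissions):
    """Return True if the patient had >=2 inpatient admissions in the
    12 months (365 days) preceding the index admission.

    index_admission: date of the current admission
    prior_admissions: dates of previous inpatient admissions (illustrative input)
    """
    window_start = index_admission - timedelta(days=365)
    recent = [d for d in prior_admissions if window_start <= d < index_admission]
    return len(recent) >= 2
```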

Table 1. Retrospective and Prospective Evaluation of Prediction Models for 30‐Day All‐Cause Readmissions

Retrospective evaluation of combined prediction rules, 6‐month lookback:

| Rule | Sensitivity | Specificity | C Statistic | PPV | NPV | Screen Positive | F Score |
|---|---|---|---|---|---|---|---|
| Admit ≥1 & ED ≥1 | 22% | 92% | 0.568 | 31% | 88% | 10% | 0.255 |
| Admit ≥2 & ED ≥1 | 15% | 96% | 0.556 | 40% | 87% | 5% | 0.217 |
| Admit ≥1 & 30‐day ≥1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.346 |
| Admit ≥2 & 30‐day ≥1 | 29% | 92% | 0.603 | 37% | 89% | 11% | 0.324 |
| 30‐day ≥1 & ED ≥1 | 17% | 95% | 0.559 | 37% | 87% | 6% | 0.229 |
| 30‐day ≥1 & ED ≥2 | 8% | 98% | 0.527 | 40% | 86% | 3% | 0.132 |

Retrospective evaluation of combined prediction rules, 12‐month lookback:

| Rule | Sensitivity | Specificity | C Statistic | PPV | NPV | Screen Positive | F Score |
|---|---|---|---|---|---|---|---|
| Admit ≥1 & ED ≥1 | 29% | 87% | 0.580 | 27% | 88% | 15% | 0.281 |
| Admit ≥2 & ED ≥1 | 22% | 93% | 0.574 | 34% | 88% | 9% | 0.266 |
| Admit ≥1 & 30‐day ≥1 | 42% | 84% | 0.630 | 30% | 90% | 14% | 0.353 |
| Admit ≥2 & 30‐day ≥1 | 34% | 89% | 0.615 | 34% | 89% | 14% | 0.341 |
| 30‐day ≥1 & ED ≥1 | 21% | 93% | 0.569 | 35% | 88% | 9% | 0.261 |
| 30‐day ≥1 & ED ≥2 | 13% | 96% | 0.545 | 37% | 87% | 5% | 0.187 |

Prospective evaluation of the selected prediction rule (≥2 inpatient admissions in the prior 12 months):

| Outcome | Sensitivity | Specificity | C Statistic | PPV | NPV | Screen Positive | F Score |
|---|---|---|---|---|---|---|---|
| 30‐Day All‐Cause | 39% | 84% | 0.614 | 30% | 89% | 18% | 0.339 |

NOTE: Abbreviations: 30‐day, prior 30‐day readmission; Admit, inpatient hospital admission; ED, emergency department visit; NPV, negative predictive value; PPV, positive predictive value. Rule thresholds (≥1, ≥2) denote the minimum event count required during the lookback period.

Prospective validation of the prediction model was performed using the 12‐month period directly following readmission risk flag implementation. During this period, the 30‐day all‐cause readmission rate was 15.1%. Sensitivity (39%), positive predictive value (30%), and proportion of patients flagged (18%) were consistent with the values derived from the retrospective data, supporting the reproducibility and predictive stability of the chosen risk prediction model (Table 1). The C statistic of the model was also consistent between the retrospective and prospective datasets (0.62 and 0.61, respectively).

Readmission Rates

The mean 30‐day all‐cause readmission rate for the 24‐month period prior to the intervention was 14.4%, whereas the mean for the 12‐month period after the implementation was 15.1%. Thirty‐day all‐cause and 7‐day unplanned monthly readmission rates do not appear to have been impacted by the intervention (Figure 2). There was no evidence for either an immediate or sustained effect (Table 2).

Figure 2

(A) Thirty‐day all‐cause readmission rates over time. (B) Seven‐day unplanned readmission rates over time.

Table 2. Interrupted Time Series of Readmission Rates

30‐Day All‐Cause Readmission Rates:

| Hospital | Monthly Change, Preimplementation | Trend | P Value | Immediate Change | P Value | Monthly Change, Postimplementation | Trend | P Value | P Value, Change in Trend* |
|---|---|---|---|---|---|---|---|---|---|
| Hosp A | 0.023 | Stable | 0.153 | 0.480 | 0.991 | 0.100 | Increasing | 0.044 | 0.134 |
| Hosp B | 0.061 | Increasing | 0.002 | 0.492 | 0.125 | 0.060 | Stable | 0.296 | 0.048 |
| Hosp C | 0.026 | Stable | 0.413 | 0.447 | 0.585 | 0.046 | Stable | 0.629 | 0.476 |
| Health System | 0.032 | Increasing | 0.014 | 0.344 | 0.302 | 0.026 | Stable | 0.499 | 0.881 |

7‐Day Unplanned Readmission Rates:

| Hospital | Monthly Change, Preimplementation | Trend | P Value | Immediate Change | P Value | Monthly Change, Postimplementation | Trend | P Value | P Value, Change in Trend* |
|---|---|---|---|---|---|---|---|---|---|
| Hosp A | 0.004 | Stable | 0.642 | 0.271 | 0.417 | 0.005 | Stable | 0.891 | 0.967 |
| Hosp B | 0.012 | Stable | 0.201 | 0.298 | 0.489 | 0.038 | Stable | 0.429 | 0.602 |
| Hosp C | 0.008 | Stable | 0.213 | 0.353 | 0.204 | 0.004 | Stable | 0.895 | 0.899 |
| Health System | 0.005 | Stable | 0.358 | 0.003 | 0.990 | 0.010 | Stable | 0.712 | 0.583 |

NOTE: Regression coefficients represent the absolute change in the monthly readmission rate (percentage) per unit time (month). Models are adjusted for autocorrelation using the Cochrane‐Orcutt estimator.

*P value compares the pre‐ and postimplementation trends in readmission rates.


DISCUSSION

In this proof‐of‐concept study, we demonstrated the feasibility of an automated readmission risk prediction model integrated into a health system's EHR for a mixed population of hospitalized medical and surgical patients. To our knowledge, this is the first study in a general population of hospitalized patients to examine the impact of providing readmission risk assessment on readmission rates. We used a simple prediction model potentially generalizable to EHRs and healthcare populations beyond our own.

Existing risk prediction models for hospital readmission have important limitations and are difficult to implement in clinical practice.[22] Prediction models for hospital readmission are often dependent on retrospective claims data, developed for specific patient populations, and not designed for use early in the course of hospitalization when transitional care interventions can be initiated.[22] In addition, the time required to gather the necessary data and calculate the risk score remains a barrier to the adoption of prediction models in practice. By automating the process of readmission risk prediction, we were able to help integrate risk assessment into the healthcare process across many providers in a large multihospital healthcare organization. This has allowed us to consistently share risk assessment in real time with all members of the inpatient team, facilitating a team‐based approach to discharge planning.[23]

Two prior studies have developed readmission risk prediction models designed to be implemented in the EHR. Amarasingham et al.[24] developed and implemented[25] a heart failure‐specific prediction model based on the 18‐item Tabak mortality score.[26] Bradley et al.[27] studied the predictive ability of a 26‐item score that utilized vital sign, cardiac rhythm, and nursing assessment data in a broader population of medicine and surgery patients. Although EHRs are developing rapidly, the majority currently do not support the use of many of the variables in these models. In addition, both models were complex, raising concerns about generalizability to other healthcare settings and populations.

A distinctive characteristic of our model is its simplicity. We were cognizant of the realities of running a prediction model in a high‐volume production environment and of the diminishing returns of adding more variables. We thus favored simplicity at all stages of model development, with the expectation that complexity could be added in future iterations once feasibility had been established. Finally, we were aware that we were constructing a medical decision support tool rather than a simple classifier.[26] As such, the optimal model was not driven purely by discriminative ability, but also by our subjective assessment of the optimal trade‐off between sensitivity and specificity (the test‐treatment threshold) for such a model.[26] To facilitate model assessment, we categorized the potential predictor variables and evaluated the test characteristics of each combination of categorized variables. Although the C statistic of a model using continuous variables will generally be higher than that of a model using categorical values, model performance at the chosen trade‐off point is unlikely to differ.

Although the overall predictive ability of our model was fair, we found that it was associated with clinically meaningful differences in readmission rates between those triggering and not triggering the flag. The 30‐day all‐cause readmission rate in the 12‐month prospective sample was 15.1%, yet among those flagged as being at high risk for readmission the readmission rate was 30.4%. Given resource constraints and the need to selectively apply potentially costly care transition interventions, this may in practice translate into a meaningful discriminative ability.

Readmission rates did not change significantly during the study period. A number of plausible reasons for this exist, including: (1) the current model may not exhibit sufficient predictive ability to classify those at high risk or impact the behavior of providers appropriately, (2) those patients classified as high risk of readmission may not be at high risk of readmissions that are preventable, (3) information provided by the model may not yet routinely be used such that it can affect care, or (4) providing readmission risk assessment alone is not sufficient to influence readmission rates, and the other interventions or organizational changes necessary to impact care of those defined as high risk have not yet been implemented or are not yet being performed routinely. If the primary reasons for our results are those outlined in (3) or (4), then readmission rates should improve over time as the risk flag becomes more routinely used, and those interventions necessary to impact readmission rates of those defined as high risk are implemented and performed.


There are several limitations of this intervention. First, the prediction model was developed using 30‐day all‐cause readmissions, rather than attempting to identify potentially preventable readmissions. Thirty‐day readmission rates may not be a good proxy for preventable readmissions,[18] and as a consequence, the ability to predict 30‐day readmissions may not ensure that a prediction model is able to predict preventable readmissions. Nonetheless, 30‐day readmission rates remain the most commonly used quality metric.

Second, the impact of the risk flag on provider behavior is uncertain. We did not formally assess how the readmission risk flag was used by healthcare team members. Informal assessment has, however, revealed that the readmission risk flag is gradually being adopted by different members of the care team including unit‐based pharmacists who are using the flag to prioritize the delivery of medication education, social workers who are using the flag to prompt providers to consider higher level services for patients at high risk of readmission, and patient navigators who are using the flag to prioritize follow‐up phone calls. As a result, we hope that the flag will ultimately improve the processes of care for high‐risk patients.

Third, we did not capture readmissions to hospitals outside of our healthcare system and have therefore underestimated the readmission rate in our population. However, our assessment of the effect of the risk flag on readmissions focused on relative readmission rates over time, and the use of the interrupted time series methodology should protect against secular changes in outside hospital readmission rates that were not associated with the intervention.

Fourth, it is possible that the prediction model implemented could be significantly improved by including additional variables or data available during the hospital stay. However, simple classification models using a single variable have repeatedly been shown to have the ability to compete favorably with state‐of‐the‐art multivariable classification models.[28]

Fifth, our study was limited to a single academic health system, and our experience may not be generalizable to smaller healthcare systems with limited EHR systems. However, the simplicity of our prediction model and the integration into a commercial EHR may improve the generalizability of our experience to other healthcare settings. Additionally, partly due to recent policy initiatives, the adoption of integrated EHR systems by hospitals is expected to continue at a rapid rate and become the standard of care within the near future.[29]


CONCLUSIONS

An automated prediction model was effectively integrated into an existing EHR and was able to identify patients on admission who are at risk for readmission within 30 days of discharge. Future work will aim to further examine the impact of the flag on readmission rates, further refine the prediction model, and gather data on how providers and care teams use the information provided by the flag.


Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, Grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, Grant UL1TR000003. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.


Online-Only Materials

Supplementary Information (71 KB)