Original Research

Decrease in Inpatient Telemetry Utilization Through a System-Wide Electronic Health Record Change and a Multifaceted Hospitalist Intervention


BACKGROUND: Unnecessary telemetry monitoring contributes to healthcare waste.

OBJECTIVE: To evaluate the impact of 2 interventions to reduce telemetry utilization.

DESIGN, SETTING, AND PATIENTS: A 2-group retrospective, observational pre- to postintervention study of 35,871 nonintensive care unit (ICU) patients admitted to 1 academic medical center.

INTERVENTION: On the hospitalist service, we implemented a telemetry reduction intervention including education, process change, routine feedback, and a financial incentive between January 2015 and June 2015. In July 2015, a system-wide change to the telemetry ordering process was introduced.

MEASUREMENTS: The primary outcome was telemetry utilization, measured as the percentage of daily room charges for telemetry. Secondary outcomes were mortality, escalation of care, code event rate, and appropriateness of telemetry utilization. Generalized linear models were used to evaluate changes in outcomes while adjusting for patient factors.

RESULTS: Among hospitalist service patients, telemetry utilization was reduced by 69% (95% confidence interval [CI], −72% to −64%; P < .001), whereas on other services the reduction was a less marked 22% (95% CI, −27% to −16%; P < .001). There were no significant increases in mortality, code event rates, or care escalation, and there was a trend toward improved utilization appropriateness.

CONCLUSION: Although electronic telemetry ordering changes can produce decreases in hospital-wide telemetry monitoring, a multifaceted intervention may lead to an even larger decline in utilization rates. Whether these changes are durable cannot be ascertained from our study.

© 2018 Society of Hospital Medicine

Wasteful care may account for between 21% and 34% of the United States’ $3.2 trillion in annual healthcare expenditures, making it a prime target for cost-saving initiatives.1,2 Telemetry is a target for value improvement strategies because telemetry is overutilized, rarely leads to a change in management, and has associated guidelines on appropriate use.3-10 Telemetry use has been a focus of the Joint Commission’s National Patient Safety Goals since 2014, and it is also a focus of the Society of Hospital Medicine’s Choosing Wisely® campaign.11-13

Previous initiatives have evaluated how changes to telemetry orders or education and feedback affect telemetry use. Few studies have compared a system-wide electronic health record (EHR) approach to a multifaceted intervention. In seeking to address this gap, we adapted published guidelines from the American Heart Association (AHA) and incorporated them into our EHR ordering process.3 Simultaneously, we implemented a multifaceted quality improvement initiative and compared this combined program’s effectiveness to that of the EHR approach alone.


Study Design, Setting, and Population

We performed a 2-group observational pre- to postintervention study at University of Utah Health. Hospital encounters of patients 18 years and older who had at least 1 inpatient acute care, nonintensive care unit (ICU) room charge and an admission date between January 1, 2014, and July 31, 2016, were included. Patient encounters with missing encounter-level covariates, such as case mix index (CMI) or attending provider identification, were excluded. The Institutional Review Board classified this project as quality improvement and did not require review and oversight.


Interventions

On July 6, 2015, our Epic (Epic Systems Corporation, Madison, WI) EHR telemetry order was modified to discourage unnecessary telemetry monitoring. The new order required providers ordering telemetry to choose a clinical indication and select a duration for monitoring, after which the order would expire and require physician renewal or discontinuation. These were the only changes that occurred for nonhospitalist providers. The nonhospitalist group included all admitting providers who were not hospitalists. This group included neurology (6.98%); cardiology (8.13%); other medical specialties such as pulmonology, hematology, and oncology (21.30%); cardiothoracic surgery (3.72%); orthopedic surgery (14.84%); general surgery (11.11%); neurosurgery (11.07%); and other surgical specialties, including urology, transplant, vascular surgery, and plastics (16.68%).

Between January 2015 and June 2015, we implemented a multicomponent program on our hospitalist service. The hospitalist service is composed of 4 teams with internal medicine residents and 2 teams with advanced practice providers, all staffed by academic hospitalists. Our program comprised 5 elements, all of which were implemented before the hospital-wide changes to electronic telemetry orders and maintained throughout the study period, as follows: (1) a single provider education session reviewing the available evidence (eg, AHA guidelines, Choosing Wisely® campaign), (2) removal of the telemetry order from the hospitalist admission order set on March 23, 2015, (3) inclusion of telemetry discussion in the hospitalist group’s daily “Rounding Checklist,”14 (4) monthly feedback provided as part of hospitalist group meetings, and (5) a financial incentive, awarded to the division (no individual provider payment) if performance targets were met. See supplementary Appendix (“Implementation Manual”) for further details.

Data Source

We obtained data on patient age, gender, Medicare Severity-Diagnosis Related Group, Charlson comorbidity index (CCI), CMI, admitting unit, attending physician, admission and discharge dates, length of stay (LOS), 30-day readmission, bed charge (telemetry or nontelemetry), ICU stay, and inpatient mortality from the enterprise data warehouse. Telemetry days were determined through room billing charges, which are assigned based on the presence or absence of an active telemetry order at midnight. Code events came from a log kept by the hospital telephone operator, who is responsible for sending out all calls to the code team. Code event data were available starting July 19, 2014.


Outcomes

Our primary outcome was the percentage of hospital days that had telemetry charges for individual patients. All billed telemetry days on acute care floors were included regardless of admission status (inpatient vs observation), service, indication, or ordering provider. Secondary outcomes were inpatient mortality, escalation of care, code event rates, and appropriate telemetry utilization rates. Escalation of care was defined as transfer to an ICU after initially being admitted to an acute care floor. The code event rate was defined as the ratio of the number of code team activations to the number of patient days. Appropriate telemetry utilization rates were determined via chart review, as detailed below.

To evaluate changes in the appropriateness of telemetry monitoring, 4 of the authors, all internal medicine physicians (KE, CC, JC, DG), performed chart reviews of 25 randomly selected patients in each group (hospitalist and nonhospitalist) and each period (pre- and postintervention) who received at least 1 day of telemetry monitoring. Each reviewer was provided a key based on AHA guidelines listing monitoring indications and their associated maximum allowable durations.3 Chart reviews determined the indication (if any) for monitoring as well as the number of days for which monitoring was indicated. The number of indicated days was compared with the number of telemetry days the patient received to determine the overall proportion of days that were indicated (“telemetry appropriateness per visit”). Three reviewers (KE, AR, CC) also evaluated 100 postintervention patients on the hospitalist service who did not receive any telemetry monitoring, to determine whether patients with indications for telemetry were going unmonitored after the intervention. For patients who had a possible indication, the indication was classified as Class I (“Cardiac monitoring is indicated in most, if not all, patients in this group”) or Class II (“Cardiac monitoring may be of benefit in some patients but is not considered essential for all patients”).3

Adjustment Variables

To account for differences in patient characteristics between hospitalist and nonhospitalist groups, we included age, gender, CMI, and CCI in statistical models. CCI was calculated according to the algorithm specified by Quan et al.15 using all patient diagnoses from previous visits and the index visit identified from the facility billing system.

Statistical Analysis

The period between January 1, 2014, and December 31, 2014, was considered preintervention, and August 1, 2015, to July 31, 2016, was considered postintervention. January 1, 2015, to July 31, 2015, was considered a “run-in” period because it was the interval during which the interventions on the hospitalist service were being rolled out. Data from this period were not included in the pre- or postintervention analyses but are shown in Figure 1.

We computed descriptive statistics for study outcomes and visit characteristics for hospitalist and nonhospitalist visits for pre- and postintervention periods. Descriptive statistics were expressed as n (%) for categorical patient characteristics and outcome variables. For continuous patient characteristics, we expressed the variability of individual observations as the mean ± the standard deviation. For continuous outcomes, we expressed the precision of the mean estimates using standard error. Telemetry utilization per visit was weighted by the number of total acute care days per visit. Telemetry appropriateness per visit was weighted by the number of telemetry days per visit. Patients who did not receive any telemetry monitoring were included in the analysis and noted to have 0 telemetry days. All patients had at least 1 acute care day. Categorical variables were compared using χ2 tests, and continuous variables were compared using t tests. Code event rates were compared using the binomial probability mid-p exact test for person-time data.16
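The mid-p exact comparison of code event rates can be sketched as follows. This is a minimal illustration of the standard two-sided binomial mid-p construction for person-time data, in which, conditional on the total number of events, the group-1 event count is binomial under the null hypothesis of equal rates; the function name and inputs are ours, not from the study.

```python
from scipy.stats import binom

def midp_rate_test(events1, persontime1, events2, persontime2):
    """Two-sided binomial mid-p exact test comparing two event rates
    observed over person-time (e.g., code events per patient-day).
    Conditional on the total event count n, the group-1 count is
    Binomial(n, p0) under the null of equal rates, where p0 is
    group 1's share of the total person-time."""
    n = events1 + events2
    p0 = persontime1 / (persontime1 + persontime2)
    x = events1
    # mid-p tail: full probability beyond x plus half the probability at x
    upper = binom.sf(x, n, p0) + 0.5 * binom.pmf(x, n, p0)
    lower = binom.cdf(x, n, p0) - 0.5 * binom.pmf(x, n, p0)
    return min(1.0, 2 * min(upper, lower))

# identical rates give a mid-p value of 1
print(midp_rate_test(5, 1000, 5, 1000))  # -> 1.0
```

The mid-p correction counts only half the probability of the observed count toward the tail, which makes the exact test less conservative for sparse event data such as code team activations.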

We fitted generalized linear regression models using generalized estimating equations to evaluate the relative change in outcomes of interest in the postintervention period compared with the preintervention period after adjusting for study covariates. The models included study group (hospitalist and nonhospitalist), time period (pre- and postintervention), an interaction term between study group and time period, and study covariates (age, gender, CMI, and CCI). The models were defined using a binomial distributional assumption and logit link function for mortality, escalation of care, and whether patients had at least 1 telemetry day. A gamma distributional assumption and log link function were used for LOS, telemetry acute care days per visit, and total acute care days per visit. A negative binomial distributional assumption and log link function were used for telemetry utilization and telemetry appropriateness. We used the log of the acute care days as an offset for telemetry utilization and the log of the telemetry days per visit as an offset for telemetry appropriateness. An exchangeable working correlation matrix was used to account for physician-level clustering for all outcomes. Intervention effects, representing the relative change in odds for categorical outcomes and in magnitude for continuous outcomes, were calculated by exponentiating the beta parameter for the covariate and subtracting 1.

P values <.05 were considered significant. We used SAS version 9.4 statistical software (SAS Institute Inc., Cary, NC) for data analysis.


Results

There were 46,215 visits originally included in the study. Ninety-two visits (0.2%) were excluded due to missing or invalid data. A total of 10,344 visits occurred during the “run-in” period between January 1, 2015, and July 31, 2015, leaving 35,871 patient visits during the pre- and postintervention periods. In the hospitalist group, there were 3442 visits before the intervention and 3700 after. There were 13,470 visits in the nonhospitalist group before the intervention and 15,259 after.

The percentage of patients who had any telemetry charges decreased from 36.2% to 15.9% (P < .001) in the hospitalist group and from 31.8% to 28.0% (P < .001) in the nonhospitalist group (Table 1). Rates of code events did not change over time (P = .9).

Estimates from adjusted and unadjusted linear models are shown in Table 2. In adjusted models, telemetry utilization in the postintervention period was reduced by 69% (95% confidence interval [CI], −72% to −64%; P < .001) in the hospitalist group and by 22% (95% CI, −27% to −16%; P <.001) in the nonhospitalist group. Compared with nonhospitalists, hospitalists had a 60% greater reduction in telemetry rates (95% CI, −65% to −54%; P < .001).

In the randomly selected sample of patients pre- and postintervention who received telemetry monitoring, there was an increase in telemetry appropriateness on the hospitalist service (46% to 72%, P = .025; Table 1). In the nonhospitalist group, appropriate telemetry utilization did not change significantly. Of the 100 randomly selected patients in the hospitalist group after the intervention who did not receive telemetry, no patient had an AHA Class I indication, and only 4 patients had a Class II indication.3,17


Discussion

In this study, implementing a change in the EHR telemetry order produced reductions in telemetry days. However, when that change was combined with a multicomponent program including education, audit and feedback, financial incentives, and removal of telemetry orders from admission order sets, an even more marked improvement was seen. Neither intervention reduced LOS, increased code event rates, or increased rates of escalation of care.

Prior studies have evaluated interventions to reduce unnecessary telemetry monitoring with varying degrees of success. The most successful EHR intervention to date, from Dressler et al.,18 achieved a 70% reduction in overall telemetry use by integrating the AHA guidelines into their EHR and incorporating nursing discontinuation guidelines to ensure that telemetry discontinuation was both safe and timely. Other studies using stewardship approaches and standardized protocols have been less successful.19,20 One study utilizing a multidisciplinary approach but not including an EHR component showed modest improvements in telemetry use.21

Although we are unable to differentiate the exact effect of each component of the intervention, we did note an immediate decrease in telemetry orders after removing the telemetry order from our admission order set, a trend that was magnified after the addition of broader EHR changes (Figure 1). Important additional contributors to our success appear to have been the standardization of rounds to include daily discussion of telemetry and the provision of routine feedback. We cannot discern whether other components of our program (such as the financial incentive) contributed more or less than these, though the sum of these interventions produced an overall program that required substantial buy-in and sustained focus from the hospitalist group. The importance of the hospitalist program is highlighted by its relatively large improvement compared with the nonhospitalist group.

Our study has several limitations. First, the study was conducted at a single center, which may limit its generalizability. Second, the intervention was multifaceted, diminishing our ability to discern which aspects beyond the system-wide change in the telemetry order were most responsible for the observed effect among hospitalists. Third, we are unable to fully account for baseline differences in telemetry utilization between the hospitalist and nonhospitalist groups. It is likely that different services utilize telemetry monitoring in different ways, and the hospitalist group may have been more aware of the existing guidelines for monitoring prior to the intervention. Furthermore, we had a limited sample size for the chart audits, which reduced the available statistical power for determining changes in the appropriateness of telemetry utilization. Additionally, because internal medicine residents rotate through various services, it is possible that the education they received on their hospitalist rotation as part of our intervention had a spillover effect in the nonhospitalist group. However, any such effect would have decreased the difference between the groups. Lastly, although our postintervention time period was 1 year, we do not have data beyond that to monitor the sustainability of the results.


Conclusion

In this single-site study, combining EHR orders prompting physicians to choose a clinical indication and duration for monitoring with a broader program—including upstream changes in ordering as well as education, audit, and feedback—produced reductions in telemetry usage. Whether this reduction improves the appropriateness of telemetry utilization or reduces other effects of telemetry (eg, alert fatigue, calls for benign arrhythmias) cannot be discerned from our study. However, our results support the idea that multipronged approaches to telemetry use are most likely to produce improvements.


The authors thank Dr. Frank Thomas for his assistance with process engineering and Mr. Andrew Wood for his routine provision of data. The statistical analysis was supported by the University of Utah Study Design and Biostatistics Center, with funding in part from the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through Grant 5UL1TR001067-05 (formerly 8UL1TR000105 and UL1RR025764).


The authors have no conflicts of interest to report.

