Clinical alarm safety has become a recent target for improvement in many hospitals. In 2013, The Joint Commission released a National Patient Safety Goal prompting accredited hospitals to establish alarm safety as a hospital priority, identify the most important alarm signals to manage, and, by 2016, develop policies and procedures that address alarm management.[1] In addition, the Emergency Care Research Institute has named alarm hazards the top health technology hazard each year since 2012.[2] The primary arguments supporting the elevation of alarm management to a national hospital priority in the United States include the following: (1) clinicians rely on alarms to notify them of important physiologic changes, (2) alarms occur frequently and usually do not warrant clinical intervention, and (3) alarm overload renders clinicians unable to respond to all alarms, resulting in alarm fatigue: responding more slowly or ignoring alarms that may represent actual clinical deterioration.[3, 4] These arguments are built largely on anecdotal data, reported safety event databases, and small studies that have not previously been systematically analyzed. Despite the national focus on alarms, we still know very little about fundamental questions key to improving alarm safety. In this systematic review, we aimed to answer 3 key questions about physiologic monitor alarms: (1) What proportion of alarms warrant attention or clinical intervention (ie, actionable alarms), and how does this proportion vary between adult and pediatric populations and between intensive care unit (ICU) and ward settings? (2) What is the relationship between alarm exposure and clinician response time? (3) What interventions are effective in reducing the frequency of alarms? We limited our scope to monitor alarms because few studies have evaluated the characteristics of alarms from other medical devices, and because missing relevant monitor alarms could adversely impact patient safety. 
We performed a systematic review of the literature in accordance with the Meta‐Analysis of Observational Studies in Epidemiology guidelines[5] and developed this manuscript using the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement.[6] With help from an experienced biomedical librarian (C.D.S.), we searched PubMed, the Cumulative Index to Nursing and Allied Health Literature, Scopus, Cochrane Library, ClinicalTrials.gov, and Google Scholar from January 1980 through April 2015 (see Supporting Information in the online version of this article for the search terms and queries). We hand searched the reference lists of included articles and reviewed our personal libraries to identify additional relevant studies. We included peer‐reviewed, original research studies published in English, Spanish, or French that addressed the questions outlined above. Eligible patient populations were children and adults admitted to hospital inpatient units and emergency departments (EDs). We excluded alarms in procedural suites or operating rooms (typically responded to by anesthesiologists already with the patient) because of the differences in environment of care, staff‐to‐patient ratio, and equipment. We included observational studies reporting the actionability of physiologic monitor alarms (ie, alarms warranting special attention or clinical intervention), as well as nurse responses to these alarms. We excluded studies focused on the effects of alarms unrelated to patient safety, such as families' and patients' stress, noise, or sleep disturbance. We included only intervention studies evaluating pragmatic interventions ready for clinical implementation (ie, not experimental devices or software algorithms). First, 2 authors screened the titles and abstracts of articles for eligibility. To maximize sensitivity, if at least 1 author considered the article relevant, the article proceeded to full‐text review. 
Second, the full texts of articles screened were independently reviewed by 2 authors in an unblinded fashion to determine their eligibility. Any disagreements concerning eligibility were resolved by team consensus. To ensure consistency in eligibility determinations across the team, a core group of the authors (C.W.P., C.P.B., E.E., and V.V.G.) held a series of meetings to review and discuss each potentially eligible article and reach consensus on the final list of included articles. Two authors independently extracted the following characteristics from included studies: alarm review methods, analytic design, fidelity measurement, consideration of unintended adverse safety consequences, and key results. Reviewers were not blinded to journal, authors, or affiliations. Given the high degree of heterogeneity in methodology, we were unable to generate summary proportions of the observational studies or perform a meta‐analysis of the intervention studies. Thus, we organized the studies into clinically relevant categories and presented key aspects in tables. Due to the heterogeneity of the studies and the controversy surrounding quality scores,[5] we did not generate summary scores of study quality. Instead, we evaluated and reported key design elements that had the potential to bias the results. To recognize the more comprehensive studies in the field, we developed by consensus a set of characteristics that distinguished studies with lower risk of bias. These characteristics are shown and defined in Table 1.

NOTE: Lower risk of bias for observational studies required all of the following characteristics be reported: (1) two independent reviewers of alarms, (2) at least 1 clinical expert reviewer (ie, physician or nurse), (3) the reviewer was not simultaneously involved in clinical care, and (4) there was a clear definition of alarm actionability provided in the article.
These indicators assess detection bias, observer bias, analytical bias, and reporting bias and were derived from the Meta‐analysis of Observational Studies in Epidemiology checklist.[5] Lower risk of bias for intervention studies required all of the following characteristics be reported: (1) patient census accounted for in analysis, (2) formal statistical testing of effect or statistical process control methods used to evaluate effect, (3) intervention fidelity measured and reported (defined broadly as some measurement of whether the intervention was delivered as intended), and (4) relevant patient safety outcomes measured and reported (eg, the rate of code blue events before and after the intervention was implemented). These indicators assess reporting bias and internal validity bias and were derived from the Downs and Black checklist.[42]

Monitor system: alarm data were electronically collected directly from the physiologic monitors and saved on a computer device through software such as BedMasterEx.
Direct observation: an in‐person observer, such as a research assistant or a nurse, takes note of the alarm data and/or responses to alarms.
Medical record review: data on alarms and/or responses to alarms were extracted from the patient medical records.
Rhythm annotation: data on waveforms from cardiac monitors were collected and saved on a computer device through software such as BedMasterEx.
Video observation: video cameras were set up in the patient's room and recorded data on alarms and/or responses to alarms.
Remote monitor staff: clinicians situated at a remote location observe the patient via video camera and may be able to communicate with the patient or the patient's assigned nurse.

Abbreviations: QI, quality improvement; RN, registered nurse; SPC, statistical process control. *Monitor system + RN interrogation. Assigned nurse making observations. Monitor from central station.
Alarm outcome reported using run chart, and fidelity outcomes presented using statistical process control charts.

For the purposes of this review, we defined nonactionable alarms as including both invalid (false) alarms, which do not accurately represent the physiologic status of the patient, and alarms that are valid but do not warrant special attention or clinical intervention (nuisance alarms). We did not separate out invalid alarms due to the tremendous variation between studies in how validity was measured.

Search results produced 4629 articles (see the flow diagram in the Supporting Information in the online version of this article), of which 32 articles were eligible: 24 observational studies describing alarm characteristics and 8 studies describing interventions to reduce alarm frequency. Characteristics of included studies are shown in Table 1. Of the 24 observational studies,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] 15 included adult patients,[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] 7 included pediatric patients,[22, 23, 24, 25, 26, 27, 28] and 2 included both adult and pediatric patients.[29, 30] All were single‐hospital studies, except for 1 study by Chambrin and colleagues[10] that included 5 sites.
The number of patient‐hours examined in each study ranged from 60 to 113,880.[7, 8, 9, 10, 11, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30] Hospital settings included ICUs (n = 16),[9, 10, 11, 13, 14, 16, 17, 18, 19, 22, 23, 24, 25, 26, 27, 29] general wards (n = 5),[12, 15, 20, 22, 28] EDs (n = 2),[7, 21] postanesthesia care unit (PACU) (n = 1),[30] and cardiac care unit (CCU) (n = 1).[8] Studies varied in the type of physiologic signals recorded and data collection methods, ranging from direct observation by a nurse who was simultaneously caring for patients[29] to video recording with expert review.[14, 19, 22] Four observational studies met the criteria for lower risk of bias.[11, 14, 15, 22] Of the 8 intervention studies, 7 included adult patients,[31, 32, 33, 34, 35, 36, 37] and 1 included pediatric patients.[38] All were single‐hospital studies; 6 were quasi‐experimental[31, 33, 34, 35, 37, 38] and 2 were experimental.[32, 36] Settings included progressive care units (n = 3),[33, 34, 35] CCUs (n = 3),[32, 33, 37] wards (n = 2),[31, 38] PACU (n = 1),[36] and a step‐down unit (n = 1).[32] All except 1 study[32] used the monitoring system to record alarm data. Several studies evaluated multicomponent interventions that included combinations of the following: widening alarm parameters,[31, 35, 36, 37, 38] instituting alarm delays,[31, 34, 36, 38] reconfiguring alarm acuity,[35, 37] use of secondary notifications,[34] daily change of electrocardiographic electrodes or use of disposable electrocardiographic wires,[32, 33, 38] universal monitoring in high‐risk populations,[31] and timely discontinuation of monitoring in low‐risk populations.[38] Four intervention studies met our prespecified lower risk of bias criteria.[31, 32, 36, 38] Results of the observational studies are provided in Table 2. 
The proportion of alarms that were actionable was <1% to 26% in adult ICU settings,[9, 10, 11, 13, 14, 16, 17, 19] 20% to 36% in adult ward settings,[12, 15, 20] 17% in a mixed adult and pediatric PACU setting,[30] 3% to 13% in pediatric ICU settings,[22, 23, 24, 25, 26] and 1% in a pediatric ward setting.[22]

Abbreviations: CCU, cardiac/telemetry care unit; ECG, electrocardiogram; ED, emergency department; ICU, intensive care unit; PACU, postanesthesia care unit; SpO2, oxygen saturation; VT, ventricular tachycardia. ^a Includes respiratory rate measured via ECG leads. Actionable is defined as alarms warranting special attention or clinical intervention. Valid is defined as the alarm accurately representing the physiologic status of the patient. Directly addresses relationship between alarm exposure and response time. ∥Not provided directly; estimated from description of data collection methods.

Whereas 9 studies addressed response time,[8, 12, 17, 18, 20, 21, 22, 27, 28] only 2 evaluated the relationship between alarm burden and nurse response time.[20, 22] Voepel‐Lewis and colleagues found that nurses responded more slowly to patients in the highest quartile of alarms (57.6 seconds) than to those in the lowest (45.4 seconds) or middle (42.3 seconds) quartiles on an adult ward (P = 0.046).
They did not find an association between false alarm exposure and response time.[20] Bonafide and colleagues found incremental increases in response time as the number of nonactionable alarms in the preceding 120 minutes increased (P < 0.001 in the pediatric ICU, P = 0.009 on the pediatric ward).[22]

Results of the 8 intervention studies are provided in Table 3. Three studies evaluated single interventions;[32, 33, 36] the remainder tested interventions with multiple components such that it was impossible to separate the effect of each component. Below, we have summarized study results, arranged by component. Because only 1 study focused on pediatric patients,[38] results from pediatric and adult settings are combined.

Abbreviations: CCU, cardiac/telemetry care unit; ECG, electrocardiogram; ICU, intensive care unit; ITS, interrupted time series; PACU, postanesthesia care unit; PCU, progressive care unit; SpO2, oxygen saturation. *Delays were part of secondary notification system only. Delays explored retrospectively only; not part of prospective evaluation. Preimplementation count not reported.
Widening alarm parameter default settings was evaluated in 5 studies:[31, 35, 36, 37, 38] 1 single‐intervention randomized controlled trial (RCT)[36] and 4 multiple‐intervention, quasi‐experimental studies.[31, 35, 37, 38] In the RCT, using a lower SpO2 limit of 85% instead of the standard 90% resulted in 61% fewer alarms. In the 4 multiple‐intervention studies, 1 study reported significant reductions in alarm rates (P < 0.001),[37] 1 study did not report preintervention alarm rates but reported a postintervention alarm rate of 4 alarms per patient‐day,[31] and 2 studies reported reductions in alarm rates but did not report any statistical testing.[35, 38] Of the 3 studies examining patient safety, 1 study with universal monitoring reported fewer rescue events and transfers to the ICU postimplementation,[31] 1 study reported no missed acute decompensations,[38] and 1 study (the RCT) reported significantly more true hypoxemia events (P = 0.001).[36] Alarm delays were evaluated in 4 studies:[31, 34, 36, 38] 3 multiple‐intervention, quasi‐experimental studies[31, 34, 38] and 1 retrospective analysis of data from an RCT.[36] One study combined alarm delays with widening defaults in a universal monitoring strategy and reported a postintervention alarm rate of 4 alarms per patient‐day.[31] Another study evaluated delays as part of a secondary notification pager system and found a negatively sloping regression line that suggested a decreasing alarm rate, but did not report statistical testing.[34] The third study reported a reduction in alarm rates but did not report statistical testing.[38] The RCT compared the impact of a hypothetical 15‐second alarm delay to that of a lower SpO2 limit reduction and reported a similar reduction in alarms.[36] Of the 4 studies examining patient safety, 1 study with universal monitoring reported improvements,[31] 2 studies reported no adverse outcomes,[35, 38] and the retrospective analysis of data from the RCT reported the theoretical
adverse outcome of delayed detection of sudden, severe desaturations.[36] Reconfiguring alarm acuity was evaluated in 2 studies, both of which were multiple‐intervention quasi‐experimental studies.[35, 37] Both showed reductions in alarm rates: 1 was significant without increasing adverse events (P < 0.001),[37] and the other did not report statistical testing or safety outcomes.[35] Secondary notification of nurses using pagers was the main intervention component of 1 study incorporating delays between the alarms and the alarm pages.[34] As mentioned above, a negatively sloping regression line was displayed, but no statistical testing or safety outcomes were reported. Disposable electrocardiographic lead wires or daily electrode changes were evaluated in 3 studies:[32, 33, 38] 1 single intervention cluster‐randomized trial[32] and 2 quasi‐experimental studies.[33, 38] In the cluster‐randomized trial, disposable lead wires were compared to reusable lead wires, with disposable lead wires having significantly fewer technical alarms for lead signal failures (P = 0.03) but a similar number of monitoring artifact alarms (P = 0.44).[32] In a single‐intervention, quasi‐experimental study, daily electrode change showed a reduction in alarms, but no statistical testing was reported.[33] One multiple‐intervention, quasi‐experimental study incorporating daily electrode change showed fewer alarms without statistical testing.[38] Of the 2 studies examining patient safety, both reported no adverse outcomes.[32, 38] This systematic review of physiologic monitor alarms in the hospital yielded the following main findings: (1) between 74% and 99% of physiologic monitor alarms were not actionable, (2) a significant relationship between alarm exposure and nurse response time was demonstrated in 2 small observational studies, and (3) although interventions were most often studied in combination, results from the studies with lower risk of bias suggest that widening alarm parameters, 
implementing alarm delays, and using disposable electrocardiographic lead wires and/or changing electrodes daily are the most promising interventions for reducing alarms. Only 5 of 8 intervention studies measured intervention safety and found that widening alarm parameters and implementing alarm delays had mixed safety outcomes, whereas disposable electrocardiographic lead wires and daily electrode changes had no adverse safety outcomes.[29, 30, 34, 35, 36] Measuring safety is crucial: an alarm‐reduction intervention is of little use if it achieves its reductions by suppressing actionable alarms. The variation in results across studies likely reflects the wide range of care settings as well as differences in design and quality. This field is still in its infancy, with 18 of the 32 articles published in the past 5 years. We anticipate improvements in quality and rigor as the field matures, as well as clinically tested interventions that incorporate smart alarms. Smart alarms integrate data from multiple physiologic signals and the patient's history to better detect physiologic changes in the patient and improve the positive predictive value of alarms. Academic‐industry partnerships will be required to implement and rigorously test smart alarms and other emerging technologies in the hospital. To our knowledge, this is the first systematic review focused on monitor alarms with specific review questions relevant to alarm fatigue. Cvach recently published an integrative review of alarm fatigue using research published through 2011.[39] Our review builds upon her work by contributing a more extensive and systematic search strategy with databases spanning nursing, medicine, and engineering, by including additional languages, and by including newer studies published through April 2015. In addition, we included multiple cross‐team checks in our eligibility review to ensure high sensitivity and specificity of the resulting set of studies.
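To make the mechanics of the two most‐studied intervention components concrete, the short simulation below counts alarms on a synthetic SpO2 trace under a lower alarm limit and under a sustain delay. This is an illustrative sketch only, not drawn from any reviewed study; the 85% vs 90% limits and the 15‐second delay simply echo the values reported in the Rheineck‐Leyssius RCT, and the trace and function are hypothetical.

```python
# Illustrative sketch (assumed values, not study data): how a lower SpO2
# limit and a sustain ("alarm") delay each reduce alarm counts.

def count_alarms(spo2, threshold, delay_s=0, sample_interval_s=1):
    """Count alarm episodes: an alarm fires once when SpO2 stays below
    `threshold` for more than `delay_s` seconds; consecutive
    below-threshold samples belong to the same episode."""
    alarms = 0
    run = 0       # seconds spent continuously below threshold
    fired = False  # whether this episode has already alarmed
    for value in spo2:
        if value < threshold:
            run += sample_interval_s
            if run > delay_s and not fired:
                alarms += 1
                fired = True
        else:
            run = 0
            fired = False
    return alarms

# Synthetic 1-sample-per-second trace: two brief dips (5 s at 88%,
# 10 s at 87%, e.g. motion artifact) and one sustained desaturation
# (60 s at 82%).
trace = ([98] * 60 + [88] * 5 + [97] * 60 + [87] * 10
         + [96] * 60 + [82] * 60 + [97] * 30)

print(count_alarms(trace, threshold=90))              # all three dips alarm
print(count_alarms(trace, threshold=85))              # only the sustained episode
print(count_alarms(trace, threshold=90, delay_s=15))  # delay filters brief dips
```

The sketch also shows the trade-off the RCT data highlighted: the delay suppresses the brief dips but makes the sustained desaturation alarm roughly 15 seconds later than it otherwise would.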
Although we focused on interventions aiming to reduce alarms, there has also been important recent work focused on reducing telemetry utilization in adult hospital populations as well as work focused on reducing pulse oximetry utilization in children admitted with respiratory conditions. Dressler and colleagues reported an immediate and sustained reduction in telemetry utilization in hospitalized adults upon redesign of cardiac telemetry order sets to include the clinical indication, which defaulted to the American Heart Association guideline‐recommended telemetry duration.[40] Instructions for bedside nurses were also included in the order set to facilitate appropriate telemetry discontinuation. Schondelmeyer and colleagues reported reductions in continuous pulse oximetry utilization in hospitalized children with asthma and bronchiolitis upon introduction of a multifaceted quality improvement program that included provider education, a nurse handoff checklist, and discontinuation criteria incorporated into order sets.[41] There are limitations to this systematic review and its underlying body of work. With respect to our approach to this systematic review, we focused only on monitor alarms. Numerous other medical devices generate alarms in the patient‐care environment that also can contribute to alarm fatigue and deserve equally rigorous evaluation. With respect to the underlying body of work, the quality of individual studies was generally low. For example, determinations of alarm actionability were often made by a single rater without evaluation of the reliability or validity of these determinations, and statistical testing was often missing. There were also limitations specific to intervention studies, including evaluation of nongeneralizable patient populations, failure to measure the fidelity of the interventions, inadequate measures of intervention safety, and failure to statistically evaluate alarm reductions. 
Finally, though not necessarily a limitation, several studies were conducted by authors involved in or funded by the medical device industry.[11, 15, 19, 31, 32] This has the potential to introduce bias, although we have no indication that the quality of the science was adversely impacted. Moving forward, the research agenda for physiologic monitor alarms should include the following: (1) more intensive focus on evaluating the relationship between alarm exposure and response time with analysis of important mediating factors that may promote or prevent alarm fatigue, (2) emphasis on studying interventions aimed at improving alarm management using rigorous designs such as cluster‐randomized trials and trials randomized by individual participant, (3) monitoring and reporting clinically meaningful balancing measures that represent unintended consequences of disabling or delaying potentially important alarms and possibly reducing the clinicians' ability to detect true patient deterioration and intervene in a timely manner, and (4) support for transparent academic‐industry partnerships to evaluate new alarm technology in real‐world settings. As evidence‐based interventions emerge, there will be new opportunities to study different implementation strategies of these interventions to optimize effectiveness. The body of literature relevant to physiologic monitor alarm characteristics and alarm fatigue is limited but growing rapidly. Although we know that most alarms are not actionable and that there appears to be a relationship between alarm exposure and response time that could be caused by alarm fatigue, we cannot yet say with certainty that we know which interventions are most effective in safely reducing unnecessary alarms. Interventions that appear most promising and should be prioritized for intensive evaluation include widening alarm parameters, implementing alarm delays, and using disposable electrocardiographic lead wires and changing electrodes daily.
Careful evaluation of these interventions must include systematically examining adverse patient safety consequences. The authors thank Amogh Karnik and Micheal Sellars for their technical assistance during the review and extraction process. Disclosures: Ms. Zander is supported by the Society of Hospital Medicine Student Hospitalist Scholar Grant. Dr. Bonafide and Ms. Stemler are supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number K23HL116427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no conflicts of interest.

METHODS
Eligibility Criteria
Selection Process and Data Extraction
Synthesis of Results and Risk Assessment
Table 1 columns: First Author and Publication Year | Alarm Review Method (Monitor System, Direct Observation, Medical Record Review, Rhythm Annotation, Video Observation, Remote Monitoring Staff) | Medical Device Industry Involved | Indicators of Potential Bias for Observational Studies (Two Independent Reviewers, At Least 1 Reviewer Is a Clinical Expert, Reviewer Not Simultaneously in Patient Care, Clear Definition of Alarm Actionability) | Indicators of Potential Bias for Intervention Studies (Census Included, Statistical Testing or QI SPC Methods, Fidelity Assessed, Safety Assessed) | Lower Risk of Bias
Adult Observational
Atzema 2006[7] | ✓* | ✓ | ✓
Billinghurst 2003[8] | ✓ | ✓ | ✓ | ✓
Biot 2000[9] | ✓ | ✓ | ✓ | ✓
Chambrin 1999[10] | ✓ | ✓ | ✓ | ✓
Drew 2014[11] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Gazarian 2014[12] | ✓ | ✓ | ✓ | ✓ | ✓
Görges 2009[13] | ✓ | ✓ | ✓ | ✓
Gross 2011[15] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Inokuchi 2013[14] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Koski 1990[16] | ✓ | ✓ | ✓ | ✓
Morales Sánchez 2014[17] | ✓ | ✓ | ✓ | ✓
Pergher 2014[18] | ✓ | ✓
Siebig 2010[19] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Voepel‐Lewis 2013[20] | ✓ | ✓ | ✓ | ✓
Way 2014[21] | ✓ | ✓ | ✓
Pediatric Observational
Bonafide 2015[22] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Lawless 1994[23] | ✓ | ✓
Rosman 2013[24] | ✓ | ✓ | ✓ | ✓ | ✓
Talley 2011[25] | ✓ | ✓ | ✓ | ✓ | ✓
Tsien 1997[26] | ✓ | ✓ | ✓ | ✓
van Pul 2015[27] | ✓
Varpio 2012[28] | ✓ | ✓ | ✓ | ✓
Mixed Adult and Pediatric Observational
O'Carroll 1986[29] | ✓
Wiklund 1994[30] | ✓ | ✓ | ✓ | ✓
Adult Intervention
Albert 2015[32] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Cvach 2013[33] | ✓ | ✓
Cvach 2014[34] | ✓ | ✓
Graham 2010[35] | ✓
Rheineck‐Leyssius 1997[36] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Taenzer 2010[31] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Whalen 2014[37] | ✓ | ✓ | ✓
Pediatric Intervention
Dandoy 2014[38] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
RESULTS
Study Selection
Observational Study Characteristics
Intervention Study Characteristics
Proportion of Alarms Considered Actionable
Table 2 columns: First Author and Publication Year | Setting | Monitored Patient‐Hours | Signals Included (SpO2, ECG Arrhythmia, ECG Parameters^a, Blood Pressure) | Total Alarms | Actionable Alarms | Alarm Response | Lower Risk of Bias
Adult
Atzema 2006[7] | ED | 371 | ✓ | 1,762 | 0.20%
Billinghurst 2003[8] | CCU | 420 | ✓ | 751 | Not reported; 17% were valid | Nurses with higher acuity patients and smaller % of valid alarms had slower response rates
Biot 2000[9] | ICU | 250 | ✓ ✓ ✓ ✓ | 3,665 | 3%
Chambrin 1999[10] | ICU | 1,971 | ✓ ✓ ✓ ✓ | 3,188 | 26%
Drew 2014[11] | ICU | 48,173 | ✓ ✓ ✓ ✓ | 2,558,760 | 0.3% of 3,861 VT alarms | ✓
Gazarian 2014[12] | Ward | 54 nurse‐hours | ✓ ✓ ✓ | 205 | 22% | Response to 47% of alarms
Görges 2009[13] | ICU | 200 | ✓ ✓ ✓ ✓ | 1,214 | 5%
Gross 2011[15] | Ward | 530 | ✓ ✓ ✓ ✓ | 4,393 | 20% | ✓
Inokuchi 2013[14] | ICU | 2,697 | ✓ ✓ ✓ ✓ | 11,591 | 6% | ✓
Koski 1990[16] | ICU | 400 | ✓ ✓ | 2,322 | 12%
Morales Sánchez 2014[17] | ICU | 434 sessions | ✓ ✓ ✓ | 215 | 25% | Response to 93% of alarms, of which 50% were within 10 seconds
Pergher 2014[18] | ICU | 60 | ✓ | 76 | Not reported | 72% of alarms stopped before nurse response or had >10 minutes response time
Siebig 2010[19] | ICU | 982 | ✓ ✓ ✓ ✓ | 5,934 | 15%
Voepel‐Lewis 2013[20] | Ward | 1,616 | ✓ | 710 | 36% | Response time was longer for patients in highest quartile of total alarms
Way 2014[21] | ED | 93 | ✓ ✓ ✓ ✓ | 572 | Not reported; 75% were valid | Nurses responded to more alarms in resuscitation room vs acute care area, but response time was longer
Pediatric
Bonafide 2015[22] | Ward + ICU | 210 | ✓ ✓ ✓ ✓ | 5,070 | 13% PICU, 1% ward | Incremental increases in response time as number of nonactionable alarms in preceding 120 minutes increased | ✓
Lawless 1994[23] | ICU | 928 | ✓ ✓ ✓ | 2,176 | 6%
Rosman 2013[24] | ICU | 8,232 | ✓ ✓ ✓ ✓ | 54,656 | 4% of rhythm alarms "true critical"
Talley 2011[25] | ICU | 1,470∥ | ✓ ✓ ✓ ✓ | 2,245 | 3%
Tsien 1997[26] | ICU | 298 | ✓ ✓ ✓ | 2,942 | 8%
van Pul 2015[27] | ICU | 113,880∥ | ✓ ✓ ✓ ✓ | 222,751 | Not reported | Assigned nurse did not respond to 6% of alarms within 45 seconds
Varpio 2012[28] | Ward | 49 unit‐hours | ✓ ✓ ✓ ✓ | 446 | Not reported | 70% of all alarms and 41% of crisis alarms were not responded to within 1 minute
Both
O'Carroll 1986[29] | ICU | 2,258∥ | ✓ | 284 | 2%
Wiklund 1994[30] | PACU | 207 | ✓ ✓ ✓ | 1,891 | 17%
Relationship Between Alarm Exposure and Response Time
Interventions Effective in Reducing Alarms
Table 3 columns: First Author and Publication Year | Design | Setting | Main Intervention Components (Widen Default Settings, Alarm Delays, Reconfigure Alarm Acuity, Secondary Notification, ECG Changes) | Other/Comments | Key Results | Results Statistically Significant? | Lower Risk of Bias
Adult
Albert 2015[32] | Experimental (cluster‐randomized) | CCU | ✓ | Disposable vs reusable wires | Disposable leads had 29% fewer no‐telemetry, leads‐fail, and leads‐off alarms and similar artifact alarms | ✓ | ✓
Cvach 2013[33] | Quasi‐experimental (before and after) | CCU and PCU | ✓ | Daily change of electrodes | 46% fewer alarms/bed/day
Cvach 2014[34] | Quasi‐experimental (ITS) | PCU | ✓* ✓ | Slope of regression line suggests decrease of 0.75 alarms/bed/day
Graham 2010[35] | Quasi‐experimental (before and after) | PCU | ✓ ✓ | 43% fewer crisis, warning, and system warning alarms on unit
Rheineck‐Leyssius 1997[36] | Experimental (RCT) | PACU | ✓ ✓ | Alarm limit of 85% had fewer alarms/patient but higher incidence of true hypoxemia for >1 minute (6% vs 2%) | ✓ | ✓
Taenzer 2010[31] | Quasi‐experimental (before and after with concurrent controls) | Ward | ✓ ✓ | Universal SpO2 monitoring | Rescue events decreased from 3.4 to 1.2 per 1,000 discharges; transfers to ICU decreased from 5.6 to 2.9 per 1,000 patient‐days; only 4 alarms/patient‐day | ✓ | ✓
Whalen 2014[37] | Quasi‐experimental (before and after) | CCU | ✓ ✓ | 89% fewer audible alarms on unit | ✓
Pediatric
Dandoy 2014[38] | Quasi‐experimental (ITS) | Ward | ✓ ✓ ✓ | Timely monitor discontinuation; daily change of ECG electrodes | Decrease in alarms/patient‐days from 180 to 40 | ✓
DISCUSSION
Limitations of This Review and the Underlying Body of Work
CONCLUSIONS
Acknowledgements
Systematic Review of Physiologic Monitor Alarm Characteristics and Pragmatic Interventions to Reduce Alarm Frequency
corresponding author
Address for correspondence and reprint requests: Christopher P. Bonafide, MD, MSCE, The Children's Hospital of Philadelphia, 3401 Civic Center Blvd., Philadelphia, PA 19104; Telephone: 267‐426‐2901; E‐mail: [email protected]
Abstract
BACKGROUND: Alarm fatigue from frequent nonactionable physiologic monitor alarms is frequently named as a threat to patient safety.

PURPOSE: To critically examine the available literature relevant to alarm fatigue.

DATA SOURCES: Articles published in English, Spanish, or French between January 1980 and April 2015 indexed in PubMed, Cumulative Index to Nursing and Allied Health Literature, Scopus, Cochrane Library, Google Scholar, and ClinicalTrials.gov.

STUDY SELECTION: Articles focused on hospital physiologic monitor alarms addressing any of the following: (1) the proportion of alarms that are actionable, (2) the relationship between alarm exposure and nurse response time, and (3) the effectiveness of interventions in reducing alarm frequency.

DATA EXTRACTION: We extracted data on setting, collection methods, proportion of alarms determined to be actionable, nurse response time, and associations between interventions and alarm rates.

DATA SYNTHESIS: Our search produced 24 observational studies focused on alarm characteristics and response time and 8 studies evaluating interventions. Actionable alarm proportion ranged from <1% to 36% across a range of hospital settings. Two studies showed relationships between high alarm exposure and longer nurse response time. Most intervention studies included multiple components implemented simultaneously. Although studies varied widely, and many had high risk of bias, promising but still unproven interventions include widening alarm parameters, instituting alarm delays, and using disposable electrocardiographic wires or frequently changed electrocardiographic electrodes.

CONCLUSIONS: Physiologic monitor alarms are commonly nonactionable, and evidence supporting the concept of alarm fatigue is emerging. Several interventions have the potential to reduce alarms safely, but more rigorously designed studies with attention to possible unintended consequences are needed.

Journal of Hospital Medicine 2016;11:136–144. © 2015 Society of Hospital Medicine