Respiratory rate is the most accurate vital sign for predicting adverse outcomes in ward patients.1,2 Although other vital signs are typically collected by machines, respiratory rate is collected manually by caregivers counting breaths. However, studies have shown significant discrepancies between the respiratory rate documented in a patient's medical record, which is often 18 or 20, and the value measured by counting breaths over a full minute.3 Thus, despite the high predictive accuracy of respiratory rate, these documented values may not represent true patient physiology. It is unknown whether a valid automated measurement of respiratory rate would be more predictive than a manually collected one for identifying patients who deteriorate. The aim of this study was to compare the distribution and predictive accuracy of manually and automatically recorded respiratory rates.
In this prospective cohort study, adult patients admitted to one oncology ward at the University of Chicago from April 2015 to May 2016 were approached for consent (Institutional Review Board #14-0682). Enrolled patients were fitted with a cableless, FDA-approved respiratory pod device (Philips IntelliVue clResp Pod; Philips Healthcare, Andover, MA) that automatically recorded respiratory rate and heart rate every 15 minutes while they remained on the ward. Pod data were paired with vital sign data documented in the electronic health record (EHR) by taking the automated value closest to, but prior to, the manual value, up to a maximum look-back of 4 hours. Automated and manual respiratory rates were compared using the area under the receiver operating characteristic curve (AUC) for whether an intensive care unit (ICU) transfer occurred within 24 hours of each paired observation, without accounting for patient-level clustering.
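For illustration, the pairing and comparison procedure described above might be sketched as follows. This is a minimal sketch with hypothetical data and column names, not the study's actual analysis code; it assumes the automated and manual vitals are available as timestamped tables.

```python
# Hypothetical sketch of the pairing and AUC analysis: match each manually
# documented EHR value with the closest automated pod value recorded at or
# before it (up to 4 hours earlier), then compare predictive accuracy for
# ICU transfer within 24 hours. All data below are invented examples.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical automated pod readings (recorded every 15 minutes).
pod = pd.DataFrame({
    "time": pd.to_datetime([
        "2015-04-01 08:00", "2015-04-01 08:15", "2015-04-01 12:00",
    ]),
    "auto_rr": [22, 24, 30],
})

# Hypothetical manually documented EHR vitals, with the outcome label:
# whether an ICU transfer occurred within 24 hours of the observation.
ehr = pd.DataFrame({
    "time": pd.to_datetime([
        "2015-04-01 08:20", "2015-04-01 13:00",
    ]),
    "manual_rr": [18, 28],
    "icu_within_24h": [0, 1],
})

# Pair each manual value with the closest prior automated value,
# discarding pairs where the gap exceeds the 4-hour maximum.
paired = pd.merge_asof(
    ehr.sort_values("time"),
    pod.sort_values("time"),
    on="time",
    direction="backward",
    tolerance=pd.Timedelta("4h"),
).dropna(subset=["auto_rr"])

# AUC for each respiratory rate source against the ICU-transfer outcome.
auc_manual = roc_auc_score(paired["icu_within_24h"], paired["manual_rr"])
auc_auto = roc_auc_score(paired["icu_within_24h"], paired["auto_rr"])
print(auc_manual, auc_auto)
```

Note that, as in the study, each paired observation is treated as independent; a patient-level clustering adjustment would require a different variance estimator.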
In this prospective cohort study, we found that manually recorded respiratory rates differed from those collected by an automated system yet were significantly more accurate for predicting ICU transfer. These results suggest that the predictive accuracy of respiratory rates documented in the EHR reflects more than physiology alone. Our findings have important implications for the risk stratification of ward patients.
Although previous literature has suggested that respiratory rate is the most accurate predictor of deterioration, this may not be true.1 Respiratory rates recorded manually by clinical staff may contain information beyond pure physiology, such as a proxy for clinician concern, which may inflate their predictive value. Nursing staff may record standard respiratory rate values for patients who appear well (eg, 18) but count actual rates for patients they suspect have more severe disease, which is one possible explanation for our findings. In addition, automated assessments are likely more sensitive to intermittent fluctuations in respiratory rate associated with patient movement or emotion, which might explain the improved accuracy of manually recorded vital signs at higher rates.
Although our study is limited by its small sample size, the results have important implications for patient monitoring and for early warning scores designed to identify high-risk ward patients, given that both simple scores and statistically derived models include respiratory rate as a predictor.4 As hospitals adopt newer technologies to automate vital sign monitoring and decrease nursing workload, our findings suggest that accuracy for identifying high-risk patients may be lost. Additional methods for capturing subjective assessments from clinical providers may be necessary and could be incorporated into risk scores.5 For example, the 7-point subjective Patient Acuity Rating has been shown to augment the Modified Early Warning Score for predicting ICU transfer, rapid response activation, or cardiac arrest within 24 hours.6
Manually recorded respiratory rate may include information beyond pure physiology, which may inflate its predictive value. This has important implications for the use of automated monitoring technology in hospitals and the integration of these measurements into early warning scores.
The authors thank Pamela McCall, BSN, OCN for her assistance with study implementation, Kevin Ig-Izevbekhai and Shivraj Grewal for assistance with data collection, UCM Clinical Engineering for technical support, and Timothy Holper, MS, Julie Johnson, MPH, RN, and Thomas Sutton for assistance with data abstraction.
Dr. Churpek is supported by a career development award from the National Heart, Lung, and Blood Institute (K08 HL121080) and has received honoraria from Chest for invited speaking engagements. Dr. Churpek and Dr. Edelson have a patent pending (ARCD. P0535US.P2) for risk stratification algorithms for hospitalized patients. In addition, Dr. Edelson has received research support from Philips Healthcare (Andover, MA), research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and research support from EarlySense (Tel Aviv, Israel). She has ownership interest in Quant HC (Chicago, IL), which is developing products for risk stratification of hospitalized patients. This study was supported by a grant from Philips Healthcare in Andover, MA. The sponsor had no role in data collection, interpretation of results, or drafting of the manuscript.