Sepsis affects over 1 million Americans annually, resulting in significant morbidity, mortality, and costs for hospitalized patients.1-4 There is increasing interest in policy-oriented approaches to improving sepsis care at both the state and national levels.5,6 The most prominent policy is the Centers for Medicare and Medicaid Services (CMS) Severe Sepsis and Septic Shock Early Management Bundle (SEP-1) program, which was formally implemented in October 2015; the program mandates that hospitals report their compliance with a variety of sepsis treatment processes (Table 1). Academic quality experts generally applaud the increased attention to sepsis but are concerned that the measure’s design and specifications advance beyond the existing evidence base.7,8 However, remarkably little is known about how front-line hospital quality officials perceive the program and how they are responding, or not responding, to the new requirements. This knowledge gap is a critical barrier to evaluating the program’s practical impact on sepsis treatment and outcomes.
Study Design, Setting, and Subjects
We conducted a qualitative study using semistructured telephone interviews with hospital quality officers in the United States. We targeted hospital quality officers because they are in a position to provide overarching insights into hospitals’ perceptions of and responses to the SEP-1 program. We enrolled quality officers at general, short-stay, nonfederal acute care hospitals because those are the hospitals to which the SEP-1 program applies. We generated a stratified random sample of hospitals using 2013 data from Medicare’s Healthcare Cost Report Information System (HCRIS) database.10 We stratified by size (greater than or less than 200 total beds), teaching status (presence or absence of any resident physician trainees), and ownership (for-profit vs nonprofit), creating 8 mutually exclusive strata. This sampling frame was designed to ensure representation of a broad range of hospital types, not to enable comparisons across hospital types, which is outside the scope of qualitative research.
Within strata, we contacted hospitals in a random order by phone using the primary number listed in the HCRIS database. We asked the hospital operator to connect us to the chief quality officer or an appropriate alternative hospital administrator with knowledge of hospital quality-improvement activities. We limited participation to 1 respondent per hospital. We did not offer any specific incentives for participation.
The study was approved by the University of Pittsburgh Institutional Review Board with a waiver of signed informed consent.
Interviews were conducted by a trained research coordinator between February 2016 and October 2016. Interviews were conducted concurrently with data analysis using a constant comparison approach.11 The constant comparison approach involves the iterative refinement of themes by comparing the existing themes to new data as they emerge during successive interviews. We chose a constant comparison approach because we wanted to systematically describe hospital responses to SEP-1 rather than specifically test individual hypotheses.11 As is typical in qualitative research, we did not set the sample size a priori but instead continued the interviews until we achieved thematic saturation.12,13
The interview script included a mix of directed and open-ended questions about respondents’ perspectives of and hospital responses to the SEP-1 program. The questions covered the following 4 domains: hospitals’ sepsis quality-improvement initiatives before and after the Medicare reporting program, clinicians’ reception of the hospital responses, the approach to data abstraction and reporting, and the overall impressions of the program and its impact.6-8,14 We allowed for updates and revisions of the interview guide as necessary to explore any new content and emergent themes. We piloted the interview guide on 2 hospital quality officers at our institution and then revised its structure again after interviews with the initial 6 hospitals. The complete final interview guide is available in the supplemental digital content.
Interviews were audio recorded, transcribed, and loaded onto a secure server. We used NVivo 11 (QSR International, Cambridge, Massachusetts) for coding and analysis. We iteratively reviewed and thematically analyzed the transcripts for structural content and emergent themes, consistent with established qualitative methods.15 Three investigators reviewed the initial 20 transcripts and developed the codebook through iterative discussion and consensus. The codes were then organized into themes and subthemes. Subsequently, 1 investigator coded the remaining transcripts. The results are presented as a series of key themes supported by direct quotes from the interviews.
Perspectives on SEP-1
Responses to SEP-1
Efforts to Collect Data for SEP-1 Reporting
Respondents reported challenges in reliably and validly measuring and reporting data for the SEP-1 program. First, patient identification and the measurement of treatment processes depend largely on manual medical record review, which is subject to variation across coders. This presents a particular challenge because the clinical definition of sepsis itself is evolving,1 creating the possibility that treating physicians could identify a given patient as having sepsis or septic shock based on the most up-to-date definitions but not based on the measure’s specifications, or vice versa. Second, each case requires up to an hour of manual medical record review, and patients who develop sepsis during prolonged hospitalizations can require several hours or more, which is an unprecedented length of time to spend abstracting data for a single measure.
In addressing these measurement challenges, investment in human resources is the rule. No respondent reported automating abstraction of all the SEP-1 data elements, underscoring concerns regarding the measurement burden of the SEP-1 program.7,8,14 Rather, hospitals with sufficient financial resources frequently employ full-time data abstractors and individuals responsible for ongoing performance feedback, which facilitates the iterative revision of sepsis quality-improvement initiatives. In contrast, hospitals with fewer resources often rely on contracts with third-party vendors, which delays reporting and complicates efforts to use the data for individualized performance improvement.
Efforts to Coordinate Hospital Responses Across Care Teams
Complying with the measure involves the longitudinal coordination of multiple care teams across different units, so planning and executing local hospital responses required interdepartmental and multidisciplinary stakeholder involvement. Respondents were uncertain about the ideal strategy to coordinate these quality-improvement efforts, yielding iterative changes to electronic health records (EHRs), education programs, and data collection methods. This “learning by doing” is necessary because no prior CMS quality measure is as complex as SEP-1 or as varied in the sources of data required to measure and report the results. By requiring hospitals to improve coordination of care throughout the hospital, SEP-1 presents a quality-improvement and measurement challenge that may ultimately drive innovation and better patient care.
Efforts to Improve Sepsis Diagnosis
Several hospitals are implementing sepsis screening and alerts to speed sepsis recognition and meet the measure’s time-sensitive treatment requirements. An example of a less-intensive alert is one hospital’s lowering of the threshold for lactate values that are viewed as “critical” (and thus requiring notification of the bedside clinician). Examples of more resource-intensive alerts included electronic screening for vital sign abnormalities that trigger bedside assessment for infection as well as nurse-driven manual sepsis screening tools.
Frequently, these more intensive efforts faced barriers to successful implementation related to the broader issues of performance measurement rather than the specifics of SEP-1. EHRs generally lacked built-in electronic screening capacity, and few hospitals had the resources required for customized EHR modification. Manual screening required nurses to spend time away from direct patient care. For both electronic and manual screening, respondents expressed concern about how these new alerts would fit into a care landscape already inundated with alerts, alarms, and care notifications.16,17
Efforts to Improve Sepsis Treatment
Many hospitals are implementing sepsis-specific treatment protocols and order sets designed to help meet SEP-1 treatment specifications. In hospitals and health systems with preexisting sepsis quality-improvement efforts, SEP-1 stimulated adaptation and acceleration of their efforts; in hospitals without preexisting sepsis-specific quality improvement, SEP-1 inspired de novo program development and implementation. These programs were wide ranging. Several hospitals implemented a process by which an initially elevated lactate value automatically generates an order for a repeat lactate level, facilitating an assessment of the clinical response to treatment. Other examples include triggers for sepsis-specific treatment protocols and checklists that bedside nurses can begin without initial physician oversight. In 1 hospital, sepsis alerts triggered by emergency medical first responders initiate responses prior to hospital arrival in a manner analogous to prehospital alerts for myocardial infarction and stroke.18,19
Efforts to implement these protocols encountered several common challenges. Physicians were often resistant to adopting inflexible treatment rules that did not allow them to tailor therapies to individual patients. Furthermore, even protocols and order sets that worked in 1 setting did not necessarily generalize throughout the hospital or health system, reflecting the difficulty in implementing a highly specified measure across diverse treatment environments.
Efforts to Manage Clinician Attitudes Toward SEP-1 Implementation
In addition to addressing clinicians’ behaviors, hospitals sought to address stakeholders’ attitudes when those attitudes created barriers to SEP-1 implementation. First, hospitals frequently faced a lack of buy-in from clinicians who were resistant to the idea of protocolized care in general and who were specifically skeptical that initiatives designed to increase clinical documentation would drive improvements in patient-centered outcomes. Second, respondents had to confront a hierarchical hospital culture, which manifests not only in clinical care, but also in the quality-improvement infrastructure. Many respondents reported that physicians were more receptive to performance feedback from fellow physicians rather than nonphysician quality administrators.
Respondents described a range of approaches to counteract these attitudes. First, hospitals deployed department- and profession-specific “champions” to provide peer-to-peer performance feedback supported by data demonstrating a link between process improvements and patient outcomes. Second, many respondents noted that the addition of new clinical staff, who were often younger and more receptive to new initiatives, could alter a hospital’s quality culture; in smaller hospitals, just a few individuals could significantly alter the dynamic. Finally, when other efforts failed, some respondents indicated that top-down administrative support could persuade resistant individuals to change their approach. However, this solution worked best with employed physicians and was less effective with independent physician groups without direct financial ties to hospital performance. These efforts to overcome negative attitudes toward SEP-1 implementation required individuals’ time and energy, leading to frustration at times and adding to the resources required to comply with the program.
Planning for the Future of SEP-1
Respondents anticipate that performance of the SEP-1 measure will eventually become publicly reported and incorporated into value-based purchasing calculations. Hospitals are therefore seeking greater interaction with CMS as it makes iterative revisions to the measure because respondents expect that their hospitals’ level of performance, rather than just the act of participating, will affect hospital finances. Respondents expressed a desire for more live, interactive educational sessions with CMS moving forward, rather than limiting the opportunities for clarification to online comment forums or statements elsewhere in the public record. In addition, respondents hope that public reporting and pay-for-performance could be delayed to allow more time to work out the “kinks” in measurement and reporting.
We conducted semistructured telephone interviews with quality officers in U.S. hospitals in order to understand hospitals’ perceptions of and responses to Medicare’s SEP-1 sepsis quality-reporting program. Hospitals are struggling with the program’s complexity and investing considerable resources in order to iteratively revise their responses to the program. However, they generally believe that the program is bringing much-needed attention to sepsis diagnosis and treatment. These findings have several implications for the SEP-1 measure in particular and for hospital-based quality measurement and pay-for-performance policies in general.
First, we demonstrate that SEP-1 consistently requires a substantial investment of resources from hospitals already struggling under the weight of numerous local, state, and national quality-reporting and improvement programs.14,20,21 In aggregate, these programs can stretch hospitals’ resources to their limit. Respondents universally reported that the SEP-1 program is requiring dedicated staff to meet the data abstraction and reporting requirements as well as multicomponent quality-improvement initiatives. In the absence of well-established roadmaps for improving sepsis care, these sepsis quality-improvement efforts require experimentation and iterative revision, which can contribute to fatigue and frustration among quality officers and clinical staff. This process of innovation inherently involves successes, failures, and the risk of harm and opportunity costs that strain hospital resources.
Second, our study indicates how SEP-1 could exacerbate existing inequalities in our health system. Sepsis incidence and mortality are already higher in medically underserved regions.22 Given the resources required to respond to the SEP-1 program, optimal performance may be beyond the reach of smaller hospitals, or even larger hospitals, whose resources are already stretched to their limits. Public reporting and pay-for-performance can disadvantage hospitals caring for underserved populations.23,24 To the extent that responding to sepsis-oriented public policy requires resources that certain hospitals cannot access, these policies could exacerbate existing health disparities.
Third, our findings highlight some specific ways that CMS could revise the SEP-1 program to better meet the needs of hospitals and improve outcomes for patients with sepsis. First, although the program’s current specifications take an “all-or-none” approach to treatment success, a more flexible approach, such as a weighted score or composite measure that combines processes and outcomes,25,26 could allow hospitals to focus their efforts on those components of the bundle with the strongest evidence for improved patient outcomes.27 Second, policy makers need to reconcile the 2 existing clinical definitions for sepsis.1,28 CMS has already stated its plans to retain the preexisting sepsis definition,29 but this does not change the reality that frontline providers and quality officials face different, and at times conflicting, clinical definitions while caring for patients. Finally, current implementation challenges may support a delay in moving the measure toward public reporting and pay-for-performance. Hospitals are already responding to the measure in a substantial way, providing an opportunity for early quantitative evaluations of the program’s impact that could inform evidence-based revisions to the measure.
Our study has several limitations. First, by interviewing only individual quality officers within each hospital, it is possible that our findings were not representative of the perspectives of other individuals within their hospitals or the hospital as a whole; indeed, to the extent that quality officers “buy in” to quality measurement and reporting, their perspectives on SEP-1 may skew more positive than those of other hospital staff. Our respondents represented individuals from a range of positions within the quality infrastructure, whereas “hospital quality leaders” are often chief executive officers, chief medical officers, or vice presidents for quality.30 However, by virtue of our stratified sampling approach, we included respondents from a broad range of hospitals and found similar themes across these respondents, supporting the internal validity of our findings. Second, as is inherent in interview-based research, we cannot verify that respondents’ reports of hospital responses to SEP-1 match the actual changes implemented “on the ground.” We are reassured, however, by the fact that many of the perspectives and quality-improvement changes that respondents described align with the opinions and suggestions of academic quality experts, which are informed by clinical experience.6-8 Third, while respondents believe that hospital responses to SEP-1 are contributing to improvements in treatment and outcomes, we do not yet have robust objective data to support this opinion or to evaluate the association between quality officers’ perspectives and hospital performance. A quantitative evaluation of the clinical impact of SEP-1, as well as the relationship between hospital performance and quality officers’ perspectives on the measure, are important areas for future research.
In a qualitative study of hospital responses to Medicare’s SEP-1 program, we found that hospitals are implementing changes across a variety of domains and in ways that consistently require dedicated resources. Giving hospitals the flexibility to focus on treatment processes with the most direct impact on patient-centered outcomes might enhance the program’s effectiveness. Future work should quantify the program’s impact and develop novel approaches to data abstraction and quality improvement.
Aside from federal funding, the authors have no conflicts of interest to disclose. The authors received funding from the National Institutes of Health (IJB, F32HL132461) (JMK, K24HL133444). This work was submitted as an abstract to the 2017 American Thoracic Society International Conference, May 2017.