Infect Control Hosp Epidemiol 2013;34(7):678–686. doi:10.1086/670999

Variations in Identification of Healthcare-Associated Infections

Sara C. Keller, MD, MPH,1 Darren R. Linkin, MD, MSCE,2 Neil O. Fishman, MD,3 Ebbing Lautenbach, MD, MPH, MSCE2

1. Center for Healthcare Improvement and Patient Safety, Division of Infectious Diseases, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania; 2. Center for Clinical Epidemiology and Biostatistics, Division of Infectious Diseases, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania; 3. Division of Infectious Diseases, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania

Address correspondence to Sara C. Keller, MD, MPH, Center for Healthcare Improvement and Patient Safety, Division of Infectious Diseases, University of Pennsylvania Perelman School of Medicine, 230 North 21st Street, Unit 907, Philadelphia, PA 19103 (kellersa@uphs.upenn.edu).

© 2013 by The Society for Healthcare Epidemiology of America. All rights reserved.

OBJECTIVE

Little is known about whether those performing healthcare-associated infection (HAI) surveillance vary in their interpretations of HAI definitions developed by the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN). Our primary objective was to characterize variations in these interpretations using clinical vignettes. We also describe predictors of variation in responses.

DESIGN

Cross-sectional study.

SETTING

United States.

PARTICIPANTS

A sample of US-based members of the Society for Healthcare Epidemiology of America (SHEA) Research Network.

METHODS

Respondents assessed whether each of 6 clinical vignettes met criteria for an NHSN-defined HAI. Individual- and institutional-level data were also gathered.

RESULTS

Surveys were distributed to 143 SHEA Research Network members from 126 hospitals. In total, 113 responses were obtained, representing at least 61 unique hospitals (30 respondents did not identify a hospital); 79.2% (84 of 106 nonmissing responses) were infection preventionists, and 79.4% (81 of 102 nonmissing responses) worked at academic hospitals. Among the 6 vignettes, the proportion of respondents correctly characterizing the vignettes was as low as 27.3%. Combining all 6 vignettes, the mean percentage of correct responses was 61.1% (95% confidence interval, 57.7%–63.8%). Percentage of correct responses was associated with presence of a clinical background (ie, nursing or physician degrees) but not with hospital size or infection prevention and control department characteristics.

CONCLUSIONS

Substantial heterogeneity exists in the application of HAI definitions in this survey of infection preventionists and hospital epidemiologists. Our data suggest a need to better clarify these definitions, especially when comparing HAI rates across institutions.

Healthcare-associated infections (HAIs) are a leading cause of morbidity and mortality in the United States.1–3 Many states now mandate public reporting of HAIs. The Centers for Medicare and Medicaid Services (CMS) is halting reimbursement for certain HAIs4 and is using improvements in HAI rates to determine hospital payments.4,5 There is evidence that this increased scrutiny has lowered HAI rates, particularly rates of central line–associated bloodstream infections (CLABSIs) in intensive care units (ICUs).6 Given the potential impact on reputation and reimbursement, reliable reporting of HAIs is critical for hospitals, payers, and the public.

In many states, HAIs are reportable through the Centers for Disease Control and Prevention’s (CDC’s) National Healthcare Safety Network (NHSN), which develops the surveillance definitions.7 Chart review by infection preventionists (IPs) and hospital epidemiologists is the standard to which other HAI surveillance methods are compared.8,9 However, individuals may differ in what they report as HAIs. Researchers have also shown only low to moderate interrater agreement between reported cases and outside review.10–13

It is unclear how much of this discrepancy is related to pressure to improve hospital HAI rates, differences in interpretation of NHSN definitions, or other factors. Surveys have suggested that IPs working in pediatric ICUs may be unclear about NHSN CLABSI definitions,14 and some researchers consider the lack of objectivity in the definition of ventilator-associated pneumonia (VAP) so problematic that they have proposed a more streamlined definition.15,16

Little research has been done on interpretations of HAI definitions, and most has focused on CLABSI and VAP. Most research to date has been conducted in just one center, healthcare system, state, or region. Very little is known about how those working in hospital settings across the United States—and therefore subject to changes in CMS reimbursement4,5,17—interpret NHSN definitions. Furthermore, to our knowledge no study investigating those performing HAI surveillance has examined the relationships between differences in professional background or work environment characteristics and the interpretation of NHSN definitions.

We conducted a cross-sectional study of those performing HAI surveillance at member hospitals of the Society for Healthcare Epidemiology of America (SHEA) Research Network.18 We used clinical vignettes to investigate variations in interpretations of NHSN HAI definitions among those performing HAI surveillance at US hospitals. We also investigated whether differences in hospital characteristics, infection prevention and control departments, and professional backgrounds are associated with differing interpretations of NHSN HAI definitions.

METHODS

Study Instrument

A questionnaire consisting of 6 clinical vignettes was created to address specific HAIs. Each question consisted of a clinical case, and respondents were asked whether each fulfilled NHSN criteria for a particular HAI. Questions contained relevant clinical and microbiologic information needed for assessment.

Initially, 10 potential vignettes and their answers were reviewed by 3 hospital epidemiologists at the University of Pennsylvania and then validated, by consensus review, by 2 NHSN surveillance experts involved in developing the criteria.7 NHSN expert responses were considered the gold standard comparator. Vignettes and other study questions were then piloted with 10 IPs. On the basis of their responses, we shortened the survey by removing the 4 clinical vignettes that raised pilot group concerns about clarity, and we edited demographic questions for clarity.

Two questions addressing potential CLABSIs were designed: one met NHSN criteria, and the other did not. One item fulfilled NHSN criteria for tracheobronchitis, and another fulfilled NHSN criteria for a healthcare-associated Clostridium difficile infection. An item addressing a potential VAP was designed to not fulfill NHSN criteria. All pilot group respondents answered this VAP question correctly, so this was considered a negative control (ie, to ensure that participants were not selecting yes without reading items). Similarly, an item designed to fulfill NHSN criteria for an organ space/surgical site infection was answered correctly by all pilot group participants and was maintained as a positive control (ie, to ensure that participants were not selecting no without reading items). The positive and negative control items were designed to be easily and clearly answered correctly by respondents. The electronic survey was created using SurveyMonkey (http://www.surveymonkey.com/; Appendix).

Additional items addressed the respondents’ professional background (job title, time working in infection prevention, degrees obtained, and presence of a Certification in Infection Control [CIC] credential), infection prevention and control department characteristics (size of the department and whether the state mandates HAI reporting), and hospital characteristics (size, funding status, whether it is a teaching facility, acute care status, and state). Respondents with clinical backgrounds were those who had, or were in training for, an MD, DO, RN, MSN, BSN, LPN, PA, or NP degree. The presence of state-mandated HAI reporting was self-reported separately for each HAI. Participants whose states did not currently mandate HAI reporting were asked whether their state planned to do so. With the exception of the hospital name, all items required participants to select a response from a list of options.

Study Population

Eligible participants performed HAI surveillance at adult US hospitals in the SHEA Research Network.19 Surveys were distributed to 143 SHEA Research Network contacts via an e-mailed link on February 1, 2012. Contacts were asked to distribute the surveys to those performing surveillance in their hospitals. Electronic reminders were e-mailed to contacts 2 and 6 weeks after the initial request. Responses were collected through March 30, 2012. Anonymity of healthcare facilities was maintained by removal of the hospital name from the data after linking responses from the same hospital. Eligible respondents used NHSN criteria for reporting HAIs. While the survey was distributed to those working at non-pediatric US hospitals, those who later indicated that they worked at pediatric hospitals or outside the United States were excluded. The study was approved by the Institutional Review Board of the University of Pennsylvania.

Statistical Analyses

Answers to each clinical vignette were compared for their concordance with NHSN expert responses. Descriptive statistics showed percentages of respondents marking correct responses to each vignette, as well as mean respondent scores overall. Free-marginal κ scores were also calculated with and without the control items.
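Free-marginal (Randolph) kappa assumes that raters are not constrained to assign a fixed number of cases to each category, so chance agreement is 1/k for k response categories (0.5 for yes/no items). The sketch below is illustrative Python rather than the authors’ code; it computes per-item pairwise agreement from the yes/no counts reported in Table 2 and yields approximately 0.51 for all 6 items and 0.32 after dropping the 2 control items, consistent with the values reported in the Results.

    # Minimal illustrative sketch (not the study code): Randolph free-marginal
    # multirater kappa from per-item [yes, no] response counts.

    def free_marginal_kappa(counts_per_item, n_categories=2):
        per_item_agreement = []
        for counts in counts_per_item:
            n = sum(counts)  # respondents with a nonmissing answer to this item
            # proportion of concordant respondent pairs for this item
            per_item_agreement.append(sum(c * (c - 1) for c in counts) / (n * (n - 1)))
        p_obs = sum(per_item_agreement) / len(per_item_agreement)
        p_chance = 1.0 / n_categories  # free-marginal chance agreement
        return (p_obs - p_chance) / (1.0 - p_chance)

    # [yes, no] counts for the 6 vignettes, taken from Table 2
    items = [[98, 14], [3, 106], [91, 15], [77, 29], [48, 64], [108, 3]]
    print(round(free_marginal_kappa(items), 2))                    # ~0.51, all 6 items
    print(round(free_marginal_kappa([items[0]] + items[2:5]), 2))  # ~0.32, controls removed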

Negative binomial models were chosen to describe the relationships between the percentage of correct responses (operationalized as the number of correct responses, with the total number of potential responses as an exposure variable) and the presence of a clinical background; attainment of a CIC credential; length of time in infection prevention; number of people in the infection prevention and control department; and hospital size, funding source, and teaching status. Analyses were also adjusted for the size of the infection prevention and control department and the presence of or plans for state-mandated reporting. Size of the infection prevention and control department was dichotomized a priori as less than 5 versus at least 5 members, years in infection prevention as less than 5 versus at least 5 years, and hospital size as less than 500 versus at least 500 beds. For univariate statistics, analysis of variance or χ2 tests were performed as appropriate. Analyses were considered significant at α < .05.
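For readers unfamiliar with this specification, a model of this form could be written as in the hypothetical sketch below. The authors performed their analyses in Stata (see Statistical Analyses); this Python/statsmodels version, including its data frame and variable names, is an illustrative assumption rather than the study code.

    # Hypothetical sketch: negative binomial regression of the number of correct
    # responses, with the number of vignettes answered as the exposure variable.
    # Invented data and column names; not the study code.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "n_correct":  [4, 3, 5, 2, 4, 3, 6, 4],  # correct responses per respondent
        "n_answered": [6, 6, 5, 6, 6, 4, 6, 6],  # nonmissing vignettes (exposure)
        "clinical":   [1, 0, 1, 1, 0, 1, 1, 0],  # clinical background (1 = yes)
        "cic":        [1, 1, 0, 0, 1, 0, 1, 1],  # CIC credential (1 = yes)
        "beds_500":   [1, 0, 1, 0, 1, 1, 0, 1],  # hospital with at least 500 beds
    })

    fit = smf.glm(
        "n_correct ~ clinical + cic + beds_500",
        data=df,
        family=sm.families.NegativeBinomial(),  # dispersion fixed here (Stata's nbreg estimates it)
        exposure=df["n_answered"],
    ).fit()
    print(fit.summary())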

For multivariate analyses, respondents whose states currently mandated or planned to mandate HAI reporting were considered to have mandated reporting. Analyses were performed to explore whether mandated reporting of a particular HAI affected responses to the question involving that HAI. Analyses were also performed to assess clustering of responses within hospitals. Analyses were performed using Stata, version 12.1 (StataCorp).
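As an illustration of the per-HAI comparisons, the 2 × 2 test for question 1 can be reproduced from the counts later reported in Table 4; the code below is a hypothetical Python sketch, not the authors’ Stata code.

    # Hypothetical sketch: chi-square test of correct vs incorrect responses by whether
    # the respondent's state mandates (or plans to mandate) reporting of the relevant HAI.
    # Counts are the question 1 (CLABSI) figures reported in Table 4.
    from scipy.stats import chi2_contingency

    #         correct  incorrect
    table = [[74, 85 - 74],   # states with current or planned mandated reporting (n = 85)
             [24, 27 - 24]]   # states without (n = 27)

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(round(p, 2))  # ~0.80, in line with the P value reported in Table 4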

RESULTS

Surveys were distributed to 143 SHEA Research Network members from 126 hospitals. In total, 113 responses were obtained, representing 61 unique hospitals (30 respondents did not name their hospital). Assuming that the respondents who did not name a hospital represented unique hospitals at the same rate as those who did (61 unique hospitals among 83 identifying respondents, or approximately 73%), an additional 22 hospitals would be represented, giving an estimated hospital response rate of 65.9% (83 of 126 hospitals).

Of the respondents, 79.2% (84 of 106 nonmissing responses) were IPs, with a mean length of employment in infection prevention of 11.9 years (Table 1). Eighty-one participants (75.7% of 107 nonmissing responses) had a clinical background. Most respondents worked at large acute care academic hospitals. Eighty-nine respondents (78.8% of the 113 total respondents) worked in states mandating HAI reporting.

Regarding the 6 vignettes, 97.2% (106 of 109 nonmissing responses) of respondents correctly answered the negative control question, and 97.3% (108 of 111 nonmissing responses) correctly answered the positive control question. Percentage of correct answers for the other items ranged from 87.5% (98 of 112 nonmissing responses for an item involving a CLABSI) to 27.4% (29 of 106 nonmissing responses for an item involving C. difficile). The mean percentage of correct responses across the 6 vignettes was 61.1% (standard deviation, 0.96 correct responses; Table 2). The free-marginal κ score over all 6 items was 0.51. If the positive and negative controls were removed, the free-marginal κ score decreased to 0.32.

In unadjusted analyses, number of correct responses was not associated with being an IP, clinical background, hospital size, infection prevention and control department size, or length of time in infection prevention (Table 3). In multivariate analyses, when adjusted for individual and hospital characteristics, percentage of correct responses was not associated with being an IP, years in infection prevention, presence of a CIC credential, presence of state-mandated reporting, hospital size, or infection prevention and control department size (Table 3). However, having a clinical background was associated with answering more questions correctly (adjusted P = .044). There was no association between state-mandated reporting of a particular HAI and the likelihood of correctly characterizing a related clinical vignette (Table 4).

Of those who gave the name of their hospital, most (47 [74.6%]) were the only respondents from their hospital. Other hospitals had between 2 and 5 respondents. Percentage of correct responses per hospital did not differ significantly from the study population overall (data not shown). Within individual hospitals, percentage of correct responses varied among respondents.

Analyses were performed to determine whether participants who did not name their hospital differed from those who did. Those who named their hospital had a higher percentage of correct responses (P = .022). The mean number of correct responses for those who named their hospital was 3.8 (95% confidence interval [CI], 3.6–4.0) of 6, or 63.3%. The mean number of correct responses among those who did not name their hospital was 3.3 (95% CI, 2.8–3.8) of 6, or 55.0%. There were no significant differences in infection prevention and control department characteristics, hospital characteristics, or professional background between those who did and did not name their hospitals.

Analyses also determined the effect of removing the tracheobronchitis item (which is uncommonly targeted in hospital surveillance) from the results. Without this item, the mean number of correct responses was 2.91 (95% CI, 2.76–3.07) of 5, or 58.2%, which was not significantly different. The free-marginal κ score in the absence of the tracheobronchitis item over the 5 remaining items was not significantly different (0.50). When the tracheobronchitis item and the positive and negative controls were removed, the free-marginal κ score decreased to 0.24.

DISCUSSION

By studying how those performing HAI surveillance in hospitals across the United States apply NHSN HAI definitions to clinical vignettes, we sought to model whether variation in interpretations of definitions may exist. While the majority of the respondents correctly identified a positive control as an HAI and a negative control as a non-HAI, rates of respondents correctly identifying other HAIs ranged from a high of 87.5% to a low of 27.4%. The mean percentage of correct responses on the survey was low (61.1%), and interrater reliability was also low between respondents (κ = 0.32, after excluding the control items). The overall percentage of correct responses and overall interrater reliability may be low enough to raise doubts about agreement between those performing HAI surveillance. Presence of a clinical background was the only characteristic identified as associated with more correct responses to survey questions. Research should focus on improving accuracy and reliability in applying HAI surveillance criteria.

Prior studies looking at the reproducibility of NHSN HAI criteria have focused on just one type of HAI, typically CLABSI or VAP.10–12,14–16 We included items about not just CLABSI and VAP but also C. difficile, tracheobronchitis, and organ space/surgical site infection. Some have focused on the experiences of reporting HAIs in just one state10,11,13,20 or in a few academic medical centers.15,16 We looked at reporting from 28 different states. No studies before ours have included information about respondent workplaces or their training. In addition, 17.9% of our respondents were hospital epidemiologists, a group in which HAI interpretation concordance has not been well studied. While our study was not powered to explore differences between hospital epidemiologist and IP responses to HAI items, we did not see a significant difference in the number of correct responses. Understanding hospital epidemiologist HAI reporting is important and should be an area of future research, as in many hospitals both hospital epidemiologists and IPs discuss reporting of particular HAIs.21

The mean percentage of correct responses on the survey was 61.1%, similar to the 57.1% concordance with NHSN experts seen in a survey investigating CLABSI cases in 18 Australian hospitals.11 Our findings are also similar to a survey of IPs in 10 pediatric ICUs, in which all would have named at least one CLABSI in a manner inconsistent with NHSN definitions.14

In this study, interrater reliability was lower than that seen between CLABSIs reported by 44 Oregon hospitals versus those identified by state health officials (κ = 0.32 vs κ = 0.77)13 but was similar to that seen between CLABSIs reported by Australian hospitals versus those identified by central health officials (κ = 0.31)10 and between CLABSIs categorized by Veterans Affairs IPs (κ = 0.42).12 While precise comparisons are difficult given that these prior studies focused just on CLABSIs, involved actual patient data, and used different scenarios, the data suggest only low to moderate agreement among those performing HAI surveillance. Prior studies have suggested that some may overreport infections,22 while others may feel pressure from their place of employment to underreport HAIs.19 Outside experts may find more HAIs than were reported.20,23 In this study, however, with no consequence to their answers, participants likely felt little external pressure to report or not report certain HAIs. Our study may therefore present a more accurate depiction of respondent understanding of NHSN definitions.

Two items had particularly low percentages of correct responses. First, only 27.4% of participants correctly reported that a clinical vignette did not meet criteria for a healthcare-associated C. difficile infection. Respondents may have struggled with the multiple infection sources in the scenario. Second, only 57.1% of respondents correctly identified that a potential CLABSI, in reality a secondary bloodstream infection from a surgical site infection, should not be reported. Others have commented on the difficulty in identifying NHSN-defined CLABSIs in the presence of other potential infections.14,21 HAI definitions in the presence of more than one potential source of infection may require further clarification.

We did not identify differences in HAI reporting characteristics based on state-mandated reporting requirements, hospital characteristics, or characteristics of infection prevention and control departments, although those with a clinical background were more likely to achieve higher scores. Unfortunately, it is unclear how to improve the interobserver reliability of HAI surveillance. Further studies are first needed to understand what affects the application of HAI criteria.

Problems with standardization of HAI definitions are becoming more apparent. For example, members of the CDC’s Prevention Epicenters Program are considering criteria for ventilator-associated events as a replacement for VAP.15,16,24 Others are trialing electronic surveillance systems for HAIs.25–33 A recent study comparing interrater reliability among IPs at Veterans Affairs hospitals with that of a laboratory-based algorithm found higher concordance between IPs and the algorithm than among the IPs themselves.12 Efforts such as these should continue.

Our study also offers an example of the use of a powerful new research tool for healthcare epidemiology, the SHEA Research Network. Prior studies using this resource have included a survey of influenza epidemic preparedness34 and a retrospective multicenter cohort study of member hospitals on the effect of CMS catheter-associated urinary tract infection reimbursement rule changes on antimicrobial prescribing practices.35 Future studies in healthcare epidemiology could continue to take advantage of this research consortium.

Our study has some potential limitations. The study population may not represent all those performing surveillance. To ensure that we were capturing those who may be affected by changes in CMS reimbursements,4,5,17 we focused on US hospitals that used NHSN criteria to report HAIs. Our results may not be applicable to those who work outside the United States or do not use NHSN criteria. Furthermore, we excluded those working at pediatric-only hospitals, so results may not apply to those performing surveillance in pediatric hospitals. In addition, some of our respondents came from the same hospital. However, the bias this would have created would have been toward the null; that is, the actual variation between those performing surveillance may have been even higher.

We used hospitals enrolled in the SHEA Research Network for our survey so as to reach hospitals in different states. However, those performing HAI surveillance in the SHEA Research Network may not represent all those performing HAI surveillance. Hospitals in the SHEA Research Network are more likely than other hospitals to be larger, to be academic, and to have more members in their infection prevention and control departments.18

Finally, decision making in this survey may not apply to real-world situations. Reading a short paragraph to determine the presence of an HAI is not the same as a thorough chart review. Responding anonymously to a clinical vignette is not the same as making a decision with potential consequences to the hospital, clinician, and patient. Responses to our survey may not precisely match behavior, although we did seek feedback to make vignettes as realistic as possible. In addition, to decrease the length of the survey we included only a small number of vignettes, which may not be completely representative of all HAIs.

We showed low interrater reliability between those performing HAI surveillance in our study. Rates this discordant in practice could dramatically affect not only hospital reputations but also hospital reimbursement in the Value-Based Purchasing Program.4,5,17 Prior to the implementation of CMS HAI reimbursement changes, more reproducible definitions of HAIs—or even new approaches to HAI surveillance, such as electronic surveillance—are urgently needed.

We thank Anthony Harris, MD, MPH, and Daniel Morgan, MD, of the Society for Healthcare Epidemiology of America (SHEA) Research Network for their helpful comments on the survey and Lisa Pineles, MS, also of the SHEA Research Network, for her help in distributing the survey. We thank members of the National Healthcare Safety Network for reviewing the survey. We thank Carol Samel, CIC, and Becky Fitzpatrick, DPN, RN, of the Hospital of the University of Pennsylvania Department of Hospital Epidemiology and Infection Control and Prevention for their assistance in finding scenarios for the survey. We also thank Judy Shea, PhD, for her helpful comments on survey development. We thank the infection preventionists of the University of Pennsylvania Health System for working as our focus group. We thank the members of the SHEA Research Network for participating in the survey.

Financial support. This work was supported in part by an unrestricted Institutional Comparative Effectiveness Research Mentored Career Development Award from the National Institutes of Health (KM1, grant [GIM] 400-4239-4-555854-XXXX-2446-2192 to S.C.K.) and in part by the National Institutes of Health (grant K24 AI080942 to E.L.) and the Prevention Epicenters Program of the Centers for Disease Control and Prevention (grant U54-CK000163 to E.L.).

Potential conflicts of interest. All authors report no conflicts of interest relevant to this article. All authors submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and the conflicts that the editors consider relevant to this article are disclosed here.

Presented in part: ID Week Scientific Meeting; San Diego, California; October 17–21, 2012.

REFERENCES

1. Scott RD II. The Direct Medical Costs of Healthcare-Associated Infections in US Hospitals and the Benefits of Prevention. Coordinating Center for Infectious Diseases, Centers for Disease Control and Prevention; 2009. http://www.cdc.gov/hai/pdfs/hai/scott_costpaper.pdf. Accessed July 27, 2011.
2. Klevens RM, Edwards JR, Richards CL. Estimating health care–associated infections and deaths in US hospitals, 2002. Public Health Rep 2007;122:160–166.
3. Hollenbeak CS, Murphy D, Dunagan WC, Fraser VJ. Nonrandom selection and the attributable cost of surgical-site infections. Infect Control Hosp Epidemiol 2002;23:177–181.
4. Stone PW, Glied SA, McNair PD. CMS changes in reimbursement for HAIs: setting a research agenda. Med Care 2010;48:433–439.
5. HHS action plan to prevent healthcare-associated infections. US Department of Health and Human Services website. http://www.hhs.gov/ash/initiatives/hai/actionplan/hhs_hai_action_plan_final_06222009.pdf. Accessed May 9, 2013.
6. Pronovost PJ, Marsteller JA, Goeschel CA. Preventing bloodstream infections: a measurable national success story in quality improvement. Health Aff 2011;30:628–634.
7. Horan TC, Andrus M, Dudeck MA. CDC/NHSN surveillance definition of health care–associated infection and criteria for specific types of infection in the acute care setting. Am J Infect Control 2008;36:309–332.
8. Agodi A, Auxilia F, Barchitta M. Building a benchmark through active surveillance of intensive care unit–acquired infections: the Italian network SPIN-UTI. J Hosp Infect 2010;74:258–265.
9. Masia MD, Barchitta M, Liperi G, Cantu AP, Alliata E. Validation of intensive care unit–acquired infection surveillance in the Italian SPIN-UTI network. J Hosp Infect 2010;76:139–142.
10. McBryde ES, Brett J, Russo PL, Worth LJ, Bull AL, Richards MJ. Validation of statewide surveillance system data on central line–associated bloodstream infection in intensive care units in Australia. Infect Control Hosp Epidemiol 2009;30:1045–1048.
11. Worth LJ, Brett J, Bull AL, McBryde ES, Russo PL, Richards MJ. Impact of revising the National Nosocomial Infection Surveillance System definition for catheter-related bloodstream infection in ICU: reproducibility of the National Healthcare Safety Network case definition in an Australian cohort of infection control professionals. Am J Infect Control 2009;37:643–648.
12. Mayer J, Greene T, Howell J. Agreement in classifying bloodstream infections among multiple reviewers conducting surveillance. Clin Infect Dis 2012;55:364–370.
13. Oh JY, Cunningham MC, Beldavs ZG. Statewide validation of hospital-reported central line–associated bloodstream infections: Oregon, 2009. Infect Control Hosp Epidemiol 2012;33(5):439–446.
14. Niedner MF. The harder you look, the more you find: catheter-associated bloodstream infection surveillance variability. Am J Infect Control 2010;38(8):585–589.
15. Klompas M, Khan Y, Kleinman K. Multicenter evaluation of a novel surveillance paradigm for complications of mechanical ventilation. PLoS ONE 2011;6:e18062.
16. Klompas M, Kleinman K, Khan Y. Rapid and reproducible surveillance for ventilator-associated pneumonia. Clin Infect Dis 2012;54:370–377.
17. Centers for Medicare and Medicaid Services, Department of Health and Human Services. Medicare program; hospital inpatient value-based purchasing program. Final rule. Fed Regist 2011;76(88):26490–26547.
18. Lautenbach E. Expanding the research agenda for infection prevention: the SHEA Research Consortium. Presented at: Society for Healthcare Epidemiology of America Annual Scientific Meeting; April 1–4, 2011; Dallas.
19. Vermund SH, Fawal H. Emerging infectious diseases and professional integrity: thoughts for the new millennium. Am J Infect Control 1999;27:497–499.
20. Backman LA, Melchreit R, Rodriguez R. Validation of the surveillance and reporting of central line–associated bloodstream infection data to a state health department. Am J Infect Control 2010;38:832–836.
21. Fraser TG, Gordon SM. CLABSI in immunocompromised patients: a valuable patient centered outcome? Clin Infect Dis 2011;52:1446–1450.
22. Ehrenkranz NJ, Richter EI, Phillips PM, Shultz LM. An apparent excess of operative site infections: analyses to evaluate false-positive diagnoses. Infect Control Hosp Epidemiol 1995;16:712–716.
23. Emori TG, Edwards JR, Culver DH. Accuracy of reporting nosocomial infections in intensive-care-unit patients to the National Nosocomial Infections Surveillance System: a pilot study. Infect Control Hosp Epidemiol 1998;19:308–316.
24. Centers for Disease Control and Prevention. Improving Surveillance for Ventilator-Associated Events in Adults. http://www.cdc.gov/nhsn/PDFs/vae/CDC_VAE_CommunicationsSummary-for-compliance_20120313.pdf. Published 2012. Accessed August 3, 2012.
25. Lin MY, Hota B, Khan YM. Quality of traditional surveillance for public reporting of nosocomial bloodstream infection rates. JAMA 2010;304:2035–2041.
26. Woeltje KF, McMullen KM, Butler AM, Goris AJ, Doherty JA. Electronic surveillance for healthcare-associated central line–associated bloodstream infections outside the intensive care unit. Infect Control Hosp Epidemiol 2011;32:1086–1090.
27. Hota B, Lin M, Doherty JA. Formulation of a model for automating infection surveillance: algorithmic detection of central-line associated bloodstream infection. J Am Med Inform Assoc 2010;17:42–48.
28. Rubin MA, Mayer J, Greene T. An agent-based model for evaluating surveillance methods for catheter-related bloodstream infection. AMIA Annu Symp Proc 2008:631–635.
29. Platt R, Yokoe DS, Sands KE. Automated methods for surveillance of surgical site infections. Emerg Infect Dis 2001;7:212–216.
30. Yokoe DS, Noskin GA, Cunningham SM. Enhanced identification of postoperative infections among inpatients. Emerg Infect Dis 2004;10:1924–1930.
31. Woeltje KF, Butler AM, Goris AJ. Automated surveillance for central line–associated bloodstream infection in intensive care units. Infect Control Hosp Epidemiol 2008;29:842–846.
32. Leal J, Gregson DB, Ross T, Flemons WW, Church DL, Laupland KB. Development of a novel electronic surveillance system for monitoring of bloodstream infections. Infect Control Hosp Epidemiol 2010;31:740–747.
33. Coello R, Brannigan E, Lawson W, Wickens H, Holmes A. Prevalence of healthcare device–associated infection using point prevalence surveys of antimicrobial prescribing and existing electronic data. J Hosp Infect 2011;78:264–267.
34. Lautenbach E, Saint S, Henderson DK, Harris AD. Initial response of health care institutions to emergence of H1N1 influenza: experiences, obstacles, and perceived future needs. Clin Infect Dis 2010;50:523–527.
35. Morgan DJ, Meddings J, Saint S. Does nonpayment for hospital-acquired catheter-associated urinary tract infections lead to overtesting and increased antimicrobial prescribing? Clin Infect Dis 2012;55:923–929.

APPENDIX. CLINICAL VIGNETTES DISTRIBUTED WITH THE SURVEY

Scenario 1

A 60-year-old patient with acute myeloid leukemia (AML) has been on the oncology floor for the last 2 weeks while receiving chemotherapy. He now has an absolute neutrophil count of 0 cells/μL. He spikes a fever to 38.5° Celsius (101.3° Fahrenheit). Blood cultures are sent, as well as a urinalysis with urine culture. He has no abdominal pain, diarrhea, or vomiting but was complaining of nausea on the day of the fever. He has a peripherally inserted central catheter (PICC) through which he gets chemotherapy. Urinalysis and urine culture are negative. Two out of 4 blood culture bottles sent (1 from each set of 2 blood culture bottles) grow Enterococcus.

Does he have an NHSN-defined CLABSI?

Yes

No

Scenario 2

A 65-year-old woman is intubated prior to a repair of an ascending aortic aneurysm and receives 30 units of blood products during the procedure. After the procedure, she is taken to the intensive care unit. She remains ventilated. On post-op day 3, she has minimal secretions from her endotracheal tube and she is afebrile, but she also has an elevated white blood cell count of 16. Her chest X-ray reads “bilateral interstitial infiltrates, consistent with pulmonary edema or atelectasis. Pneumonia cannot be ruled out.” Blood cultures are negative, urinalysis is negative, she has no diarrhea, and the surgical site is healing well. Forty-eight hours later, a repeat chest X-ray reads “resolved interstitial infiltrates.”

Does she have an NHSN-defined VAP?

Yes

No

Scenario 3

A 55-year-old patient has been ventilated through a tracheostomy in the surgical intensive care unit for the past 2 weeks. He develops a fever of 38.5° Celsius (101.3° Fahrenheit). A workup is initiated, including blood cultures, urinalysis, and chest X-ray. All are negative. He has new moderate secretions and slight rales. A tracheal aspirate is sent, and a Gram stain shows many white blood cells (WBCs) and few gram-positive cocci (GPCs). The next day, his tracheal aspirate grows many Staphylococcus aureus, moderate oral flora, and few yeast.

Does he have an NHSN-defined tracheobronchitis?

Yes

No

Scenario 4

A 50-year-old patient has been living in a long-term acute care facility and ventilated through a tracheostomy for 4 months. He has end-stage renal disease and is dialyzed through a tunneled catheter. He is oliguric but urinates through a condom catheter. At the long-term acute care facility, he had been complaining of abdominal pain. He presents to your hospital for a planned percutaneous gastrostomy tube placement. During the procedure, he had 1 dose of cefazolin but had no antibiotics afterward. Three days after the procedure, he developed a fever of 38.3° Celsius (100.9° Fahrenheit) and a white blood cell count of 15,000 cells/μL. He had new diarrhea over the preceding 12 hours, which started after initiating tube feeds through his new PEG, and had abdominal pain, which he attributed to the new PEG. Prior to this, he had not been on any laxatives and had had normal-consistency bowel movements. A C. difficile test was sent and was positive. A urinalysis collected via straight catheterization showed 2+ leukocyte esterase, too-numerous-to-count white blood cells, and moderate bacteria. His urine culture grew 60,000 colony-forming units per milliliter of Escherichia coli.

Does he have an NHSN-defined C. difficile infection associated with the SICU?

Yes

No

Scenario 5

A 63-year-old woman with non–small cell lung cancer presented with a T5 compression fracture and was taken to the OR for a T5 corpectomy and T2–T8 fusion with pedicle screws and rods. One week later she became confused, tachycardic, and febrile. Her wound showed serosanguinous drainage, but there was no surrounding erythema or pain. She had a Foley catheter in place for urinary retention and a central line in place for chemotherapy. A urinalysis showed many bacteria, moderate white blood cells, and 2+ leukocyte esterase. A urine culture grew 50,000 colony-forming units of Pseudomonas. A nonsterile superficial swab of the wound grew MRSA. Blood cultures also grew MRSA in 3 of 4 blood cultures collected peripherally in 2 sets 2 hours apart. An MRI of her thoracic spine showed a superficial fluid collection at the incision.

Does she have an NHSN-defined CLABSI?

Yes

No

Scenario 6

A 50-year-old woman with end-stage liver disease presented for an orthotopic liver transplant. The operation was uneventful, and she was started on immunosuppressive medications. She was discharged home 2 weeks after the operation. Five days after discharge, she presented to the emergency department complaining of severe abdominal pain. She had a fever of 39° Celsius (102.0° Fahrenheit) and was hypotensive with a blood pressure of 85/54. Imaging of her abdomen showed a large fluid collection at the surgical site. She was taken to the operating room, where purulent fluid was identified around the anastomosis. The collection was drained, and she was admitted to the surgical intensive care unit. On Gram stain, a sample of the purulent fluid showed many white blood cells and many gram-negative rods. The culture grew Escherichia coli. The surgical team was concerned for anastomotic breakdown with resulting necrosis.

Does she have an NHSN-defined organ space/surgical site infection?

Yes

No

TABLE 1. Respondent Professional Background, Hospital Epidemiology and Infection Control and Prevention Department, and Hospital Characteristics, of 113 Total Respondents

Characteristic: Value
Title or position (missing = 7)
  Infection preventionist: 84 (79.3)
  Hospital epidemiologist: 19 (17.9)
  Other: 3 (2.8)
CIC or in training (missing = 19): 62 (76.6)
Degree or training (missing = 7)a
  Nursing (RN, BSN, MSN): 62 (58.5)
  Public health, epidemiology, health administration, or policy: 36 (34.0)
  Physician (MD, DO): 18 (17.0)
  Medical technology: 14 (13.2)
  Microbiology: 7 (6.6)
Any clinical background or training (missing = 6): 81 (75.7)
Years in infection prevention, mean ± SD: 11.9 ± 8.4
Size of hospital (missing = 9)
  Less than 200 beds: 16 (15.4)
  200–499 beds: 25 (24.1)
  At least 500 beds: 63 (60.6)
Type of hospital (missing = 8)
  Acute care: 101 (96.2)
  Not for profit, nongovernmental: 81 (77.1)
  State or local governmental: 12 (11.4)
  For profit: 6 (5.7)
  Federal governmental: 6 (5.7)
Teaching hospital (physician, dental, or podiatry residents; missing = 11): 81 (79.4)
Size of infection control and prevention department, mean ± SD (missing = 8): 5.9 ± 3.8
State reporting requirements (missing = 5)b
  Any current state-mandated reporting: 89 (82.4)
  SSI: 73 (64.6)
  BSI: 82 (72.6)
  Pneumonia: 18 (15.9)
  Gastrointestinal infection: 9 (8.0)
Planned state-reporting requirements (missing = 5)c: 6 (5.3)
No current reporting or plan to report healthcare-associated infections: 7 (6.2)

NOTE. Data are no. (% of nonmissing responses), unless otherwise indicated. BSI, bloodstream infection; CIC, Certification in Infection Control credential; SD, standard deviation; SSI, surgical site infection.

a. For these items, participants were asked to answer yes or no for each individual degree or training option. Missing responses were considered to be answers of no. Seven participants did not respond to any items and so were considered to have had missing responses for these items. One respondent stated that he or she was a nurse practitioner or physician assistant.

b. Only 3–6 (2.7%–5.3%) respondents reported other infections, such as bone and joint infection; cardiovascular system infection; eye, ears, nose, and throat infection; lower respiratory tract infection; skin and soft-tissue infection; and systemic infection.

c. Included urinary tract infection, SSI, BSI, and pneumonia.

TABLE 2. Percentages of Correct Responses for Each Clinical Vignette, of 113 Total Respondents

Question, healthcare-associated infection: no. (%) answering yes; no. (%) answering no
1. CLABSI (missing = 1): yes, 98 (87.5)*; no, 14 (12.5)
2. VAP (missing = 4): yes, 3 (2.8); no, 106 (97.2)*,a
3. Tracheobronchitis (missing = 7): yes, 91 (85.8)*; no, 15 (14.2)
4. Clostridium difficile (missing = 7): yes, 77 (72.6); no, 29 (27.4)*
5. CLABSI (missing = 1): yes, 48 (42.9); no, 64 (57.1)*
6. Organ space/surgical site infection (missing = 2): yes, 108 (97.3)*,b; no, 3 (2.7)
% of correct responses, mean ± SD: 61.1 ± 0.96

NOTE. Data are no. (%), unless otherwise indicated. Correct responses were defined as such by a team of National Healthcare Safety Network experts and are marked with an asterisk (*). CLABSI, central line–associated bloodstream infection; SD, standard deviation; VAP, ventilator-associated pneumonia.

a. Created as a negative control (ie, intended to be answered with a response of “no, this is not a reportable ventilator-associated pneumonia”).

b. Created as a positive control (ie, intended to be answered with a response of “yes, this is a reportable organ space/surgical site infection”).

TABLE 3. Associations of Professional Background, Infection Prevention and Control Department, and Hospital Characteristics with the Mean Number of Correct Responses

Variable (category): no. of correct responses, mean ± SD; unadjusted and adjusted P values.a
Job title: unadjusted P = .18; adjusted P = .11
  Infection preventionist: 3.62 ± 0.89
  Hospital epidemiologist: 3.95 ± 1.22
Clinical background: unadjusted P = .063; adjusted P = .044*
  Yes (n = 81): 3.78 ± 0.10
  No (n = 26): 3.38 ± 0.18
Infection control and prevention department size: unadjusted P = .29; adjusted P = .16
  Less than 5 persons (n = 62): 3.60 ± 0.14
  At least 5 persons (n = 43): 3.79 ± 0.11
Hospital size: unadjusted P = .53; adjusted P = .74
  Less than 500 beds (n = 41): 3.63 ± 1.56
  At least 500 beds (n = 63): 3.75 ± 0.10
Years in infection control and prevention: unadjusted P = .61; adjusted P = .62
  Less than 5 years (n = 23): 3.65 ± 0.18
  At least 5 years (n = 82): 3.76 ± 0.10
CIC: unadjusted P = .77; adjusted P = .79
  Yes (n = 62): 3.77 ± 0.11
  No (n = 32): 3.72 ± 0.15
Presence of or plans for state-mandated healthcare-associated infection reporting: unadjusted P = .071; adjusted P = .55
  Yes (n = 95): 3.71 ± 0.095
  No (n = 18): 3.33 ± 0.28

NOTE. SD, standard deviation.

a. Negative binomial models were used for analyses. Multivariate analyses adjusted for the size of the hospital, presence of a clinical background, presence of a Certification in Infection Control (CIC) credential, years of experience in infection control and prevention, size of the infection prevention and control department, presence of or plans for state-mandated reporting, and whether the respondent had recorded the name of the hospital. Mean numbers of correct responses in each category (of a possible 6) are presented. P values are for the difference between the 2 groups; statistically significant values are marked with an asterisk (*). Analyses were considered statistically significant at α = .05.

TABLE 4. Associations between Choosing a Particular Response to a Clinical Vignette and the Presence of or Plans for State-Mandated Reporting of the Relevant Healthcare-Associated Infection (HAI)

Item no., HAI: correct responses, no. (%), from states with presence of or plans for mandated reporting for the relevant HAI; from states without plans for mandated reporting; P
Question 1, CLABSI (n = 85a): with mandate, 74 (87.1); without, 24 (88.9); P = .80
Question 2, VAP (n = 19a): with mandate, 18 (94.7); without, 88 (97.8); P = .46
Question 3, tracheobronchitis (n = 3a): with mandate, 3 (100); without, 88 (85.4); P = .48
Question 4, Clostridium difficile (n = 8a): with mandate, 3 (37.5); without, 26 (26.5); P = .50
Question 5, CLABSI (n = 85a): with mandate, 49 (57.6); without, 15 (55.6); P = .85
Question 6, organ space/surgical site infection (n = 74a): with mandate, 73 (98.6); without, 35 (94.6); P = .21

NOTE. Data are no. (%). Percentage of correct responses is also presented for responses where reporting of the particular HAI is or is not mandated (ie, CLABSI for question 1). Analyses were performed using χ2 tests. No relationships were statistically significant at α =.05. CLABSI, central line–associated bloodstream infection; VAP, ventilator-associated pneumonia.

a. Number of respondents in states with plans for or current mandated reporting of that particular HAI.