Original Article
Changes in vascular accesses and in incidence rates of dialysis-related bloodstream infections in Québec, Canada, 2011–2017
- Élise Fortin, Manale Ouakki, Claude Tremblay, Jasmin Villeneuve, Simon Desmeules, Natasha Parisien, Danielle Moisan, Charles Frenette, for SPIN-BAC-HD
- Published online by Cambridge University Press: 08 April 2019, pp. 627-631
Objective:
Surveillance of dialysis-related bloodstream infections (DRBSIs) has been mandatory in Québec since April 2011. The aim of this study was to describe the epidemiology of DRBSIs in Québec.
Methods: Cohort study of prevalent patients undergoing chronic dialysis in the 36 facilities that participated without interruption in the provincial surveillance between April 2011 and March 2017. Two indicators were analyzed: the proportion of patient months dialyzed using a fistula (a patient month is a 28-day cycle during which an individual patient received dialysis) and the incidence rate of DRBSI. Binomial and Poisson regression with generalized estimating equations were used to describe the evolution of the indicators over time and to quantify the association between facilities’ proportion of fistulas and their incidence rate.
Results: Globally, 42.6% of all patient months were dialyzed using a fistula, but there was a statistically significant decrease over time (46.2% in 2011–2012 to 39.3% in 2016–2017). Despite this decline in the use of fistulas, rates of DRBSI also decreased, from 0.38 DRBSIs per 100 patient months in 2011–2012 to 0.23 in 2016–2017. No association was found between facility use of fistulas and the rate of DRBSI. At the individual level, however, the DRBSI rate was 4.12 times higher for patients using a catheter.
Conclusions: In Québec, the rate of DRBSIs decreased over a 6-year period despite an increasing proportion of patients dialyzed by catheter.
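Both surveillance indicators reduce to simple ratios over patient months, with the 28-day cycle serving as the common denominator. A minimal sketch of the arithmetic, assuming a per-facility tally of patient months by access type; all counts and field names below are illustrative, not data from the study:

```python
# Hypothetical one-facility tally; a patient month is one 28-day dialysis cycle.
facility = {
    "fistula_patient_months": 420,
    "catheter_patient_months": 630,
    "drbsi_events": 3,
}

total_pm = facility["fistula_patient_months"] + facility["catheter_patient_months"]
fistula_share = facility["fistula_patient_months"] / total_pm * 100  # indicator 1
drbsi_rate = facility["drbsi_events"] / total_pm * 100               # indicator 2, per 100 patient months

print(f"fistula use: {fistula_share:.1f}%; DRBSI rate: {drbsi_rate:.2f} per 100 patient months")
```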
Healthcare provider diagnostic testing practices for identification of Clostridioides (Clostridium) difficile in children: an Emerging Infections Network survey
- Larry K. Kociolek, Preeta K. Kutty, Philip M. Polgreen, Susan E. Beekmann
- Published online by Cambridge University Press: 15 February 2019, pp. 276-280
Objective:
To characterize healthcare provider diagnostic testing practices for identifying Clostridioides (Clostridium) difficile infection (CDI) and asymptomatic carriage in children.
Design: Electronic survey.
Methods: An 11-question survey was sent by e-mail or facsimile to all pediatric infectious diseases (PID) members of the Infectious Diseases Society of America’s Emerging Infections Network (EIN).
Results: Among 345 eligible respondents who had ever responded to an EIN survey, 196 (57%) responded; 162 of these (83%) were aware of their institutional policies for CDI testing and management. Also, 159 (98%) respondents knew their institution’s C. difficile testing method: 99 (62%) utilize nucleic acid amplification testing (NAAT) without toxin testing and 60 (38%) utilize toxin testing, either as a single test or as part of a multistep algorithm. Of 153 respondents, 10 (7%) reported that formed stools were tested for C. difficile at their institution, and 76 of 151 (50%) reported that their institution does not restrict C. difficile testing in infants and young children. The frequency of symptom- and age-based testing restrictions did not vary between institutions utilizing NAAT alone and those utilizing toxin testing for C. difficile diagnosis. Of 143 respondents, 26 (16%) permit testing of neonatal intensive care unit patients, and 12 of these 26 (46%) treat CDI with antibiotics in this patient population.
Conclusions: These data suggest that there are opportunities to improve diagnostic stewardship practices for CDI in children, including among hospitals using NAATs alone for diagnosis.
Clinical impact of an antimicrobial stewardship program on high-risk pediatric patients
- Jennifer L. Goldman, Jason G. Newland, Michael Price, Diana Yu, Brian R. Lee
- Published online by Cambridge University Press: 17 July 2019, pp. 968-973
Objective:
To evaluate the clinical impact of an antimicrobial stewardship program (ASP) on high-risk pediatric patients.
Design: Retrospective cohort study.
Setting: Free-standing pediatric hospital.
Patients: This study included patients who received an ASP review between March 3, 2008, and March 2, 2017, and were considered high risk, including patients receiving care from the neonatal intensive care (NICU), hematology/oncology (H/O), or pediatric intensive care (PICU) medical teams.
Methods: The ASP recommendations included stopping antibiotics; modifying antibiotic type, dose, or duration; or obtaining an infectious diseases consultation. The outcomes evaluated in all high-risk patients with ASP recommendations were (1) hospital-acquired Clostridium difficile infection, (2) mortality, and (3) 30-day readmission. Subanalyses were conducted to evaluate hospital length of stay (LOS) and tracheitis treatment failure. Multivariable generalized linear models were used to examine the relationship between ASP recommendations and each outcome after adjusting for clinical service and indication for treatment.
Results: The ASP made 2,088 recommendations, 50% of which were to stop antibiotics. Recommendation agreement occurred in 70% of cases. Agreement with an ASP recommendation was not associated with higher odds of mortality or hospital readmission. Patients with a single ASP review and an agreed-upon recommendation had a shorter median LOS (10.2 days vs 13.2 days; P < .05). ASP recommendations were not associated with high rates of tracheitis treatment failure.
Conclusions: ASP recommendations do not result in worse clinical outcomes among high-risk pediatric patients. Most ASP recommendations are to stop or narrow antimicrobial therapy. Further work is needed to enhance stewardship efforts in high-risk pediatric patients.
Total duration of antimicrobial therapy resulting from inpatient hospitalization
- April P. Dyer, Elizabeth Dodds Ashley, Deverick J. Anderson, Christina Sarubbi, Rebekah Wrenn, Lauri A. Hicks, Arjun Srinivasan, Rebekah W. Moehring
- Published online by Cambridge University Press: 28 May 2019, pp. 847-854
Objective:
To assess the feasibility of electronic data capture of postdischarge durations and evaluate total durations of antimicrobial exposure related to inpatient hospital stays.
Design: Multicenter, retrospective cohort study.
Setting: Two community hospitals and 1 academic medical center.
Patients: Hospitalized patients who received ≥1 dose of a systemic antimicrobial agent.
Methods: We collected and reviewed electronic data on inpatient and discharge antimicrobial prescribing from April to September 2016 in 3 pilot hospitals. Inpatient antimicrobial use was obtained from electronic medication administration records. Postdischarge antimicrobial use was calculated from electronic discharge prescriptions. We completed a manual validation to evaluate the ability of electronic prescriptions to capture intended postdischarge antibiotics. Inpatient, postdischarge, and total lengths of therapy (LOT) per admission were calculated to assess durations of antimicrobial therapy attributed to hospitalization.
Results: A total of 45,693 inpatient admissions were evaluated. Antimicrobials were given during 23,447 admissions (51%), and electronic discharge prescriptions were captured for 7,442 admissions (16%). Manual validation revealed incomplete data capture in scenarios in which prescribers avoided the electronic system. The median postdischarge LOT among admissions with discharge antimicrobials was 8 days (range, 1–360), with peaks at 5, 7, 10, and 14 days. Postdischarge days accounted for 38% of antimicrobial exposure days.
Conclusion: Discharge antimicrobial therapy accounted for a large portion of antimicrobial exposure related to inpatient hospital stays. Discharge prescription data can feasibly be captured through electronic prescribing records and may aid in designing stewardship interventions at transitions of care.
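The total-LOT metric described here is the sum of inpatient days of therapy (from the eMAR) and the days of supply on the discharge prescription, per admission. A small pandas sketch under that assumption; the column names are hypothetical, not the study's schema:

```python
import pandas as pd

# One row per admission: inpatient antimicrobial days plus discharge days' supply.
admissions = pd.DataFrame({
    "admission_id": [1, 2, 3],
    "inpatient_lot_days": [3, 5, 0],
    "discharge_rx_days_supply": [7, 0, 10],
})

admissions["total_lot_days"] = (
    admissions["inpatient_lot_days"] + admissions["discharge_rx_days_supply"]
)

# Share of total exposure occurring after discharge (38% in the study).
postdischarge_share = (
    admissions["discharge_rx_days_supply"].sum() / admissions["total_lot_days"].sum()
)
print(f"postdischarge share of exposure: {postdischarge_share:.0%}")
```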
Pulmonary artery catheter epidemiology of risk in pre–heart-transplant recipients
- Zachary A Yetmar, Brian Lahr, John O’Horo, Atta Behfar, Priya Sampathkumar, Elena Beam
- Published online by Cambridge University Press: 30 April 2019, pp. 632-638
Objective:
Central-line–associated bloodstream infections (CLABSIs) are a known complication of central venous access. Pulmonary artery catheters (PAC) are frequently used in pre–heart-transplant patients, but the rate of CLABSI in this population is unknown. We sought to estimate the rate of CLABSI and identify factors associated with development of infection in patients actively listed for heart transplantation with a PAC.
Design: Retrospective cohort study.
Setting: This study was conducted in 3 intensive care units at an academic tertiary-care center in Minnesota.
Patients: 61 pre–heart-transplant patients in an intensive care unit with a PAC in place from January 2013 to December 2016, comprising 219 PACs.
Methods: At-risk patients, pertinent risk factors, and demographic data were obtained using Mayo Clinic’s Unified Data Platform. CLABSIs were identified through internal infection prevention and control data. Characteristics of PAC use and the infection rate were collected and analyzed using Kaplan-Meier estimates and time-dependent Cox models.
Results: Among pre–heart-transplant patients with a PAC, there were 14 CLABSIs, for an infection rate of 5.46 per 1,000 PAC days (95% confidence interval [CI], 2.98–9.15). The most common causative organism was coagulase-negative Staphylococcus (79%). In unadjusted analyses, CLABSI was associated with shorter time to transplant (hazard ratio [HR], 2.49; P = .027) but not with mortality (HR, 1.79; P = .355).
Conclusions: The rate of CLABSI with PAC is high. Prolonged PAC use in the pre–heart-transplant population should be revisited.
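The reported interval is consistent with an exact (chi-square based) Poisson confidence interval on the event count. Both the CI method and the device-day denominator below are assumptions: the abstract gives only the 14 events and the rate, so the denominator is back-calculated, and small rounding differences from the published CI are expected.

```python
from scipy.stats import chi2

events = 14
days = events / 5.46 * 1000          # ~2,564 PAC days implied by the reported rate

rate = events / days * 1000          # 5.46 per 1,000 PAC days
# Exact Poisson limits via the chi-square relationship.
lo = chi2.ppf(0.025, 2 * events) / 2 / days * 1000
hi = chi2.ppf(0.975, 2 * (events + 1)) / 2 / days * 1000

print(f"{rate:.2f} per 1,000 PAC days (95% CI, {lo:.2f}-{hi:.2f})")
# -> 5.46 per 1,000 PAC days (95% CI, 2.99-9.16), close to the published 2.98-9.15
```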
A systematic review of central-line–associated bloodstream infection (CLABSI) diagnostic reliability and error
- Emily N. Larsen, Nicole Gavin, Nicole Marsh, Claire M. Rickard, Naomi Runnegar, Joan Webster
- Published online by Cambridge University Press: 31 July 2019, pp. 1100-1106
Objective:
To establish the reliability of the application of National Health and Safety Network (NHSN) central-line–associated bloodstream infection (CLABSI) criteria within established reporting systems internationally.
Design: Diagnostic test accuracy systematic review.
Methods: We conducted a search of Medline, SCOPUS, the Cochrane Library, CINAHL (EbscoHost), and PubMed (NCBI). Cohort studies were eligible for inclusion if they compared publicly reported CLABSI rates against rates determined by independent, expertly trained reviewers applying NHSN/Centers for Disease Control and Prevention (or equivalent) criteria. Two independent reviewers screened studies, extracted data, and assessed risk of bias using the QUADAS-2 tool. Sensitivity, specificity, and negative and positive predictive values were analyzed.
Results: A systematic search identified 1,259 publications; 9 studies were eligible for inclusion (n = 7,160 central lines). Publicly reported CLABSI rates were more likely to be underestimated (7 studies) than overestimated (2 studies). Specificity ranged from 0.70 (95% confidence interval [CI], 0.58–0.81) to 0.99 (95% CI, 0.99–1.00), and sensitivity ranged from 0.42 (95% CI, 0.15–0.72) to 0.88 (95% CI, 0.77–0.95). Four studies, which included a consecutive series of patients (whole cohort), reported CLABSI incidence between 9.8% and 20.9%, and absolute CLABSI rates were underestimated by 3.3%–4.4%. The risk of bias was low to moderate in most included studies.
Conclusions: Our findings suggest consistent underestimation of true CLABSI incidence within publicly reported rates, weakening the validity and reliability of surveillance measures. Auditing, education, and adequate resource allocation are necessary to ensure that surveillance data are accurate and suitable for benchmarking and quality improvement over time.
Registration: Prospectively registered with the International Prospective Register of Systematic Reviews (PROSPERO ID CRD42015021989; June 7, 2015). https://www.crd.york.ac.uk/PROSPERO/display_record.php?ID=CRD42015021989
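For reference, the four accuracy measures pooled in this review come from the standard 2×2 comparison of publicly reported CLABSI status against the trained reviewers' reference judgment. A small sketch with invented counts, since the review's per-study tables are not reproduced in the abstract:

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """2x2 accuracy measures for reported CLABSI vs the reference review."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases that were publicly reported
        "specificity": tn / (tn + fp),  # non-cases correctly not reported
        "ppv": tp / (tp + fp),          # reported cases that were true cases
        "npv": tn / (tn + fn),          # unreported lines that were truly uninfected
    }

# Invented counts illustrating the underestimation pattern (low sensitivity).
print(diagnostic_accuracy(tp=42, fp=8, fn=23, tn=650))
```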
Role of rapid diagnostics for viral respiratory infections in antibiotic prescribing decision in the emergency department
- Jing Li, S. Lena Kang-Birken, Samantha K. Mathews, Catelynn E. Kenner, Lynn N. Fitzgibbons
- Published online by Cambridge University Press: 28 June 2019, pp. 974-978
Objective:
To describe the frequency of antibiotic prescriptions in patients with known viral respiratory infections (VRIs) diagnosed by polymerase chain reaction (PCR) in 3 emergency departments (EDs) and to identify patient characteristics that influence the prescribing of antibiotics by ED physicians despite PCR confirmation of viral cause.
Design: Retrospective, observational analysis of patients with PCR-diagnosed VRI discharged from 3 acute-care hospital EDs within 1 health system.
Results: In total, 323 patients were discharged from the ED with a VRI diagnosis, of whom 68 (21.1%) were prescribed antibiotics. These patients were older (median, 59.5 vs 43 years; P = .04), had experienced symptoms longer (median, 4 vs 2 days; P = .002), were more likely to have received antibiotics in the preceding 7 days (27.9% vs 9.8%; P < .001), and had a higher proportion of abnormal chest X-rays (64.5% vs 28.4%; P < .001). Patients were more likely to receive antibiotics with a diagnosis of pneumonia (39.7% vs 1.6%; P < .001) or otitis media (7.4% vs 0.4%; P = .002) and less likely with a diagnosis of upper respiratory infection (2.9% vs 13.7%; P = .02) or influenza (20.6% vs 44.3%; P < .001).
Conclusions: Despite a diagnosis of VRI, one-fifth of ED patients were prescribed antibiotics. Patient characteristics including age, duration of symptoms, abnormal chest X-rays, and specific diagnoses may increase provider concern for concurrent bacterial infection. Opportunities exist for antimicrobial stewardship strategies to incorporate rapid diagnostics in promoting judicious antibiotic use in the ED.
Hospital-level high-risk antibiotic use in relation to hospital-associated Clostridioides difficile infections: Retrospective analysis of 2016–2017 data from US hospitals
- Ying P. Tabak, Arjun Srinivasan, Kalvin C. Yu, Stephen G. Kurtz, Vikas Gupta, Steven Gelone, Patrick J. Scoble, L. Clifford McDonald
- Published online by Cambridge University Press: 16 September 2019, pp. 1229-1235
Objective:
Antibiotics are widely used by all specialties in the hospital setting. We evaluated previously defined high-risk antibiotic use in relation to Clostridioides difficile infections (CDIs).
Methods: We analyzed 2016–2017 data from 171 hospitals. High-risk antibiotics included second-, third-, and fourth-generation cephalosporins, fluoroquinolones, carbapenems, and lincosamides. A CDI case was a positive stool C. difficile toxin or molecular assay result from a patient without a positive result in the previous 8 weeks. Hospital-associated (HA) CDI cases included specimens collected >3 calendar days after admission or ≤3 calendar days from a patient with a prior same-hospital discharge within 28 days. We used a multivariable Poisson regression model to estimate the relative risk (RR) of high-risk antibiotic use on HA CDI, controlling for confounders.
Results: The median rate of high-risk antibiotic use was 241.2 days of therapy (interquartile range [IQR], 192.6–295.2) per 1,000 days present; the overall HA CDI rate was 33 (IQR, 24–43) per 10,000 admissions. The overall correlation between high-risk antibiotic use and HA CDI was 0.22 (P = .003), and a higher correlation was observed in teaching hospitals (0.38; P = .002). For every 100-day (per 1,000 days present) increase in high-risk antibiotic therapy, there was a 12% increase in HA CDI (RR, 1.12; 95% CI, 1.04–1.21; P = .002) after adjusting for confounders.
Conclusions: High-risk antibiotic use is an independent predictor of HA CDI. This assessment of poststewardship implementation in the United States highlights the importance of tracking trends in antimicrobial use over time as they relate to CDI.
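Because Poisson regression is log-linear, the reported per-100-day RR rescales multiplicatively to any other increment. A short illustration recovering the per-day coefficient from the published estimate; the rescaled values are arithmetic consequences of that estimate, not additional results from the paper:

```python
import math

rr_per_100 = 1.12                      # published RR per 100 days of therapy
beta = math.log(rr_per_100) / 100      # implied log-RR per single day of therapy

print(math.exp(beta * 50))             # RR for a 50-day increase  ~ 1.058
print(math.exp(beta * 200))            # RR for a 200-day increase ~ 1.254
```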
Effect of changing urine testing orderables and clinician order sets on inpatient urine culture testing: Analysis from a large academic medical center
- Satish Munigala, Rebecca Rojek, Helen Wood, Melanie L. Yarbrough, Ronald R. Jackups, Jr, Carey-Ann D. Burnham, David K. Warren
- Published online by Cambridge University Press: 21 February 2019, pp. 281-286
Objective:
To evaluate the impact of changes to urine testing orderables in computerized physician order entry (CPOE) system on urine culturing practices.
Design: Retrospective before-and-after study.
Setting: A 1,250-bed academic tertiary-care referral center.
Patients: Hospitalized adults who had ≥1 urine culture performed during their stay.
Intervention: The intervention (implemented in April 2016) consisted of notifications to providers, changes to order sets, and inclusion of the new urine culture reflex tests in commonly used order sets. We compared urine culture rates before the intervention (January 2015 to April 2016) and after the intervention (May 2016 to August 2017), adjusting for temporal trends.
Results: During the study period, 18,954 inpatients (median age, 62 years; 68.8% white and 52.3% female) had 24,569 urine cultures ordered. Overall, 6,662 urine cultures (27%) were positive. The urine culturing rate decreased significantly in the postintervention period for any specimen type (38.1 per 1,000 patient days preintervention vs 20.9 per 1,000 patient days postintervention; P < .001), for clean catch (30.0 vs 18.7; P < .001), and for catheterized urine (7.8 vs 1.9; P < .001). In an interrupted time-series model, urine culture rates decreased for all specimen types (P < .05).
Conclusions: Our intervention of changing order sets and including the new urine culture reflex tests resulted in a 45% reduction in urine cultures ordered. CPOE system format plays a vital role in reducing the burden of unnecessary urine cultures and should be implemented in combination with other efforts.
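A segmented Poisson regression is one common way to fit the interrupted time-series model described here. The sketch below uses simulated monthly data and hypothetical variable names, since the paper's modeling specifics are not given in the abstract; with a real series, exp(coefficient on `post`) estimates the immediate rate change at the intervention.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "month": np.arange(32),                              # Jan 2015 .. Aug 2017
    "cultures": rng.poisson(900, 32),                    # simulated monthly counts
    "patient_days": rng.integers(24_000, 26_000, 32),    # simulated exposure
})
df["post"] = (df["month"] >= 16).astype(int)             # level change, May 2016
df["months_since"] = np.clip(df["month"] - 16, 0, None)  # slope change term

# Segmented Poisson regression: secular trend + level change + slope change,
# with log(patient days) as the exposure offset.
fit = smf.glm(
    "cultures ~ month + post + months_since",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["patient_days"]),
).fit()
print(fit.params)  # null effects expected here, since the data are pure noise
```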
Real-world effectiveness of infection prevention interventions for reducing procedure-related cardiac device infections: Insights from the veterans affairs clinical assessment reporting and tracking program
- Archana Asundi, Maggie Stanislawski, Payal Mehta, Anna E. Baron, Hillary J. Mull, P. Michael Ho, Peter J. Zimetbaum, Kalpana Gupta, Westyn Branch-Elliman
- Published online by Cambridge University Press: 04 June 2019, pp. 855-862
Objective:
To measure the association between receipt of specific infection prevention interventions and procedure-related cardiac implantable electronic device (CIED) infections.
Design: Retrospective cohort with manually reviewed infection status.
Setting: National, multicenter Veterans Health Administration (VA) cohort.
Participants: A sample of procedures entered into the VA Clinical Assessment Reporting and Tracking-Electrophysiology (CART-EP) database from fiscal years 2008 through 2015.
Methods: The sampled procedures underwent manual review for the occurrence of CIED infection and other clinical and procedural variables. The primary outcome was 6-month incidence of CIED infection. Measures of association were calculated using multivariable generalized estimating equation logistic regression.
Results: We identified 101 procedure-related CIED infections among 2,098 procedures (4.8% of the reviewed sample). Factors associated with increased odds of infection included (1) wound complications (adjusted odds ratio [aOR], 8.74; 95% confidence interval [CI], 3.16–24.20), (2) revisions including generator changes (aOR, 2.4; 95% CI, 1.59–3.63), (3) an elevated international normalized ratio (INR) >1.5 (aOR, 1.56; 95% CI, 1.12–2.18), and (4) methicillin-resistant Staphylococcus aureus colonization (aOR, 9.56; 95% CI, 1.55–27.77). Clinically effective prevention interventions included preprocedural skin cleaning with chlorhexidine versus other topical agents (aOR, 0.41; 95% CI, 0.22–0.76) and receipt of β-lactam antimicrobial prophylaxis versus vancomycin (aOR, 0.60; 95% CI, 0.37–0.96). The use of mesh pockets and continuation of antimicrobial prophylaxis after skin closure were not associated with reduced infection risk.
Conclusions: These findings regarding the real-world clinical effectiveness of different prevention strategies can be applied to the development of evidence-based protocols and infection prevention guidelines specific to the electrophysiology laboratory.
Commentary
Minding the gap: Rethinking implementation of antimicrobial stewardship in India
- Payal K. Patel
- Published online by Cambridge University Press: 14 May 2019, pp. 520-521
Original Article
Association between universal gloving and healthcare-associated infections: A systematic literature review and meta-analysis
- Nai-Chung N. Chang, Ashley E. Kates, Melissa A. Ward, Elizabeth J. Kiscaden, Heather Schacht Reisinger, Eli N. Perencevich, Marin L. Schweizer, for the CDC Prevention Epicenters Program
- Published online by Cambridge University Press: 17 May 2019, pp. 755-760
Objective:
Healthcare-associated infections (HAIs) are a significant burden on healthcare facilities. Universal gloving is a horizontal intervention to prevent transmission of pathogens that cause HAI. In this meta-analysis, we aimed to identify whether implementation of universal gloving is associated with decreased incidence of HAI in clinical settings.
Methods: A systematic literature search was conducted to find all relevant publications using search terms for universal gloving and HAIs. Pooled incidence rate ratios (IRRs) and 95% confidence intervals (CIs) were calculated using random-effects models. Heterogeneity was evaluated using the Woolf test and the I² statistic.
Results: In total, 8 studies were included. These studies were moderately to substantially heterogeneous (I² = 59%) and had varied results. Stratified analyses showed a nonsignificant association between universal gloving and the incidence of methicillin-resistant Staphylococcus aureus (MRSA; pooled IRR, 0.94; 95% CI, 0.79–1.11) and vancomycin-resistant enterococci (VRE; pooled IRR, 0.94; 95% CI, 0.69–1.28). Studies that implemented universal gloving alone showed a significant association with decreased incidence of HAI (IRR, 0.77; 95% CI, 0.67–0.89), but studies implementing universal gloving as part of intervention bundles showed no significant association with incidence of HAI (IRR, 0.95; 95% CI, 0.86–1.05).
Conclusions: Universal gloving may be associated with a small protective effect against HAI. Despite limited data, universal gloving may be considered in high-risk settings, such as pediatric intensive care units. Further research should be performed to determine the effects of universal gloving on a broader range of pathogens, including gram-negative pathogens.
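The pooled IRRs here are random-effects summaries of per-study log rate ratios. A DerSimonian-Laird pool is the textbook construction for this, shown below with invented study estimates; the review's per-study data are not in the abstract, and its exact estimator is an assumption.

```python
import numpy as np

# Invented per-study IRRs with 95% CIs.
irr = np.array([0.80, 1.05, 0.90, 0.70])
lo  = np.array([0.60, 0.85, 0.65, 0.50])
hi  = np.array([1.07, 1.30, 1.25, 0.98])

y = np.log(irr)                               # log incidence rate ratios
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE recovered from CI width
w = 1 / se**2                                 # inverse-variance (fixed) weights

ybar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar) ** 2)               # Cochran's Q
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_star = 1 / (se**2 + tau2)                   # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100        # I² heterogeneity

print(f"pooled IRR {np.exp(pooled):.2f} "
      f"(95% CI, {np.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{np.exp(pooled + 1.96 * se_pooled):.2f}); I² = {i2:.0f}%")
```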
A recipe for antimicrobial stewardship success: Using intervention mapping to develop a program to reduce antibiotic overuse in long-term care
- Andrea Chambers, Sam MacFarlane, Rosemary Zvonar, Gerald Evans, Julia E. Moore, Bradley J. Langford, Anne Augustin, Sue Cooper, Jacquelyn Quirk, Liz McCreight, Gary Garber
- Published online by Cambridge University Press: 05 November 2018, pp. 24-31
Objective:
To better understand barriers and facilitators that contribute to antibiotic overuse in long-term care, and to use this information to develop an evidence- and theory-informed program.
Methods: Information on barriers and facilitators associated with the assessment and management of urinary tract infections was identified from a mixed-methods survey and from focus groups with stakeholders working in long-term care. Each barrier or facilitator was mapped to corresponding determinants of behavior change, as described by the theoretical domains framework (TDF). The Rx for Change database was used to identify strategies to address the key determinants of behavior change.
Results: In total, 19 distinct barriers and facilitators were mapped to 8 domains from the TDF: knowledge, skills, environmental context and resources, professional role or identity, beliefs about consequences, social influences, emotions, and reinforcements. The assessment of barriers and facilitators informed the need for a multifaceted approach including strategies (1) to establish buy-in for the changes; (2) to align organizational policies and procedures; (3) to provide education and ongoing coaching support to staff; (4) to provide information and education to residents and families; (5) to establish process surveillance with feedback to staff; and (6) to deliver reminders.
Conclusions: The stepped approach was valuable for ensuring that locally relevant barriers and facilitators to practice change were addressed in the development of a regional program to help long-term care facilities minimize antibiotic prescribing for asymptomatic bacteriuria. This approach provides considerable opportunity to advance the design and impact of antimicrobial stewardship programs.
A methodological comparison of risk scores versus decision trees for predicting drug-resistant infections: A case study using extended-spectrum beta-lactamase (ESBL) bacteremia
- Katherine E. Goodman, Justin Lessler, Anthony D. Harris, Aaron M. Milstone, Pranita D. Tamma
- Published online by Cambridge University Press: 04 March 2019, pp. 400-407
Background:
Timely identification of multidrug-resistant gram-negative infections remains an epidemiological challenge. Statistical models for predicting drug resistance can offer utility where rapid diagnostics are unavailable or resource-impractical. Logistic regression–derived risk scores are common in the healthcare epidemiology literature. Machine learning–derived decision trees are an alternative approach for developing decision support tools. Our group previously reported on a decision tree for predicting ESBL bloodstream infections. Our objective in the current study was to develop a risk score from the same ESBL dataset to compare these 2 methods and to offer general guiding principles for using each approach.
Methods: Using a dataset of 1,288 patients with Escherichia coli or Klebsiella spp. bacteremia, we generated a risk score to predict the likelihood that a bacteremic patient was infected with an ESBL producer. We evaluated discrimination of the original and cross-validated models using receiver operating characteristic curves and C statistics. We compared risk score and decision tree performance, and we reviewed their practical and methodological attributes.
Results: In total, 194 patients (15%) had bacteremia caused by an ESBL producer. The clinical risk score included 14 variables, compared with the 5 variables in the decision tree. The positive and negative predictive values of the risk score and the decision tree were similar (>90%), but the C statistic of the risk score (0.87) was 10% higher.
Conclusions: A decision tree and a risk score performed similarly for predicting ESBL infection. The decision tree was more user-friendly, with fewer variables for the end user, whereas the risk score offered higher discrimination and greater flexibility for adjusting sensitivity and specificity.
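The methodological contrast generalizes: a logistic model scores every candidate predictor, while a shallow tree asks a handful of sequential questions. A schematic comparison on synthetic data standing in for the clinical dataset; the sklearn estimators and parameters here are illustrative, not the authors' software or specification.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 1,288 patients, 14 candidate predictors, ~15% prevalence.
X, y = make_classification(n_samples=1288, n_features=14,
                           weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Risk-score analogue: logistic regression over all candidate predictors.
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Tree analogue: pruned to a few splits, mirroring the 5-variable tree.
dt = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

for name, model in [("risk score (logistic)", lr), ("decision tree", dt)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: C statistic = {auc:.2f}")
```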
High versus low intensity: What is the optimal approach to prospective audit and feedback in an antimicrobial stewardship program?
- Bradley J. Langford, Kevin A. Brown, April J. Chan, Mark Downing
- Published online by Cambridge University Press: 22 October 2019, pp. 1344-1347
Background:
Antimicrobial stewardship program (ASP) interventions, such as prospective audit and feedback (PAF), have been shown to reduce antimicrobial use and improve patient outcomes. However, the optimal approach to PAF is unknown.
Objective: We examined the impact of high-intensity, interdisciplinary rounds-based PAF compared to low-intensity PAF on antimicrobial use on internal medicine wards in a 400-bed community hospital.
Methods: Prior to the intervention, ASP pharmacists performed low-intensity PAF focused on targeted antibiotics, with recommendations made directly to the internist for each patient. High-intensity, rounds-based PAF was then introduced sequentially to 5 internal medicine wards. This PAF format included twice-weekly interdisciplinary rounds with a review of all internal medicine patients receiving any antimicrobial agent. Antibiotic use and clinical outcomes were measured before and after the transition to high-intensity PAF. An interrupted time-series analysis was performed, adjusting for seasonal and secular trends.
Results: With the transition from low-intensity to high-intensity PAF, overall usage fell from 483 defined daily doses (DDD) per 1,000 patient days (PD) during the low-intensity phase to 442 DDD/1,000 PD in the high-intensity phase (difference, −42; 95% confidence interval [CI], −74 to −9). The reduction in usage was more pronounced in the adjusted analysis, in the latter half of the high-intensity period, and for targeted agents. No differences in clinical outcomes were seen in the adjusted analysis.
Conclusions: High-intensity PAF was associated with a reduction in antibiotic use compared to a low-intensity approach, without any adverse impact on patient outcomes. A decision to implement a high-intensity PAF approach should be weighed against the increased workload required.
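DDD/1,000 PD normalizes the amount of each drug dispensed by its WHO-defined daily dose and by ward census. A toy calculation under that definition; the drug amounts and census figure are invented, while the reference values shown are the published WHO adult DDDs for those agents.

```python
# WHO ATC/DDD reference values (grams per DDD): ceftriaxone 2 g parenteral,
# ciprofloxacin 1 g oral.
ddd_reference = {"ceftriaxone": 2.0, "ciprofloxacin": 1.0}
grams_dispensed = {"ceftriaxone": 600.0, "ciprofloxacin": 250.0}  # invented
patient_days = 1800                                               # invented census

total_ddd = sum(grams_dispensed[d] / ddd_reference[d] for d in grams_dispensed)
rate = total_ddd / patient_days * 1000
print(f"{rate:.0f} DDD/1,000 patient days")  # -> 306
```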
Cefazolin as surgical antimicrobial prophylaxis in hysterectomy: A systematic review and meta-analysis of randomized controlled trials
- Aurora Pop-Vicas, Stephen Johnson, Nasia Safdar
- Published online by Cambridge University Press: 05 December 2018, pp. 142-149
Objective:
Current practice guidelines recommend cefazolin, cefoxitin, cefotetan, or ampicillin-sulbactam as first-line antibiotic prophylaxis in hysterectomy. We undertook this systematic review and meta-analysis of randomized controlled trials (RCTs) to determine whether cefazolin, with its limited antianaerobic spectrum, is as effective in preventing surgical site infection (SSI) as the other first-choice antimicrobials that have more extensive antianaerobic activity.
Methods: We searched PubMed, Scopus, Web of Science, Cochrane Central, and EMBASE for relevant RCTs in any language up to January 23, 2018. We included only trials that measured SSI (our primary outcome), defined as superficial, deep, or organ space. We excluded trials of β-lactams no longer in clinical use.
Results: In terms of SSI incidence, cefazolin was not inferior to its comparator in 12 of the 13 individual RCTs included in the analysis. The meta-analysis summary estimate showed a significantly higher SSI risk with cefazolin versus cefoxitin or cefotetan (risk ratio, 1.7; 95% CI, 1.04–2.77; P = .03). However, most studies used nonstandardized dosing and duration of antimicrobial prophylaxis, had indeterminate or high risk of bias, did not include patients with gynecological malignancies, and/or were older RCTs not reflective of current clinical practice.
Conclusion: Given the inherent limitations of older RCTs with limited relevance to contemporary surgery, an RCT of cefazolin versus regimens with significant antianaerobic spectrum is needed to establish the optimal choice for SSI prevention in hysterectomy.
The case for a population standardized infection ratio (SIR): A metric that marries the device SIR to the standardized utilization ratio (SUR)
- Mohamad G. Fakih, Ren-Huai Huang, Angelo Bufalino, Thomas Erlinger, Lisa Sturm, Ann Hendrich, Ziad Haydar
- Published online by Cambridge University Press: 24 June 2019, pp. 979-982
Background:
The device standardized infection ratio (SIR) is used to compare unit and hospital performance for different publicly reported infections. Interventions to reduce unnecessary device use may select a higher-risk population, leading to a paradoxical increase in SIR for some high-performing facilities. The standardized utilization ratio (SUR) adjusts for device use for different units and facilities.
Methods: We calculated the device SIR (based on actual device days) and the population SIR (defined as Σ observed events divided by Σ predicted events based on predicted device days), adjusting for the facility SUR, for both central-line–associated bloodstream infections (CLABSIs) and catheter-associated urinary tract infections (CAUTIs) in 84 hospitals from a single system for calendar years 2016 and 2017.
Results: The central-line SUR was 1.02 for 801,172 central-line days, with a device SIR of 0.76 and a population SIR of 0.78, a 1.6% relative increase. On the other hand, the urinary catheter SUR was 0.90 for 757,504 urinary catheter days, with a device SIR of 0.84 and a population SIR of 0.76, a 10.0% relative decrease. The cumulative attributable difference for CAUTI against a target SIR of 1 was −135.4 for the device SIR compared to −203.66 for the population SIR, a 50.8% increase in prevented events.
Conclusion: The population SIR accounts for predicted device utilization; thus, it is an attractive metric with which to address the overall risk of infection or harm to a patient population. It also reduces the risk of selection bias that may affect the device SIR when interventions reduce device use.
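Putting the definitions together: the SUR compares observed with predicted device days, the device SIR divides observed events by events predicted from observed device days, and the population SIR divides by events predicted from predicted device days. A worked toy example (all inputs invented) showing why a SUR below 1 pulls the population SIR below the device SIR, as in the CAUTI result above:

```python
observed_events = 64          # observed CAUTIs (invented)
observed_days = 75_750        # observed catheter days (invented)
sur = 0.90                    # SUR = observed days / predicted days
rate_predicted = 0.0010       # predicted infections per catheter day (invented)

predicted_days = observed_days / sur
predicted_events_device = rate_predicted * observed_days     # from actual use
predicted_events_population = rate_predicted * predicted_days  # from predicted use

device_sir = observed_events / predicted_events_device          # ~0.84
population_sir = observed_events / predicted_events_population  # ~0.76

# Cumulative attributable difference against a target SIR of 1:
cad_device = observed_events - predicted_events_device          # ~-11.8
cad_population = observed_events - predicted_events_population  # ~-20.2
print(device_sir, population_sir, cad_device, cad_population)
```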
Surgical site infection risk following cesarean deliveries covered by Medicaid or private insurance
- Sarah H. Yi, Kiran M. Perkins, Sophia V. Kazakova, Kelly M. Hatfield, David G. Kleinbaum, James Baggs, Rachel B. Slayton, John A. Jernigan
- Published online by Cambridge University Press: 09 April 2019, pp. 639-648
Objective:
To compare risk of surgical site infection (SSI) following cesarean delivery between women covered by Medicaid and private health insurance.
Study design: Retrospective cohort.
Study population: Cesarean deliveries covered by Medicaid or private insurance and reported to the National Healthcare Safety Network (NHSN) and state inpatient discharge databases by hospitals in California (2011–2013).
Methods: Deliveries reported to the NHSN and state inpatient discharge databases were linked to identify SSIs in the 30 days following cesarean delivery, the primary payer, and patient and procedure characteristics. Additional hospital-level characteristics were obtained from public databases. Relative risk of SSI by primary payer was assessed using multivariable logistic regression adjusting for patient, procedure, and hospital characteristics and accounting for facility-level clustering.
Results: Of 291,757 cesarean deliveries included, 48% were covered by Medicaid. SSIs were detected following 1,055 deliveries covered by Medicaid (0.75%) and 955 deliveries covered by private insurance (0.63%) (unadjusted odds ratio, 1.2; 95% confidence interval [CI], 1.1–1.3; P < .0001). The adjusted odds of SSI following cesarean deliveries covered by Medicaid were 1.4 (95% CI, 1.2–1.6; P < .0001) times those of deliveries covered by private insurance.
Conclusions: In this study, the largest and only multicenter study to investigate SSI risk following cesarean delivery by primary payer, Medicaid-insured women had a higher risk of infection than privately insured women. These findings suggest the need to evaluate and better characterize the quality of maternal healthcare for, and the needs of, women covered by Medicaid to inform targeted infection prevention and policy.
ESBL-colonization at ICU admission: impact on subsequent infection, carbapenem-consumption, and outcome
- Aurélien Emmanuel Martinez, Andreas Widmer, Reno Frei, Hans Pargger, Daniel Tuchscherer, Stephan Marsch, Adrian Egli, Sarah Tschudin-Sutter
- Published online by Cambridge University Press: 21 February 2019, pp. 408-413
Objective:
To determine whether colonization with extended-spectrum β-lactamase–producing Enterobacteriaceae (ESBL-PE) predicts the risk for subsequent infection and impacts carbapenem-consumption and outcome in intensive care unit (ICU) patients.
Design: Prospective cohort study.
Setting: The 2 ICUs of the University Hospital Basel, Switzerland.
Patients: All patients admitted to the 2 ICUs who required mechanical ventilation and had an expected ICU stay >48 hours.
Methods: Patients were routinely screened for ESBL-PE carriage by rectal swab on admission. Competing-risk regression analyses were applied to calculate hazard ratios (HRs) for infection with ESBL-PE and for mortality. Length of hospital stay, length of ICU stay, and duration of carbapenem exposure were compared using the Mann-Whitney U test.
Results: Among 302 patients, 24 (8.0%) were colonized with ESBL-PE on ICU admission. Infections with ESBL-PE occurred in 4 patients, of whom 3 (75%) had been identified as ESBL-PE colonized on admission. ESBL-PE colonization on admission was associated with subsequent ESBL-PE infection (hazard ratio [HR], 25.52; 95% confidence interval [CI], 2.40–271.41; P = .007) and with exposure to carbapenems (HR, 2.42; 95% CI, 1.01–5.79; P = .047), whereas the duration of carbapenem exposure did not differ by colonization status (median, 7 days [IQR, 3–8] vs 6 days [IQR, 3–9]; P = .983). Patients colonized with ESBL-PE were not at increased risk for death overall (HR, 1.00; 95% CI, 0.44–2.30; P = .993) or for death attributable to infection (HR, 1.20; 95% CI, 0.28–5.11; P = .808).
Conclusions: Screening for ESBL-PE colonization on ICU admission may allow the identification of patients at highest risk for ESBL-PE infection and the appropriate allocation of empiric carbapenem treatment.
Variability in antimicrobial use in pediatric ventilator-associated events
- Manjiree V. Karandikar, Susan E. Coffin, Gregory P. Priebe, Thomas J. Sandora, Latania K. Logan, Gitte Y. Larsen, Philip Toltzis, James E. Gray, Michael Klompas, Julia S. Sammons, Marvin B. Harper, Kelly Horan, Matthew Lakoma, Noelle M. Cocoros, Grace M. Lee
- Published online by Cambridge University Press: 09 November 2018, pp. 32-39
Objective:
To assess variability in antimicrobial use and associations with infection testing in pediatric ventilator-associated events (VAEs).
Design: Descriptive retrospective cohort with a nested case-control study.
Setting: Pediatric intensive care units (PICUs), cardiac intensive care units (CICUs), and neonatal intensive care units (NICUs) in 6 US hospitals.
Patients: Children ≤18 years of age ventilated for ≥1 calendar day.
Methods: We identified patients with pediatric ventilator-associated conditions (VACs), pediatric VACs with antimicrobial use for ≥4 days (AVACs), and possible ventilator-associated pneumonia (PVAP, defined as pediatric AVAC with a positive respiratory diagnostic test) according to previously proposed criteria.
Results: Among 9,025 ventilated children, we identified 192 VAC cases: 43 in CICUs, 70 in PICUs, and 79 in NICUs. AVAC criteria were met in 79 VAC cases (41%) (58% in CICUs, 51% in PICUs, and 23% in NICUs), and the proportion varied by hospital (CICUs, 20–67%; PICUs, 0–70%; NICUs, 0–43%). The type and duration of AVAC antimicrobials varied by ICU type. AVAC cases in CICUs and PICUs received broad-spectrum antimicrobials more often than those in NICUs. Among AVAC cases, 39% had respiratory infection diagnostic testing performed; PVAP was identified in 15 VAC cases. Moreover, 73% of AVAC cases had no associated positive respiratory or nonrespiratory diagnostic test.
Conclusions: Antimicrobial use is common in pediatric VAC, with variability in the spectrum and duration of antimicrobials within hospitals and across ICU types, whereas PVAP is uncommon. Prolonged antimicrobial use despite low rates of PVAP or positive laboratory testing for infection suggests that AVAC may provide a lever for antimicrobial stewardship programs to improve utilization.
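The VAC → AVAC → PVAP hierarchy in the Methods is essentially a small decision rule. A schematic under the simplifying assumption that the ventilator-settings criteria for VAC have already been evaluated; the field names are hypothetical, not the surveillance definitions' exact wording:

```python
from dataclasses import dataclass

@dataclass
class VentEpisode:
    meets_vac_criteria: bool          # worsening oxygenation after stability (precomputed)
    antimicrobial_days: int           # days of qualifying new antimicrobial use
    positive_respiratory_test: bool   # respiratory diagnostic test result

def classify(ep: VentEpisode) -> str:
    """Apply the VAC -> AVAC -> PVAP hierarchy described in the Methods."""
    if not ep.meets_vac_criteria:
        return "no event"
    if ep.antimicrobial_days < 4:
        return "VAC"
    if ep.positive_respiratory_test:
        return "PVAP"                 # AVAC plus a positive respiratory test
    return "AVAC"

print(classify(VentEpisode(True, 6, False)))  # -> "AVAC"
```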