Clostridioides difficile infection (CDI) may be misdiagnosed if testing is performed in the absence of signs or symptoms of disease. This study sought to support appropriate testing by estimating the impact of signs, symptoms, and healthcare exposures on pre-test likelihood of CDI.
Methods:
A panel of fifteen experts in infectious diseases participated in a modified UCLA/RAND Delphi study to estimate likelihood of CDI. Consensus, defined as agreement by >70% of panelists, was assessed via a REDCap survey. Items without consensus were discussed in a virtual meeting followed by a second survey.
Results:
All fifteen panelists completed both surveys (100% response rate). In the initial survey, consensus was present on 6 of 15 (40%) items related to risk of CDI. After panel discussion and clarification of questions, consensus (>70% agreement) was reached on all remaining items in the second survey. Antibiotics were identified as the primary risk factor for CDI and grouped into three categories: high-risk (likelihood ratio [LR] 7, 93% agreement among panelists in first survey), low-risk (LR 3, 87% agreement in first survey), and minimal-risk (LR 1, 71% agreement in first survey). Other major factors included new or unexplained severe diarrhea (e.g., ≥ 10 liquid bowel movements per day; LR 5, 100% agreement in second survey) and severe immunosuppression (LR 5, 87% agreement in second survey).
Conclusion:
Infectious disease experts concurred on the importance of signs, symptoms, and healthcare exposures for diagnosing CDI. The resulting risk estimates can be used by clinicians to optimize CDI testing and treatment.
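The likelihood ratios above can be combined with a clinician's pre-test probability via the odds form of Bayes' theorem. A minimal sketch in Python (the function name, the example probabilities, and the multiplicative combination of LRs — which assumes the findings are conditionally independent — are illustrative assumptions, not part of the study):

```python
def post_test_probability(pre_test_prob, *likelihood_ratios):
    """Apply one or more likelihood ratios to a pre-test probability
    using the odds form of Bayes' theorem:
    post-test odds = pre-test odds * LR1 * LR2 * ..."""
    odds = pre_test_prob / (1 - pre_test_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical case: 5% pre-test probability of CDI, recent high-risk
# antibiotic (LR 7) plus new severe diarrhea (LR 5).
print(round(post_test_probability(0.05, 7, 5), 2))  # 0.65
```

Multiplying LRs this way is a common bedside approximation; it overstates certainty when the findings are correlated.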
We compared the number of blood-culture events before and after the introduction of a blood-culture algorithm and provider feedback. Secondary objectives were to compare blood-culture positivity and negative safety signals before and after the intervention.
Design:
Prospective cohort design.
Setting:
Two surgical intensive care units (ICUs): general and trauma surgery, and cardiothoracic surgery.
Patients:
Patients aged ≥18 years and admitted to the ICU at the time of the blood-culture event.
Methods:
We used an interrupted time series to compare rates of blood-culture events (ie, blood-culture events per 1,000 patient days) before and after the algorithm implementation with weekly provider feedback.
Results:
The blood-culture event rate decreased from 100 to 55 blood-culture events per 1,000 patient days in the general surgery and trauma ICU (72% reduction; incidence rate ratio [IRR], 0.38; 95% confidence interval [CI], 0.32–0.46; P < .01) and from 102 to 77 blood-culture events per 1,000 patient days in the cardiothoracic surgery ICU (55% reduction; IRR, 0.45; 95% CI, 0.39–0.52; P < .01). We did not observe any differences in average monthly antibiotic days of therapy, mortality, or readmissions between the pre- and postintervention periods.
Conclusions:
We implemented a blood-culture algorithm with data feedback in 2 surgical ICUs, and we observed significant decreases in the rates of blood-culture events without an increase in negative safety signals, including ICU length of stay, mortality, antibiotic use, or readmissions.
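The metric used above — blood-culture events per 1,000 patient days — and a crude before/after rate ratio can be sketched as follows. The counts are hypothetical, and the naive ratio shown is not the segmented-regression IRR reported in the abstract, which additionally adjusts for baseline trend:

```python
def rate_per_1000(events, patient_days):
    """Events per 1,000 patient days, the standard utilization metric."""
    return 1000 * events / patient_days

# Hypothetical counts chosen to match the pre/post rates of 100 and 55.
pre = rate_per_1000(400, 4000)    # 100.0 per 1,000 patient days
post = rate_per_1000(275, 5000)   # 55.0 per 1,000 patient days
print(round(post / pre, 2))       # crude rate ratio: 0.55
```

An interrupted time-series model compares the post-period rate to the rate *projected* from the pre-period trend, so its IRR can differ from this crude ratio.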
We assessed Oxivir Tb wipe disinfectant residue in a controlled laboratory setting to evaluate whether it contributes to the low environmental contamination observed with SARS-CoV-2. The frequency of viral RNA detection did not differ significantly between the intervention and control arms on day 3 (P = .14). Viability of environmental contamination is low, and residual disinfectant did not contribute significantly to this low contamination.
Urine cultures collected from catheterized patients have a high likelihood of false-positive results due to colonization. We examined the impact of a clinical decision support (CDS) tool that includes catheter information on test utilization and patient-level outcomes.
Methods:
This before-and-after intervention study was conducted at 3 hospitals in North Carolina. In March 2021, a CDS tool was incorporated into urine-culture order entry in the electronic health record, providing education about indications for culture and suggesting catheter removal or exchange prior to specimen collection for catheters present >7 days. We used an interrupted time-series analysis with Poisson regression to evaluate the impact of CDS implementation on utilization of urinalyses and urine cultures, antibiotic use, and other outcomes during the pre- and postintervention periods.
Results:
The CDS tool was prompted in 38,361 instances of urine cultures ordered across all patients, including 2,133 catheterized patients, during the postintervention study period. There was a significant decrease in urine-culture orders (1.4% decrease per month; P < .001) and in antibiotic use for UTI indications (2.3% decrease per month; P = .006), but there was no significant decline in CAUTI rates in the postintervention period. Clinicians opted for urinary catheter removal in 183 (8.5%) instances. Evaluation of the safety reporting system revealed no apparent increase in safety events related to catheter removal or reinsertion.
Conclusion:
CDS tools can aid in optimizing urine culture collection practices and can serve as a reminder for removal or exchange of long-term indwelling urinary catheters at the time of urine-culture collection.
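Monthly percent changes like the 1.4% decline above are typically derived from the trend coefficient of a Poisson regression as exp(beta) − 1. A small illustration (the coefficient value is hypothetical, back-calculated to yield roughly −1.4% per month):

```python
import math

def monthly_percent_change(beta):
    """Convert a Poisson-regression monthly trend coefficient into a
    percent change per month: 100 * (exp(beta) - 1)."""
    return 100 * (math.exp(beta) - 1)

beta = -0.0141  # hypothetical per-month log-rate coefficient
print(round(monthly_percent_change(beta), 1))  # -1.4
```

The same transformation applies to the 2.3% monthly decline in antibiotic use; only the fitted coefficient differs.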
Clinicians and laboratories routinely use urinalysis (UA) parameters to determine whether antimicrobial treatment and/or urine cultures are needed. Yet the performance of individual UA parameters and common thresholds for action are not well defined and may vary across different patient populations.
Methods:
In this retrospective cohort study, we included all encounters with UAs ordered in the 24 hours prior to a urine culture between 2015 and 2020 at 3 North Carolina hospitals. We evaluated the performance of relevant UA parameters as potential outcome predictors, including sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV). We also combined 18 different UA criteria and used receiver operating characteristic curves to identify the 5 best-performing models for predicting significant bacteriuria (≥100,000 colony-forming units of bacteria/mL).
Results:
In 221,933 encounters during the 6-year study period, no single UA parameter had both high sensitivity and high specificity for predicting bacteriuria. Absence of leukocyte esterase and pyuria had a high NPV for significant bacteriuria. Combined UA parameters did not outperform pyuria alone with regard to NPV. The high NPV (≥0.90) of pyuria was maintained in most patient subgroups, except females aged ≥65 years and patients with indwelling catheters.
Conclusion:
When used as part of a diagnostic workup, UA parameters should be leveraged for their NPV rather than their sensitivity. Because many laboratories and hospitals use reflex urine-culture algorithms, their workflows should include clinical decision support and/or education to target symptomatic patients and to focus on populations in which absence of pyuria has a high NPV.
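The sensitivity/specificity/NPV/PPV distinction above comes directly from the 2×2 diagnostic table. A small sketch with hypothetical counts shows how a test can retain a high NPV despite a modest PPV when disease prevalence is low:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics from true/false
    positive and negative counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 100 bacteriuric and 900 non-bacteriuric encounters.
m = diagnostic_metrics(tp=80, fp=120, fn=20, tn=780)
print(m["npv"], m["ppv"])  # 0.975 0.4 — high NPV, modest PPV
```

With low prevalence, a negative result rules out well (NPV 0.975) even though a positive result confirms poorly (PPV 0.4), which is why the abstract recommends leaning on NPV.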
To evaluate the pattern of blood-culture utilization among a cohort of 6 hospitals to identify potential opportunities for diagnostic stewardship.
Methods:
We completed a retrospective analysis of blood-culture utilization during adult inpatient or emergency department (ED) encounters in 6 hospitals from May 2019 to April 2020. We investigated 2 measures of blood-culture utilization rates (BCURs): the total number of blood cultures, defined as a unique accession number per 1,000 patient days (BCX) and a new metric of blood-culture events per 1,000 patient days to account for paired culture practices. We defined a blood-culture event as an initial blood culture and all subsequent samples for culture drawn within 12 hours for patients with an inpatient or ED encounter. Cultures were evaluated by unit type, positivity and contamination rates, and other markers evaluating the quality of blood-culture collection.
Results:
In total, 111,520 blood cultures, 52,550 blood culture events, 165,456 inpatient admissions, and 568,928 patient days were analyzed. Overall, the mean BCUR was 196 blood cultures per 1,000 patient days, with 92 blood culture events per 1,000 patient days (range, 64–155 among hospitals). Furthermore, 7% of blood-culture events were single culture events, 55% began in the ED, and 77% occurred in the first 3 hospital days. Among all blood cultures, 7.7% grew a likely pathogen, 2.1% were contaminated, and 5.9% of first blood cultures were collected after the initiation of antibiotics.
Conclusions:
Blood-culture utilization varied by hospital and was heavily influenced by ED culture volumes. Hospital comparisons of blood-culture metrics can assist in identifying opportunities to optimize blood-culture collection practices.
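The blood-culture event definition above — an initial culture plus all subsequent cultures drawn within 12 hours — can be sketched as a simple grouping pass over sorted draw times. This sketch assumes the 12-hour window is anchored to the first draw of each event (the text's wording), and the example timestamps are hypothetical:

```python
from datetime import datetime, timedelta

def count_culture_events(draw_times, window_hours=12):
    """Count blood-culture events: each event is an initial draw plus
    all subsequent draws within `window_hours` of that initial draw."""
    window = timedelta(hours=window_hours)
    events = 0
    event_start = None
    for t in sorted(draw_times):
        if event_start is None or t - event_start > window:
            events += 1          # draw falls outside the window: new event
            event_start = t
    return events

# Hypothetical draws: a pair at 08:00 and 10:00 is one event;
# a draw the next morning starts a second event.
draws = [datetime(2020, 1, 1, 8), datetime(2020, 1, 1, 10),
         datetime(2020, 1, 2, 9)]
print(count_culture_events(draws))  # 2
```

Counting events rather than individual cultures avoids penalizing the recommended practice of drawing paired culture sets.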
The extensive use of urinalysis for screening and monitoring in diverse clinical settings usually identifies abnormal urinalysis parameters in patients with no suspicion of urinary tract infection, which in turn triggers urine cultures, inappropriate antimicrobial use, and associated harms such as Clostridioides difficile infection. We highlight how the urinalysis is misused and suggest deconstructing it to better align with evolving patterns of clinical use and the differential diagnosis being targeted. Reclassifying urinalysis components into infectious and noninfectious panels, and interpreting results in the context of an individual patient's pretest probability of disease, is a novel approach to promote proper urine testing, support antimicrobial stewardship, and achieve better outcomes.
To determine the impact of electronic health record (EHR)–based interventions and test restriction on Clostridioides difficile tests (CDTs) and hospital-onset C. difficile infection (HO-CDI).
Design:
Quasi-experimental study in 3 hospitals.
Setting:
957-bed academic (hospital A), 354-bed (hospital B), and 175-bed (hospital C) academic-affiliated community hospitals.
Interventions:
Three EHR-based interventions were sequentially implemented: (1) an alert when ordering a CDT if laxatives were administered within the prior 24 hours (January 2018); (2) cancellation of CDT orders after 24 hours (October 2018); (3) contextual rule-driven order questions requiring justification when a laxative had been administered or EHR documentation of diarrhea was lacking (July 2019). In February 2019, hospital C implemented a gatekeeper intervention requiring approval for all CDTs after hospital day 3. The impact of the interventions on C. difficile testing and HO-CDI rates was estimated using an interrupted time-series analysis.
Results:
C. difficile testing was already declining in the preintervention period (annual change in incidence rate [IR], 0.79; 95% CI, 0.72–0.87) and did not decrease further with the EHR interventions. The laxative alert was temporally associated with a trend reduction in HO-CDI (annual change in IR from baseline, 0.85; 95% CI, 0.75–0.96) at hospitals A and B. The gatekeeper intervention at hospital C was associated with level (IRR, 0.50; 95% CI, 0.42–0.60) and trend reductions in C. difficile testing (annual change in IR, 0.91; 95% CI, 0.85–0.98) and level (IRR, 0.42; 95% CI, 0.22–0.81) and trend reductions in HO-CDI (annual change in IR, 0.68; 95% CI, 0.50–0.92) relative to the baseline period.
Conclusions:
Test restriction was more effective than EHR-based clinical decision support to reduce C. difficile testing in our 3-hospital system.
We implemented universal severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) testing of patients undergoing surgical procedures as a means to conserve personal protective equipment (PPE). The rate of asymptomatic coronavirus disease 2019 (COVID-19) was <0.5%, which suggests that early local public health interventions were successful. Although our protocol was resource intensive, it prevented exposures to healthcare team members.
To evaluate the National Health Safety Network (NHSN) hospital-onset Clostridioides difficile infection (HO-CDI) standardized infection ratio (SIR) risk adjustment for general acute-care hospitals with large numbers of intensive care unit (ICU), oncology unit, and hematopoietic cell transplant (HCT) patients.
Design:
Retrospective cohort study.
Setting:
Eight tertiary-care referral general hospitals in California.
Methods:
We used FY 2016 data and the published 2015 rebaseline NHSN HO-CDI SIR. We compared facility-wide inpatient HO-CDI events and SIRs, with and without ICU data, oncology and/or HCT unit data, and ICU bed adjustment.
Results:
For these hospitals, the median unmodified HO-CDI SIR was 1.24 (interquartile range [IQR], 1.15–1.34); 7 hospitals qualified for the highest ICU bed adjustment; 1 hospital received the second highest ICU bed adjustment; and all had oncology-HCT units with no additional adjustment per the NHSN. Removal of ICU data and the ICU bed adjustment decreased HO-CDI events (median, −25%; IQR, −20% to −29%) but increased the SIR at all hospitals (median, 104%; IQR, 90%–105%). Removal of oncology-HCT unit data decreased HO-CDI events (median, −15%; IQR, −14% to −21%) and decreased the SIR at all hospitals (median, −8%; IQR, −4% to −11%).
Conclusions:
For tertiary-care referral hospitals with specialized ICUs and a large number of ICU beds, the ICU bed adjustor functions as a global adjustment in the SIR calculation, accounting for the increased complexity of patients in ICUs and non-ICUs at these facilities. However, the SIR decrease with removal of oncology and HCT unit data, even with the ICU bed adjustment, suggests that an additional adjustment should be considered for oncology and HCT units within general hospitals, perhaps similar to what is done for ICU beds in the current SIR.
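The SIR itself is simply observed events divided by the NHSN risk-adjusted predicted events, so any change to the prediction's adjusters (such as the ICU bed adjustment discussed above) directly rescales the ratio. A minimal sketch — the counts are hypothetical, chosen only to reproduce the median SIR of 1.24:

```python
def standardized_infection_ratio(observed, predicted):
    """SIR = observed HO-CDI events / risk-adjusted predicted events.
    NHSN suppresses the SIR when predicted events are < 1."""
    if predicted < 1:
        return None
    return observed / predicted

# Hypothetical facility: 31 observed vs 25 predicted events.
print(standardized_infection_ratio(31, 25))  # 1.24
```

Removing high-risk unit data lowers both the observed count and the predicted count; the SIR rises or falls depending on which shrinks faster, which is exactly the pattern the abstract reports for ICU versus oncology-HCT data.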