Background: Patients with suspected pulmonary tuberculosis (PTB) often require scarce airborne isolation rooms; minimizing their use depends on clinician understanding of sputum and bronchoscopic test characteristics. Limited knowledge can lead to over-testing and unnecessary isolation days, straining hospital resources. Objective: Evaluate the impact of a PTB screening algorithm on reducing unnecessary testing and excess isolation days in patients with low to moderate pre-test probability. Methods: The study occurred 2022–2024 at a 1,286-bed tertiary care hospital in Toronto, Ontario (~880 TB cases annually). Inclusion criteria were inpatients placed on airborne isolation for suspected PTB with orders for ≥3 expectorated sputa, ≥1 induced sputum, bronchoscopy, or combinations thereof. Patients with suspected Mycobacterium avium complex were excluded. A positive case was defined as TB PCR or culture positivity. Harm was defined as PTB exposure due to premature discontinuation of isolation. The algorithm recommended that clinicians collect a single induced sputum for low/moderate-risk patients, with additional testing reserved for high-risk cases. Results: A total of 1,152 samples were collected from 747 patients: 513 expectorated sputa (44%), 194 induced sputa (16.8%), and 445 bronchoscopies (38.6%). The median isolation duration was 6 days, and the turnaround time for results ranged from 3–11 days. The positivity rate was 0.2% when expectorated sputum was performed first (1/513), 2.5% when induced sputum was performed first (3/118), and 1.8% when BAL was performed first (3/169). Among patients with repeated induced sputum testing, all positive results came from the first specimen (Figure 2). Conclusion: These findings illustrate the real-world implications of using a single induced sputum to rule out PTB in patients with low/moderate pre-test probability, potentially reducing airborne isolation days. No added harm via patient exposures was detected with the use of this algorithm.
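The decision rule at the core of this algorithm is simple enough to express directly. The sketch below is a minimal, illustrative Python encoding of the rule as described in the abstract; the function name, risk categories, and return strings are hypothetical, not taken from the study protocol.

```python
def ptb_workup(pretest_risk: str) -> list[str]:
    """Suggest a PTB testing strategy from pre-test probability.

    Illustrative only: mirrors the abstract's rule of a single induced
    sputum for low/moderate-risk patients, with additional testing
    reserved for high-risk cases.
    """
    if pretest_risk in ("low", "moderate"):
        return ["induced sputum x1"]  # one negative specimen rules out PTB
    if pretest_risk == "high":
        return ["induced sputum x1",
                "additional sputa or bronchoscopy per clinical judgment"]
    raise ValueError(f"unknown risk category: {pretest_risk}")

print(ptb_workup("moderate"))  # -> ['induced sputum x1']
```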
Background: The Republic of Korea ranks second among OECD countries for tuberculosis (TB) incidence. National TB control guidelines mandate latent TB infection (LTBI) screening and treatment for healthcare workers (HCWs), especially those in high-risk departments. At our 2,700-bed tertiary hospital in Seoul, annual LTBI screening and treatment have been actively implemented since 2017, targeting HCWs at elevated risk of TB exposure. This study evaluates LTBI conversion rates among high-risk HCWs and the characteristics of HCWs with conversion (converters) over the past five years. Methods: Following national guidelines, HCWs were classified into three high-risk groups: those likely to have routine contact with pulmonary TB patients (Group A), those caring for immunocompromised patients (Group B), and those at risk of respiratory infections despite no routine TB contact (Group C). Annual screening included interferon-gamma release assay (IGRA) and chest radiography. HCWs with positive IGRA results (≥0.35 IU/mL) were strongly encouraged to undergo LTBI treatment. We analyzed data from HCWs working in high-risk tuberculosis units who had worked for more than five years, from 2020 to 2024. HCWs with prior IGRA positivity were excluded. Results: Among 1,467 HCWs, 15.9% (233/1,467) had been diagnosed with LTBI before 2020, while the cumulative LTBI conversion rate between 2020 and 2024 was 5.3% (65/1,234). Annual LTBI conversion rates ranged between 0.7% and 1.5%. The median age of converters was 42 years, significantly older than non-converters (median 38 years; P=0.02). Male converters comprised 24.6% (16/65) compared to 14.6% (171/1,169) in the non-converter group (P=0.03). Longer tenure was observed among converters (median 16 years) than non-converters (median 12 years; P=0.01). Although medical technicians and emergency room staff exhibited higher conversion rates, these differences were not statistically significant. Among LTBI cases, 78.8% completed treatment, with 9.1% demonstrating reversion. The annual incidence of active tuberculosis among HCWs at our hospital declined significantly, to an average of 0.2 cases per year between 2020 and 2024, compared to 4.4 cases per year between 2015 and 2019. Conclusions: Annual LTBI screenings revealed conversion rates of approximately 1%, primarily affecting older, long-tenured, and male HCWs. Active LTBI treatment effectively reduced the risk of active TB among hospital staff.
Background: Candida auris and methicillin-resistant Staphylococcus aureus (MRSA) are prevalent in nursing homes, and both are known to shed profusely from the skin. We evaluated the degree of differential shedding during caregiving activities versus at rest in nursing home residents. Methods: Residents at two nursing homes were screened for C. auris and MRSA using nares, axilla/groin, and peri-rectal swabs. Carriers of C. auris, some of whom also carried MRSA, were evaluated for proximal shedding around their bed during rest and caregiving activities using chromogenic settle plates. Morning caregiving activities (e.g., hygiene care, linen/clothing change) were noted to generally take 12 minutes. For rest, settle plates were placed for a 12-minute period prior to the resident awakening in the morning. For caregiving, settle plates were placed for the 12-minute period of morning activity shortly after awakening. Paired rest-caregiving measurements were taken on three separate days per C. auris carrier. In addition, prior to caregiving, bilateral nares, hands, axilla, groin, and perirectal swabs were taken for C. auris and MRSA culture, along with an axilla/groin swab for measuring chlorhexidine concentration (CHG is used for routine bathing). Logistic regression with person-level clustering analyzed associations between positive settle plates ("shedding") and activity (caregiving versus rest), along with other covariates. Results: The study included 23 C. auris carriers, 15 of whom carried MRSA; 65% were male, 91% had an indwelling device, and 39% had wounds. The mean number of positive body sites was 2.3 for C. auris and 1.2 for MRSA. Median CHG concentration was 156 µg/mL (IQR=39-1250). Shedding occurred more frequently during caregiving versus rest for both C. auris (8/69 vs 1/69, P=0.02) and MRSA (15/69 vs 3/69, P=0.002). In multivariable models (Table), caregiving was associated with increased odds of shedding for both C. auris (OR: 9.25 (95% CI: 1.07-80.35), P=0.04) and MRSA (OR: 6.52 (95% CI: 1.72-24.78), P=0.01). Higher CHG concentrations were non-significantly associated with reduced shedding of both pathogens. Conclusion: C. auris and MRSA shedding increased significantly during caregiving activities, supporting CDC’s current recommendations for enhanced barrier precautions in nursing homes, which involve gown and glove use during high-contact care for carriers of multidrug-resistant organisms. Remarkably, shedding was readily detected within 12 minutes of morning caregiving, highlighting a rapid “plume effect” during resident care.
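The abstract's "logistic regression with person-level clustering" can be implemented several ways; one common choice is a binomial generalized estimating equation (GEE) with an exchangeable working correlation. The sketch below shows that form on simulated placeholder data; the column names, correlation structure, and covariate set are assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "resident_id": np.repeat(range(23), 6),          # 23 carriers, 6 plates each
    "caregiving":  np.tile([0, 0, 0, 1, 1, 1], 23),  # rest vs caregiving
    "shedding":    rng.binomial(1, 0.1, 138),        # placeholder outcome
    "chg_conc":    rng.lognormal(5, 1, 138),         # placeholder CHG (µg/mL)
})

# Binomial GEE: odds of a positive settle plate, clustered on resident
model = smf.gee(
    "shedding ~ caregiving + np.log10(chg_conc)",
    groups="resident_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()
print(np.exp(res.params))      # odds ratios
print(np.exp(res.conf_int()))  # 95% confidence intervals
```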
Background: Multidrug resistance remains one of the top global health threats and has been rising over recent decades, jeopardizing patient outcomes and increasing healthcare costs. This underscores an urgent need to design tools to optimize antibiotic prescribing to target these pathogens. Antibiograms are an essential antimicrobial stewardship tool used to provide guidance for empiric antimicrobial selection and information on local resistance. However, facility-level antibiograms are limited to individual institutions and do not reflect regional variations in resistance. Previous studies have demonstrated the feasibility and importance of generating regional antibiograms to better inform regional infection prevention and spearhead antimicrobial stewardship initiatives. Regional antibiograms also offer a valuable resource for community hospitals and health centers with lower pathogen prevalence and limited access to infectious diseases-trained personnel. This study aims to curate a regional antibiogram to analyze and understand antimicrobial susceptibility and resistance patterns of targeted pathogens across Metro Atlanta. Methods: This descriptive study evaluated antibiograms from multiple hospitals across the Atlanta metropolitan area. In September 2019, flagship hospitals of five different health-systems within metro Atlanta were surveyed using a questionnaire to collect information on basic facility and microbiology laboratory characteristics. Three health-systems responded, providing inpatient antibiogram data from the 2019 calendar year. These data were combined to create a single, cumulative antibiogram with 18 clinically relevant combinations of microorganisms and antibiotics. In total, data from 10 different hospitals were aggregated to create one regional antibiogram. Results: Data from 10 hospitals were combined to create one regional antibiogram with 18 organisms and 21 antibiotics. The overall prevalence of methicillin-resistant Staphylococcus aureus (MRSA) was 45.1%; vancomycin-resistant Enterococcus (VRE), 15.8%; carbapenem-resistant Acinetobacter baumannii (CRAB), 13.2%; carbapenem-resistant Enterobacterales (CRE), 1.1%; and carbapenem-resistant Pseudomonas aeruginosa (CR-PA), 13.3%. Carbapenem resistance rates were compared between carbapenem-restrictive (n=4) and carbapenem-non-restrictive (n=2) hospitals. The prevalence of CR-PA was significantly higher in carbapenem-non-restrictive hospitals than in carbapenem-restrictive hospitals, 19.6% vs 11.6% (p < 0.001). Conclusion: The development of a regional cumulative antibiogram to capture resistance patterns of targeted pathogens across multiple health-systems in a large metropolitan area is feasible. Data from a regional antibiogram are useful in assessing susceptibilities and can serve as a valuable antimicrobial stewardship tool for institutions without access to their own antibiogram. Additionally, implementation of targeted stewardship policies, such as carbapenem restriction, shows promise to slow the development of resistant pathogens, thereby improving patient outcomes.
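Pooling facility antibiograms into a regional one reduces, for each bug-drug pair, to an isolate-weighted susceptibility average. The sketch below illustrates that aggregation with pandas; the facility rows and counts are invented for illustration, and the abstract does not describe its exact pooling method.

```python
import pandas as pd

# facility, organism, antibiotic, isolate count, % susceptible (invented)
rows = [
    ("A", "S. aureus", "oxacillin", 1200, 52.0),
    ("B", "S. aureus", "oxacillin",  800, 58.5),
    ("C", "S. aureus", "oxacillin",  450, 56.0),
]
df = pd.DataFrame(rows, columns=["facility", "organism", "antibiotic",
                                 "n_isolates", "pct_susceptible"])

# Weight each facility's %S by its isolate count before pooling
df["n_susceptible"] = df["n_isolates"] * df["pct_susceptible"] / 100
regional = (df.groupby(["organism", "antibiotic"])
              .agg(n=("n_isolates", "sum"), s=("n_susceptible", "sum")))
regional["pct_S"] = (100 * regional["s"] / regional["n"]).round(1)
print(regional[["n", "pct_S"]])  # pooled regional susceptibility
```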
Background: Admission screening for CRO carriage may prevent transmission, but there is a lack of consensus on the best targeted approach. Using a well-characterized cohort of medical intensive care unit (MICU) patients prospectively screened for CRO carriage at time of admission (MAriMbA cohort), we compared the effectiveness of common targeted strategies (singly and in combination) available to hospitals in Illinois to identify MICU patients at risk for CRO carriage, including: (a) screening patients transferred from external facilities (e.g., short- and long-term acute care hospitals); (b) screening patients with a tracheostomy or pressure ulcer; or (c) querying the Illinois XDRO registry for prior CRO history. Methods: Results of rectal swab samples collected within 48 hours of MICU admission during 1/2017-1/2018 and cultured for CROs (carbapenem-resistant Enterobacterales [CRE], CR Pseudomonas aeruginosa [CRPA], and CR Acinetobacter baumannii [CRAB]) were used as the reference standard. Patients’ status as direct transfers from an external healthcare facility and presence of tracheostomy or pressure ulcer were collected prospectively during the MAriMbA study. History of CRO colonization before MICU admission was queried retrospectively from the Illinois XDRO Registry (xdro.org), with the limitation that most reports available during the study period were restricted to CRE. We evaluated each predictor’s independent association with admission CRO status and combined variables in a planned logistic regression modeling approach. Results: CRO colonization was detected in 37 (2.6%; including 26 CRE, 10 CRPA, and 1 patient co-colonized with CRE and CRAB) of 1,423 unique MICU admissions. In univariate analyses, presence of a tracheostomy (OR 9.32, 95% CI 4.29-20.27), presence of a pressure ulcer (OR 3.07, 95% CI 1.42-6.64), transfer from an external healthcare facility (OR 1.97, 95% CI 1.02-3.82), and prior CRO history reported to the Illinois XDRO Registry (OR 72.96, 95% CI 25.83-206.07) were associated with higher odds of CRO colonization. A model combining these variables improved the predictive capability (AUC 0.73) (Table). Prior CRO history reported to the Illinois XDRO Registry identified 27% of CRO cases, with a number needed to screen (NNS) of only 2 patients. Adding tracheostomy, pressure ulcer, and external facility transfer together improved detection of admission CRO cases to 68%, with an NNS of 20 patients (Figure). Conclusion: In a region with well-established inter-facility communication of CRO history via the Illinois XDRO Registry, the addition of screening patients with a tracheostomy, pressure ulcer, or transfer from an external facility may improve early identification of CRO carriage at time of MICU admission.
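The number needed to screen (NNS) figures follow directly from the reported detection fractions. The arithmetic below reconstructs them; the flagged-patient counts are inferred from the reported NNS values and are approximate, not taken from the study tables.

```python
total_cases = 37  # CRO carriers detected among 1,423 admissions

def nns(flagged: int, detected: int) -> float:
    """Patients a strategy flags for screening per true carrier found."""
    return flagged / detected

# XDRO registry alone: 27% of carriers detected, NNS ~2
detected_xdro = round(0.27 * total_cases)        # ~10 carriers
print(nns(flagged=20, detected=detected_xdro))   # -> 2.0

# Registry + tracheostomy + pressure ulcer + external transfer:
# 68% of carriers detected, NNS ~20
detected_combo = round(0.68 * total_cases)       # ~25 carriers
print(nns(flagged=500, detected=detected_combo)) # -> 20.0
```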
Background: The Nebraska (NE) Infection Control Assessment and Promotion Program (ICAP) is supported by the Nebraska DHHS Healthcare-Associated Infection (HAI) program via a CDC grant and works to assess and improve infection prevention and control (IPC) programs in all types of healthcare facilities. CDC recommends that outpatient healthcare facilities (OHFs) develop and maintain IPC programs; however, little is known about the infrastructure of IPC programs in OHFs. NE-ICAP performed onsite assessments to review the implementation of best practice recommendations (BPRs) in these programs. Method: Onsite IPC assessments were conducted in OHFs from January 2020 to February 2024. The assessment questions were based primarily on the CDC 2016 Infection Control Assessment and Response (ICAR) tool, complemented by the CMS Hospital Infection Control Worksheet. Assessments included interviews and onsite observations. A total of 66 BPRs were assessed for implementation. Descriptive statistics were calculated using Microsoft Excel for assessment responses and demographic information. BPRs were classified based on hospital affiliation, accreditation status (based on certification by recognized accrediting bodies), and urban-rural designation (based on USDA rural-urban commuting area codes). The chi-square test for independence was performed in SPSS 20 to assess for statistically significant differences across these categories using a threshold of p < 0.05. Result: A total of 19 OHFs had onsite assessments. Of these, 42.1% had external accreditation, 77.8% had at least one individual trained in infection prevention regularly available, and 36.8% were considered urban (Figure 1). Domains with the lowest compliance (percentage of BPRs in place) included injection safety (48.8%), device reprocessing (49.7%), and personal protective equipment (51.8%). Notable BPRs associated with less than 35% compliance are listed in Figure 2. Accredited facilities demonstrated greater compliance with BPRs related to device reprocessing. Conclusion: Important IPC gaps exist in OHFs. Onsite assessments are crucial for evaluating IPC program infrastructure and highlighting areas for improvement. Further studies are needed to understand why accreditation is associated with better compliance with BPRs and the factors contributing to its success.
Introduction: Dust burden in healthcare institutions has been associated with invasive fungal disease (IFD), which causes significant morbidity and mortality in immunocompromised patients. Systematically evaluating the impact of architectural changes on air particulate concentration (APC) could identify risk reduction strategies.
Objectives: We estimated changes in APC in units adjacent to a temporary hard physical barrier erected in a previously open hospital space. We propose a model for evaluating the impact of temporary architectural alterations on APC in healthcare settings. Methods: Barriers were erected in an open area of a hospital for four weeks. The barrier partitioned the oncology floor from the atrium, which houses the emergency department waiting area. Continuous APCs were measured in multiple locations before and after the barrier was erected. We conducted an interrupted time series analysis of the daily mean and maximum APCs, excluding the period during which the barrier was being installed. As a control, the same analysis was conducted on a remote location of the hospital. Results: A topographical representation of the impacted area is included in Figure 1. Regions A and B are in hallways adjacent to the barriers, and region C is in a patient care area, separated from the barriers by an automatic door. The control region was the cafeteria, which is separated from the barrier space by several hundred feet. After barrier creation, there was an immediate APC reduction in region A from the predicted mean APC of 2 µg/m3 to 1 µg/m3 (difference of 0.8 µg/m3, p=0.01) (Figure 2). While the barrier was in place, there was a further significant reduction in APC in region A of 0.01 µg/m3 per day (p < 0.001). There was no significant change in APC in regions B and C after the barrier was erected or while it was in place. In the control region, there was no significant change in APC either at barrier placement or afterwards. There was no change in the maximum APC at any of the measurement locations. Discussion: Our analysis demonstrated a change in APC at an adjacent area following erection of the barrier; however, APCs were not significantly changed in patient areas. This model could help objectively evaluate changes in particulate concentration. While this analysis cannot predict changes in IFD incidence, it could inform whether permanent architectural changes might reduce APC. Conclusions: We propose a model to evaluate changes in APCs from temporary architectural changes, which could inform permanent architectural changes.
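The interrupted time series described in the Methods is conventionally fit as a segmented regression with an immediate level-change term and a post-intervention slope term. The sketch below shows that form on a simulated series; the model specification and data are illustrative assumptions, since the abstract does not give the exact equation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = np.arange(60)
barrier_day = 30                          # barrier erected on day 30
post = (days >= barrier_day).astype(int)  # 1 after the barrier is up
days_post = np.where(post, days - barrier_day, 0)

# Simulated daily mean APC with a level drop and a post-barrier slope
apc = 2.0 - 0.8 * post - 0.01 * days_post + rng.normal(0, 0.1, 60)
df = pd.DataFrame({"apc": apc, "day": days,
                   "post": post, "days_post": days_post})

# post = immediate level change; days_post = change in daily trend
res = smf.ols("apc ~ day + post + days_post", data=df).fit()
print(res.params[["post", "days_post"]])
```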
Background: Surgical site infections (SSIs) are a serious complication following surgery. The emergence of multidrug-resistant pathogens has diminished the effectiveness of traditional antimicrobials, necessitating a new approach to prevention and treatment. We are developing an innovative device (xIP) that uses UV-C to inactivate pathogens in surgical incision sites, mitigating the risk of developing an SSI.
Irradiation in the UV-C range (200-280 nm) is known to inactivate surface and airborne pathogens by damaging nucleic acids. However, there is limited research on its effectiveness for surgical sites. Methods: A Krypton-Chloride Excimer (KrCl*) lamp (λpeak = 222 nm), a pulsed Xenon (PX) emitter (broad spectrum), and a UV-C LED (λpeak = 282 nm) were evaluated. Inactivation of E. coli ATCC 29425 and MRSA USA300 was determined by in vitro exposure to UV-C at doses of 0 (control), 2, 5, 10, 15, and 20 mJ/cm2. Dosing was controlled by measuring irradiance (mW/cm2) from each lamp and calculating the time to reach desired exposure levels.
Microbial suspensions of log-phase cultures were pelleted and resuspended in phosphate-buffered saline three times and diluted to 10^7 CFU/mL. After UV exposure, suspensions were plated on an agar substrate using a grid-based method. After incubation for 48 hours at 37°C, remaining viability was determined. Results: The PX and KrCl* emitters exhibited ≥5-log reductions for both microorganisms, while the LED showed 4- and 4.5-log reductions against E. coli and MRSA, respectively. PX demonstrated the highest inactivation efficiency (log reduction per unit dose), followed by KrCl* and the LED. Conclusions: In vitro data suggest that surgical sites could be effectively treated in less than a minute with a small hand-held device and in less than 10 seconds with a larger device. Inactivation of MRSA using a superficial wound model in hairless SKH1-Elite mice (Charles River strain code 477) is in progress. In silico modelling using optical ray tracing is in progress to understand the impact of the wound and skin micro-environment on the performance of the device. These data will inform ex vivo testing using porcine or cultured human skin (EpiDerm FT) models to evaluate performance in different wound types, including incisions, abrasions, and burns, as well as the impact of fluids like saline and blood. Development of the xIP device is underway in collaboration with healthcare professionals to produce a product that is effective, user-friendly, and fits into current practice. Upon successful completion of a prototype device, clinical efficacy will be explored.
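The dosing arithmetic in the Methods is worth making explicit: delivered dose (mJ/cm2) is irradiance (mW/cm2) multiplied by exposure time (s), and efficacy is reported as a log10 reduction in viable counts. The worked example below uses an illustrative irradiance value, not a measured one.

```python
import math

def exposure_time_s(target_dose_mj_cm2: float, irradiance_mw_cm2: float) -> float:
    """Seconds of exposure needed to deliver a target UV-C dose."""
    return target_dose_mj_cm2 / irradiance_mw_cm2

def log_reduction(n0_cfu_ml: float, n_cfu_ml: float) -> float:
    """Log10 reduction in viable counts after exposure."""
    return math.log10(n0_cfu_ml / n_cfu_ml)

# e.g., a 20 mJ/cm2 dose from a hypothetical 2 mW/cm2 source takes 10 s
print(exposure_time_s(20, 2.0))   # -> 10.0
print(log_reduction(1e7, 1e2))    # -> 5.0 (a "5-log" reduction)
```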
Background: Traditional infectious disease surveillance data have significant lag times, limiting their usefulness for infection cluster detection in healthcare settings. Digital twin spatial representation, electronic healthcare data integration, and surveillance automation allow for timely cluster detection and facilitate faster outbreak mapping and contact tracing, better informing infection prevention practice. Method: The 4-Dimensional Disease Outbreak Surveillance System (4D-DOSS) is an automated infectious disease surveillance system developed at Singapore General Hospital (SGH), a 2,000-bed tertiary healthcare institution. Electronic patient data (bed allocation and laboratory test results) are integrated onto a digital twin of SGH, and surveillance algorithms are applied for routine surveillance and contact tracing. 4D-DOSS was operationalized in SGH and National Heart Centre Singapore (NHCS) on August 1st, 2024. Active surveillance for carbapenemase-producing Enterobacterales (CPE) in SGH and NHCS includes contacts of inpatients with CPE carriage. Contact tracing for CPE is done in 4D-DOSS. Primary and secondary contact tracing are algorithmically automated. Spatial and temporal patterns are analyzed to understand transmission networks in outbreaks. Automated email alerts can be sent to clinicians to notify them of significant test results. Results: Contact tracing typically takes two hours per index patient using traditional methods. Contact tracing for CPE using 4D-DOSS takes five minutes per index patient, and multiple index patients can be traced in a single run. Based on about 50 combined COVID-19, CPE, and VZV exposure events per week in 2023, at 1.92 hours saved per exposure event, there would be a saving of approximately 648 FTE-days per year. Between August 1st, 2024 and December 31st, 2024, there were eight VRE, eight CPE, and 17 acute respiratory viral infection (RVI) clusters in inpatient wards. Selected clusters were reviewed during weekly epidemiology rounds to better understand the transmission network. Outbreak mapping of infection clusters using traditional methods can take up to two days, whereas each cluster can be analyzed in 4D-DOSS in under one hour. If four outbreaks are mapped per year, at 47 hours saved per outbreak mapped, the estimated saving is 24 FTE-days per year.
Since the last week of December 2024, 4D-DOSS has also been configured to send email alerts for acute RVI in patients in a selected ward. Seven alerts were received in the first week of implementation. Conclusion: This comprehensive digital twin-enabled infectious disease surveillance platform enabled efficient contact tracing and outbreak mapping and automated surveillance alerts, facilitating timely infection prevention measures. This can potentially improve patient outcomes.
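The reported time savings can be reconstructed from the abstract's own figures if one working day is taken as roughly 7.7 hours, the unit that makes both totals internally consistent; that conversion is an assumption, not stated in the abstract.

```python
HOURS_PER_FTE_DAY = 7.7  # assumed working hours per FTE-day

# Contact tracing: ~50 exposure events/week, 1.92 h saved per event
events_per_year = 50 * 52
hours_saved = events_per_year * 1.92
print(hours_saved / HOURS_PER_FTE_DAY)  # -> ~648 FTE-days/year

# Outbreak mapping: 4 outbreaks/year, 47 h saved per outbreak
print(4 * 47 / HOURS_PER_FTE_DAY)       # -> ~24 FTE-days/year
```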
Background: The pharmaceutical industry is estimated to have a larger environmental footprint than the automotive industry. Discarded and unused doses of pharmaceuticals generate financial waste and pollution, and exacerbate antibiotic shortages. The antibiotic daptomycin is dispensed in standard-sized single-use vials and dosed based on patient weight. Residual daptomycin left in the vial after dose preparation must be disposed of and cannot be used for another patient. We hypothesized that use of a daptomycin dosing nomogram would reduce daptomycin waste, environmental impact, and financial costs. Methods: We performed a retrospective chart review quantifying daptomycin waste, defined as daptomycin disposed of unused, at Harbor-UCLA Medical Center, a 400-bed Level 1 Trauma Center, from 1/1/2023 to 12/31/2023. We then adjusted dosing using a daptomycin dosing nomogram. We modeled the difference in daptomycin waste (mg of daptomycin disposed of unused), pharmaceutical waste (weight of excess daptomycin vials required due to wasted antibiotic), and cost between the two dosing strategies. Our model assumed a daptomycin vial weight of 16.8 g and a cost of $30 per 500 mg daptomycin vial. We conservatively estimated pharmaceutical waste as waste only from daptomycin vials, ignoring all other supplies and materials necessary to prepare daptomycin. Results: During the 1-year period at our Medical Center, 138,882 mg of daptomycin was wasted. This level of daptomycin waste equates to 4,671 g of excess pharmaceutical waste and $8,332 spent on unused, discarded daptomycin. In our model, we found that nomogram implementation would have reduced mean monthly daptomycin waste from 11,002 mg to 1,387 mg (p<0.001). This reduction would have decreased the proportion of daptomycin wasted from a mean of 19% to 3% of all consumed daptomycin (Figure 1). Nomogram use would also have saved $7,333 and averted 4,111 g of pharmaceutical waste in 2023. Conclusion: A daptomycin dosing nomogram would have prevented 122,322 mg of daptomycin from being wasted and saved over $7,000 at a 400-bed Medical Center over one year. Given that the 4,111 g of pharmaceutical waste averted is a conservative estimate that ignores waste from other supplies and materials as well as upstream waste and emissions from daptomycin manufacturing, the overall environmental impact prevented by nomogram use is likely significantly higher. Our findings demonstrate that intentionally designed dosing strategies aimed at reducing drug waste can save hospital costs and reduce the environmental footprint of clinical care. When implemented at large health systems, these strategies are likely to result in substantial cost savings and a reduction in the negative environmental impact associated with pharmaceuticals.
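Because vials are single-use, the waste model reduces to simple arithmetic: each prepared dose discards the unused remainder of the last vial opened. The sketch below encodes that model with the abstract's constants; the example dose is illustrative, and small differences from the published totals reflect per-dose vial rounding.

```python
import math

VIAL_MG = 500         # daptomycin per vial
VIAL_COST_USD = 30    # cost per vial (abstract assumption)
VIAL_WEIGHT_G = 16.8  # weight per vial (abstract assumption)

def dose_waste_mg(dose_mg: float) -> float:
    """Daptomycin discarded when preparing a single dose."""
    vials_opened = math.ceil(dose_mg / VIAL_MG)
    return vials_opened * VIAL_MG - dose_mg

def annual_impact(total_waste_mg: float) -> tuple[float, float]:
    """(excess vial weight in g, cost in USD) attributable to waste."""
    excess_vials = total_waste_mg / VIAL_MG
    return excess_vials * VIAL_WEIGHT_G, excess_vials * VIAL_COST_USD

print(dose_waste_mg(660))      # a hypothetical 660 mg dose wastes 340 mg
print(annual_impact(138_882))  # ~(4,666 g, $8,333), near the reported totals
```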
Background: According to the Centers for Disease Control and Prevention (CDC), carbapenem-resistant Enterobacterales (CRE) are an urgent public health threat. The CDC states the most common or ‘Big Three’ CRE are Escherichia coli, Enterobacter species, and Klebsiella species. States look to the ‘Big Three’ for guidance when setting reportable condition criteria for CRE. Evaluating trends in the non-‘Big Three’ genera is critical to ensure surveillance efforts are focused on priority targets. Thus, CRE genera trends were evaluated to verify the fitness of CRE surveillance reporting recommendations. Method: The Antimicrobial Resistance Laboratory Network (ARLN) Southeast region (SER) includes Alabama, Florida, Georgia, Louisiana, Mississippi, Tennessee, and Puerto Rico. All CRE are reportable in Tennessee (TN), and isolate submission to the ARLN is required. Other jurisdictions submit CRE to the TN regional lab. CRE cases submitted to the ARLN from 2018-2023 were analyzed. CRE cases were defined as an Enterobacterales organism resistant to one or more carbapenems, excluding imipenem for Proteus sp., Providencia sp., or Morganella sp. due to intrinsic resistance. Data were cleaned in SAS v9.4 to provide descriptive CRE statistics. Result: The top three genera for TN fluctuated among Enterobacter sp., Klebsiella sp., Proteus sp., and Escherichia sp. In 2022, Proteus sp. (n=132) had twice the incidence of Escherichia sp. (n=65) in TN. There was an overall increasing trend for Proteus sp. from 2018-2023. The largest increase of Proteus sp. in TN was seen between 2018 (n=23) and 2021 (n=183). However, the prevalence sharply decreased between 2021 (n=183) and 2023 (n=12). Proteus sp. accounted for 17% (n=627) of all CRE cases (n=3,625) in TN from 2018-2023, while the ‘Big Three’ accounted for 72% (n=2,628). In contrast, Proteus sp. was only 3% (n=35) of all CRE cases (n=1,400) in the SER excluding TN from 2018-2023, compared to 89% (n=1,250) for the ‘Big Three’. Conclusion: CRE surveillance identified an increased overall prevalence of Proteus sp. in TN between 2018 and 2022 despite its not being included in the ‘Big Three’. While there was a large increase of Proteus sp. observed in 2021, the increase was limited to TN, and the subsequent decline suggests it was an outlier. Jurisdictions outside of TN often only submit carbapenemase-producing CRE to the ARLN, as not all jurisdictions have CRE as a reportable condition. Results of this analysis suggest the SER should continue to monitor CRE and Proteus sp. to determine whether there is an increasing overall trend, to better inform isolate submission strategies.
Background: Duplicative laboratory testing is prevalent in health care. Prior research on repeat urine cultures showed that when a negative index culture is repeated within 48 hours, fewer than 5% of repeat urine cultures show new bacteriuria. We evaluated the diagnostic yield of repeating urine cultures at longer time intervals, and of repeating a positive urine culture. Methods: We conducted a retrospective study of adult inpatients at Stanford Healthcare who had more than one urine culture collected during hospitalization between January 2023 and February 2024. We included urine cultures collected with or without urinary catheters; nephrostomy tube specimens were excluded. Urine cultures were classified as index or repeat. We analyzed the diagnostic yield of the repeat urine culture, defined as the percent of repeat urine cultures that detected a new bacteriuria not detected in the index culture. Bacteriuria was defined as growth of a bacterial species in quantities >100,000 CFU/mL. A negative urine culture was defined as one without bacteriuria meeting this threshold. Sensitivity analyses used 10,000 CFU/mL as the threshold for significant bacteriuria. Results: Overall, 6,955 urine cultures were performed from 6,058 patients. Of these, 864 (12%) urine cultures were repeats. Of the corresponding index cultures, 75% were negative. The median time to repeat urine culture was 4 days. When negative index cultures were repeated at 0-3 days, the diagnostic yield for detecting a new bacteriuria was only 9%. Diagnostic yield at 3-6 days was 10%, not significantly higher than at 0-3 days (p=0.620). Diagnostic yield at 6-9 days was 19%; this increase was significant compared with the 0-3 day group (p=0.014). When positive index cultures were repeated at 0-3 days, the diagnostic yield for detecting a new bacteriuria was only 8%. Diagnostic yield at 3-6 days was also 8%. Yield increased significantly to 15% at 6-9 days from the index culture (p=0.013). When the threshold for significant bacteriuria was lowered to 10,000 CFU/mL, more bacteriuria was detected overall, but primarily gram-positive organisms. Whether the threshold for significant bacteriuria was 100,000 CFU/mL or 10,000 CFU/mL, the rate of detection of new gram-negative bacteriuria was similar and remained less than 10% until 6-9 days from the index culture (Figure 1). Conclusions: Among inpatients, most urine cultures repeated at less than 6 days provide redundant information. This unnecessary retesting offers an opportunity for diagnostic stewardship.
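Diagnostic yield as defined in the Methods is a simple proportion. The one-liner below makes the definition concrete; the counts are hypothetical, chosen only to reproduce the reported 9% yield for the 0-3 day window.

```python
def diagnostic_yield(new_bacteriuria: int, repeats: int) -> float:
    """Percent of repeat cultures showing bacteriuria absent from the index."""
    return 100 * new_bacteriuria / repeats

print(f"{diagnostic_yield(27, 300):.0f}%")  # -> 9% (hypothetical counts)
```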
Youth exposed to poverty and adversities like violence are at higher risk of mental health problems (MHP), but whether antipoverty interventions can reduce this risk remains unclear. We examined the association between participation in the Brazilian Cash Transfer Program (BFP) and mental health of children/adolescents exposed to different levels of adversity.
Methods
Observational study using nearest-neighbor propensity score matching to compare BFP participants and non-participants from the Itaboraí study, a community-based cohort of 1,189 children/adolescents (6–15 years) assessed at two waves (mean interval: 12.9 months). Measures included the Child Behaviour Checklist (CBCL) externalizing, internalizing, and total problems scales; an adversity score derived from a confirmatory factor analysis on violence victimization at home (WorldSAFE), school (threat/maltreatment/being chased by peers), and community (Survey of Exposure to Community Violence), and stressful life events (UCLA Posttraumatic Stress Disorder Reaction Index); and BFP exposure for at least 12 months (yes/no). Latent change score models tested whether BFP participation predicted changes in CBCL T-scores, moderated by adversity levels.
Results
A total of 330 BFP participants were matched with 330 non-participants with similar sociodemographic characteristics. Decreases in total (b=−0.124, SE=0.034, p<0.001), externalizing (b=−0.122, SE=0.036, p=0.001), and internalizing problems (b=−0.141, SE=0.033, p<0.001) between baseline and follow-up were observed among BFP participants exposed to higher levels of adversity compared with non-participants.
Conclusions
BFP participation was associated with reduced MHP only among children/adolescents facing high adversity, suggesting the program may help break the cycle between poverty and mental health problems—but benefits are concentrated among the most vulnerable.
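Nearest-neighbor propensity score matching of the kind described in the Methods can be sketched in a few lines. The example below is a minimal 1:1 matching-with-replacement illustration on simulated data; the covariates and matching specification are assumptions and do not reproduce the study's exact procedure.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1189  # cohort size from the abstract; covariates are simulated
df = pd.DataFrame({
    "bfp":   rng.binomial(1, 0.4, n),  # program participation
    "age":   rng.integers(6, 16, n),
    "ses":   rng.normal(0, 1, n),      # socioeconomic score (placeholder)
    "cbcl0": rng.normal(50, 10, n),    # baseline CBCL T-score (placeholder)
})

# 1) Propensity score: probability of participation given covariates
X = df[["age", "ses", "cbcl0"]]
df["ps"] = LogisticRegression().fit(X, df["bfp"]).predict_proba(X)[:, 1]

# 2) Match each participant to the nearest non-participant on the score
treated, control = df[df.bfp == 1], df[df.bfp == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]
print(len(treated), "participants matched to", len(matched_controls), "controls")
```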
Background: In children, penicillin allergy labels (PALs) are pervasive and persistent, despite linkage to suboptimal antibiotic selection with higher risk of side effects, increased length of hospitalization, and increased risk of harm throughout life. Up to 10% of children are labeled with PALs, yet over 95% tolerate the medication when tested. Parents might not always know that PALs are over-reported or incorrectly diagnosed. We aimed to examine parent and guardian perceptions of PALs and their attitudes towards delabeling. Method: We invited all English- and Spanish-speaking parents of children presenting to two pediatric primary care locations in the northeast U.S. to participate in an online, investigator-developed survey. Survey recruitment was passive, with parents discovering the survey through English and Spanish posters in the waiting and examination rooms. The survey included an initial screening question to identify whether a penicillin allergy was present. If the parent answered “yes,” they were instructed to proceed with survey completion. The survey consisted of 32 questions (7 reaction history, 9 perceptions, 5 provider interaction, 4 general knowledge, 6 demographics, and 1 open-ended). We used descriptive statistics to analyze the data. Result: After screening, we received 54 completed responses. Most respondents had a college degree or higher (75%). Most reported reactions occurred at ≤2 years of age (55%), and the predominant symptom reported was rash (92%). Twenty-nine percent of patients were evaluated in an urgent care or emergency room. Parents reported being very concerned by the reaction to penicillin (79%). When asked if their child would have a reaction if re-prescribed penicillin, none disagreed. Only 38% did not think allergies were permanent. Most families had not been offered penicillin testing (82%), although 67% expressed interest in the testing process, and 64% planned to inquire about testing following our survey. The majority (89%) would not agree to removing PALs without testing, citing fear that the child would have an allergic reaction if given penicillin (60%) and needing more information (25%) as reasons for declining PAL removal without testing. Conclusion: Among this highly educated population, parents expressed concern about the initial reaction, perceived that the reaction would recur with future penicillin use, and stated interest in testing, but were reluctant to delabel from history alone. Parents are untapped partners in delabeling; interventions are necessary to enhance parental understanding of the impact of PALs and the potential for delabeling with low-risk allergies.
Background: Serratia marcescens, a recognized environmental pathogen, often contaminates hospital water systems. Infections are typically exogenous, with occasional human reservoirs. NICU outbreaks can result in serious nosocomial infections, including meningitis, bacteremia, and conjunctivitis. Sources include contaminated medical devices, solutions, and hospital water systems, specifically sink traps and outlets, with transmission occurring directly or indirectly via aerosolization. Outbreak Description: Between June 2023 and December 2024, an outbreak of S. marcescens occurred in our hospital’s neonatal facility (Figure 1). The facility has 3 main halls - the NICU, intermediate, and “cradle” rooms - along with a breastfeeding room, a medication room, and an incubator cleaning room, containing 17 sinks in total (Figures 2 and 3). The outbreak was identified following two Serratia bacteremia cases in early 2024 and a retrospective review revealing 8 positive ocular cultures since mid-2023. After the outbreak investigation was initiated, infection control measures enhanced, and engineering repairs conducted, the case rate decreased significantly. However, three additional bacteremia cases and a positive urine culture were subsequently identified. Infection Control Measures: Control efforts targeted two reservoirs: patients (via healthcare worker transmission) and the environment. Key measures included reinforcement of hand hygiene, aseptic breastfeeding techniques, contact precautions, and environmental disinfection protocols. Bathing was standardized using sterile water. Environmental Sampling and Investigation: Given Serratia’s known association with waterborne contamination, environmental sampling focused on sink traps and outlets across all areas, revealing persistent contamination despite repeated treatment with concentrated chlorine (Table 1). Epidemiological data identified temporal and spatial correlations between contaminated sinks and clinical cases, notably involving faulty plumbing adjacent to NICU sink 3/4 (Table 2). Water leakage and back pressure from a blocked pipe were hypothesized to cause aerosolization from the connected sink, leading to infections. Microbiological biotyping clustered clinical and environmental isolates, further implicating aerosolized contamination, including from a sink in the incubator cleaning room used to dispose of hospital wastewater (Figure 4). Outbreak Control: Despite pipe repairs and decontamination, sink contamination recurred due to Serratia’s ability to colonize biofilms in water pipes. Expert consultation emphasized “sink hygiene,” including minimizing equipment storage near sinks, distancing neonates and incubators from sinks, and avoiding procedures adjacent to sinks. Outcome: This multifactorial approach significantly reduced clinical cases. Continuous environmental monitoring and education aim to eliminate Serratia as a recurring threat in our neonatal facility and the broader hospital environment. Conclusion: This outbreak highlights the challenges of controlling waterborne pathogens in hospital settings and underscores the importance of combining engineering controls, environmental decontamination, and behavioral interventions to achieve sustained control.
Background: Procedures performed at Ambulatory Surgical Centers (ASCs) have increased over the last decade in the United States. In Tennessee, surgical site infection (SSI) outbreaks in ASCs have been increasingly detected. Still, there is no mandated SSI reporting for ASCs through the National Healthcare Safety Network (NHSN) as there is for Acute Care Hospitals (ACHs). In 2023, the Tennessee Department of Health’s Healthcare-Associated Infections (TDH HAI/AR) program responded to an outbreak of 14 nontuberculous mycobacteria (NTM) periprosthetic joint infections at an ASC. Despite extrapulmonary NTM being a reportable condition in Tennessee, detection of this outbreak was delayed due to gaps in reportable conditions practices at this ASC. Here, we evaluate how NHSN reporting could have impacted the surveillance and detection of infections for this investigation. Methods: Extrapulmonary NTM cases were detected through clinical laboratory and provider reporting. Chart abstractions were performed for cases by HAI/AR epidemiologists using a tool adapted from the Centers for Disease Control and Prevention (CDC). Infections were evaluated using standardized 2023 and 2024 NHSN definitions, depending on the infection date of event. Results: Cases were initially reported as described above; five cases were reported together in June 2023, two months after the first positive specimen. Eight (57%) cases met the NHSN definition for surgical site infection; four (29%) met the criteria for Deep Incisional SSI, and four (29%) met the criteria for Organ/Space SSI. Six cases (43%) were not detected within the 90-day surveillance window; however, three of these had documented evidence of superficial infection within those 90 days. Conclusions: Despite the slow progression of NTM infection, most infections in this outbreak would have been detected through NHSN surveillance. Even in cases where NHSN SSI criteria were not met, reviewing records and entering data within the NHSN framework may have facilitated faster facility-level detection. Although the nature of NHSN reporting is not suited for rapid detection of outbreaks, the standardized definitions, regular records reviews, and established data entry system would benefit ASC surveillance at facilities such as the one described here, which had no formal mechanism for tracking infections. Additionally, the collection of summary data required through NHSN would better identify reporting gaps prior to outbreak occurrences. The availability of SSI data for ASCs would help public health authorities identify and assist facilities in assessment and prevention activities. Patient safety would thus likely benefit from enhanced surveillance of ASCs through voluntary or mandated NHSN reporting.
Background: Since 2013, the Australian Hospital National Antimicrobial Prescribing Survey (Hospital NAPS) has provided a standardized framework for hospitals to assess the quality of antimicrobial prescribing. As part of the program’s continuous quality improvement, a revised appropriateness algorithm was developed and is scheduled for implementation in 2025. This study aims to validate this algorithm by evaluating accuracy and inter-rater reliability (IRR) in assessing guideline concordance and appropriateness. Methods: A prototype of the revised assessment algorithm was developed using Qualtrics®, including an assessment of antimicrobial-level guideline concordance, appropriateness and reasons for non-optimal prescribing, as well as overall indication-level guideline concordance and appropriateness. An eLearning module was developed to ensure consistency of training for assessors. Fourteen clinical vignettes (ten general and four specialist) were developed across a range of real-world clinical scenarios and with varying levels of complexity. Gold-standard assessments were determined by an independent group of infectious diseases (ID) and antimicrobial stewardship (AMS) clinicians. Existing Hospital NAPS users were invited to participate. General vignettes were split into two equal groups and assigned to assessors in an alternating manner. Those with expertise in haematology/oncology or paediatrics were assigned additional specialist vignettes. Results were analyzed for accuracy against the gold standard, and for IRR using Fleiss’ kappa coefficient. Results: A total of 102 assessors, across a range of professions, remoteness areas and years of auditing experience, completed their assigned vignettes. Assessors correctly identified the antimicrobial regimen for auditing in 91.9% of assessments; incorrectly identified assessments were excluded. A total of 681 antimicrobial-level and 534 indication-level assessments were analyzed. Figure 1 summarizes the accuracy and IRR for the main outcome measures of guideline concordance and appropriateness. Accuracy and IRR were higher for appropriateness compared with guideline concordance, and at the overall indication level compared with the antimicrobial level. Auditors correctly identified all gold-standard reasons for non-optimal prescribing in 68.3% of assessments. Across all measures, accuracy and IRR were higher amongst assessors with specialist ID/AMS experience compared with those without, from metropolitan compared with regional settings, and amongst those with 4 or more years of auditing experience. Pharmacists without ID/AMS expertise scored as highly as doctors and pharmacists with ID/AMS expertise. Conclusion: The revised Hospital NAPS algorithm provides a valid measure of guideline concordance and appropriateness. Higher accuracy and IRR were observed for appropriateness compared with guideline concordance, highlighting the value of appropriateness as a measure for stewardship surveillance in reflecting quality of patient care.
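Fleiss' kappa, used here for inter-rater reliability, is available directly in statsmodels. The toy example below shows the computation on an invented ratings matrix; it is not study data.

```python
import numpy as np
from statsmodels.stats import inter_rater as irr

# rows = assessments of a vignette, columns = assessors,
# values = category assigned (0 = not appropriate, 1 = appropriate)
ratings = np.array([
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [1, 1, 1, 1],
])

# aggregate_raters converts raw ratings into per-subject category counts
table, _ = irr.aggregate_raters(ratings)
print(irr.fleiss_kappa(table, method="fleiss"))
```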
Background: In response to a second multistate extrapulmonary tuberculosis (TB) outbreak in 2023, linked to contaminated viable bone allografts, public health authorities conducted contact investigations (CIs) to assess TB transmission among healthcare personnel (HCP) potentially exposed to contaminated grafts during surgeries or to draining infected surgical sites. Method: An HCP-CI was initiated after Centers for Disease Control and Prevention (CDC) notification that three San Diego County hospitals had received contaminated allografts. Healthcare facilities identified and tested potentially exposed HCP and reported results to the health department. We reviewed the CI processes of each facility and outlined challenges encountered and solutions implemented. Result: HCP-CIs were conducted at five facilities: three hospitals where nine patients received contaminated product; a hospital where revision surgery was performed; and a skilled nursing facility (SNF) that provided postoperative care. We encountered several challenges during the CIs. First, 234 HCP were potentially exposed based on a framework used in a prior (2021) TB bone allograft investigation. We advised a tiered approach with targeted follow-up of 72 HCP with high-risk exposures (for example, staff directly handling the allograft). Second, the SNF CI was complicated by administrative staff turnover, the absence of an SNF point-of-contact, and investigations involving different HCP during three separate admissions. Substantial public health resources were required over 7 months, including a site visit to interview HCP with positive TB tests and obtain accurate CI information. Third, obtaining CI data was slow and inconsistent. Reasons included the lack of a standardized data collection tool during the initial phase of the CI, fragmented information gathering across clinical departments within facilities, lack of responses from licensed practitioners who were not employees (e.g., physicians), variability in notification of exposed HCP for testing, and follow-up of HCP who were not tested. HCP-CI results were not readily available when requested, leading to confusion, repeated requests, and duplication of efforts. We countered these challenges by leveraging established (or new) relationships with facility leadership and involving multidisciplinary staff to obtain results. Public health recommended that facilities contact non-responsive practitioners with high-risk exposures by certified letter notifying them of the exposure and testing recommendations. Conclusions: Close collaboration, communication, and coordination between public health and each healthcare facility’s clinical services and leadership were critical in this HCP-CI. Utilizing a tiered approach streamlined the CI. This complex HCP-CI spanned multiple facilities and could have benefited from early identification of consistent points-of-contact at each facility and use of a standardized data collection tool.
Background: Candida auris (CA), first recognized in the US in 2013, can be resistant to all major antifungal agents, limiting treatment options. To decrease its spread, guidelines indicate that patients with CA, if admitted to hospital, should be placed in isolation and considered positive indefinitely. Screening around newly identified patients is recommended. Methods: We reviewed the history of CA in our facility, including colonizations, infections, and screening/isolation protocols, from 2019-2024. Results: In late 2019, a patient with a CA infection was transferred to our hospital. It was late 2022 before two additional patients with CA were admitted to our facility from a long-term acute care facility (LTAC) that had a newly recognized cluster of CA cases/colonizations. Screening in our facility did not identify additional cases/colonizations (n=49) at that time. Patients with known infection or colonization were placed in contact isolation. Additional LTACs/LTCFs were recognized from which patients with CA were routinely identified and admitted to our facility. Patients from these facilities were deemed high risk (HRPs) and were preemptively placed in isolation and screened for CA. From March 2023 through August 2024, patients with CA or HRPs were placed on a cohorted ward in contact isolation. Cohorted isolation was continued for high-risk but screening-negative patients until three screening tests were negative and they were no longer at the high-risk LTAC/LTCF. Providers were notified by email or through the electronic record of patients’ status and reminded of the infection control measures to follow. Any patient with CA had their electronic record flagged for contact isolation in the event of readmission. Screening specimens for colonization were sent to an outside laboratory until August 2024, when in-house testing became available. With in-house testing, screening results became available in 1-2 days rather than 3-4 days. A new protocol was started that placed only patients with known positive cultures in the cohorted ward (see image). HRPs are placed in isolation wherever in the hospital they are admitted until screening results are known. If results are positive for CA, they are transferred to the cohorted ward. Additionally, rooms are cleaned with appropriate disinfectants for surfaces and floors. From December 2022 through August 2024, we identified 22 unique patients with CA from clinical cultures. Specimens included nine blood cultures, including three of hospital onset. Three cultures were from wounds, of which one was hospital onset. The remaining cultures were one from bone, one from pleural fluid, and nine from urine. A further 106 patients were identified as colonized through screening. Conclusion: Screening, isolation, and cohorting have all been tools for managing CA in our facility. Only three hospital-onset CA candidemias have been identified under these protocols.
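The revised placement protocol described above amounts to a small decision rule. The sketch below is a hypothetical encoding for illustration; the status values and function name are not from the facility's actual system.

```python
def placement(ca_status: str, high_risk: bool) -> str:
    """Illustrative bed-placement rule under the post-August 2024 protocol."""
    if ca_status == "positive":
        return "cohorted ward, contact isolation; flag record for readmission"
    if high_risk and ca_status == "pending":
        return "contact isolation in admitting unit until screening results"
    return "standard precautions"

print(placement("pending", high_risk=True))
```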
Background: Candida auris is an emerging multidrug-resistant fungus recognized as a global health threat. Despite increasing rates of colonization, no standardized protocol exists in the United States for C. auris screening upon admission. In February 2023, University of Kentucky Healthcare (UKHC) implemented a targeted C. auris screening system for select high-risk patients. Methods: This retrospective observational study was conducted at UKHC, a 1,086-bed academic medical center, using data from patients aged ≥18 years screened for C. auris between July 1, 2021, and June 30, 2024. Prior to February 2023, C. auris screening occurred only during outbreak investigations. Post-implementation, screening was expanded to include ICU admissions, patients from external facilities with wounds or tracheostomies, and patients with a history of carbapenem-resistant organism infection. Axillary and groin swabs were tested via polymerase chain reaction (PCR). Cases were classified as community-onset (CO) or hospital-onset (HO). Results: Of 13,642 C. auris tests performed, 70 positive cases were identified: 13 cases (6 CO, 7 HO) pre-implementation and 57 cases (31 CO, 26 HO) post-implementation (Figure 1). The mean age was 60.24 years, and males comprised 57.75%. The monthly positivity rate post-implementation ranged from 0% to 2.18% (mean 0.96%). Among the 70 cases, 10 (14.29%) were classified as clinical infections and 60 (85.71%) as colonization. The primary indications for C. auris screening included ICU admission (42.86%), point prevalence surveys (17.14%), and admission from external facilities with wounds (5.72%). No significant differences were observed between clinical and colonized cases by age, gender, race, or most other comorbidities. However, clinical cases were more likely to have diabetes (90% vs. 48.33%, p=0.0143) and medical device usage, including tracheostomy (80% vs. 45.00%, p=0.0404), gastrostomy tubes (90% vs. 53.33%, p=0.0293), central lines (60% vs. 41.67%, p=0.2799), and urinary catheters (60% vs. 46.67%, p=0.4348). Among the ten clinical cases, seven patients received antifungal treatment; three did not receive any treatment because the C. auris was not considered clinically significant. Thirty-day mortality was higher among clinical cases than colonized cases, but the difference was not statistically significant (30% vs. 25%, p=0.7377). Conclusions: The implementation of a targeted C. auris screening program at UKHC has provided critical insights into epidemiologic trends, patient demographics, and risk factors. Understanding these factors is essential for optimizing infection prevention strategies, refining screening protocols, and informing public health efforts to mitigate the spread of C. auris in healthcare settings.
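The abstract does not name the statistical test behind these comparisons, but a Pearson chi-square without continuity correction reproduces the reported diabetes p-value from counts reconstructed out of the percentages (9/10 clinical vs 29/60 colonized), so that is what the hedged sketch below uses.

```python
from scipy.stats import chi2_contingency

#            diabetes  no diabetes
clinical  = [9, 1]    # 90% of 10 clinical cases
colonized = [29, 31]  # 48.33% of 60 colonized cases

chi2, p, dof, _ = chi2_contingency([clinical, colonized], correction=False)
print(round(p, 4))  # -> 0.0143, matching the reported value
```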