Introduction: Within our healthcare system, hospitalists receive feedback on antibiotic prescribing via an observed-to-expected ratio (OER) calculated from days of therapy (DOT) for CDC-defined broad-spectrum, hospital-onset (BSHO) antibiotics and adjusted for patient characteristics and billing. In this sub-analysis, we quantify the impact of infectious disease (ID) consultations on the OER. Methods: For each two-month period in five hospitals, encounters were assigned to each hospitalist who billed for ≥1 day of care. An encounter was considered to involve an ID consult if an ID provider billed during the encounter. The percentage of encounters with an ID consultation (consult density) was calculated, and strata were defined by gross ratios (e.g., 1 in 3 or 1 in 4 patients). We assessed whether consult density varied over time, by facility, or by DOT. We assessed the effect of consult density on antibiotic DOT using an established linear mixed-effects model with random intercepts for both provider and facility (nested), adjusted for patient characteristics and billing. Distributions of OERs were compared among strata to evaluate how ID consultation changes OERs. Results: Between January and June 2023, 154 unique providers collectively received 458 bi-monthly OERs reflecting their care for 53,815 unique patients. Overall, 21% of hospital medicine patients were evaluated by an ID consultant during their inpatient stay; median consultation density varied among providers by facility (19%-26%, Figure 1). Multivariate models (accounting for sepsis, UTI, and renal disease) estimated significantly increased DOT for hospitalists with ~1:3 (+3.4 DOT, 95% CI 0.9-5.9) or ~1:4 (+2.7 DOT, 95% CI 0.4-5.0) patients receiving ID consults compared to hospitalists with fewer than ~1:7; however, the effect was not significant in other strata and was not linear (Table 1). Calculating the distribution of OERs before and after adjusting for consult density resulted in small changes in OERs (Figure 1b). Discussion: The frequency of ID consults affected hospitalists' BSHO-DOT in a non-linear fashion. The impact of ID consultation on prescribing metrics should be considered when building credibility for stewardship prescribing performance metrics.
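The nested random-intercepts structure described above can be sketched as follows; this is a minimal illustration with synthetic data and hypothetical column names (bsho_dot, consult_stratum, sepsis, uti, renal, facility, provider), not the authors' model code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the real provider-period dataset; all column
# names here are hypothetical placeholders, not the authors' variables.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "facility": rng.integers(0, 5, n).astype(str),
    "provider": rng.integers(0, 40, n).astype(str),
    "consult_stratum": rng.choice(["<1:7", "1:4", "1:3"], n),
    "sepsis": rng.random(n), "uti": rng.random(n), "renal": rng.random(n),
})
df["bsho_dot"] = 10 + 3 * (df["consult_stratum"] == "1:3") + rng.normal(0, 2, n)

# Random intercept for facility, with a provider variance component nested
# within facility -- the structure described in the abstract.
model = smf.mixedlm(
    "bsho_dot ~ C(consult_stratum) + sepsis + uti + renal",
    data=df,
    groups="facility",
    re_formula="1",
    vc_formula={"provider": "0 + C(provider)"},
)
print(model.fit().summary())
```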
Introduction: The healthcare sector contributes significantly to global greenhouse gas (GHG) emissions, accounting for 8.5% of total emissions in the United States alone. Infection control practices designed to prevent disease transmission contribute to this substantial carbon footprint. These practices, which include enhanced ventilation requirements, extensive sterilization processes, and laundry services, inherently increase energy consumption and GHG emissions. The approach to making healthcare more sustainable has been multipronged, including initiatives to reduce waste and energy use. We explored environmental cleaning practices and the energy use associated with ultraviolet (UV) light disinfection. Methods: A retrospective analysis was conducted on the energy consumption of three different UV light disinfection machines used at Stanford Health Care from September 2023 to August 2024. Annual run-time data were obtained from vendor-provided logs, and energy use was calculated from the wattage specifications provided with each machine. Results: We found that UV light disinfection used approximately 7,300 kWh, constituting less than 1% of Stanford Health Care's total energy use for the period. This energy consumption equates to the charge required for 3.5 round trips from San Francisco to New York City in an electric vehicle. Discussion: UV light has been widely used in healthcare over the last decade. Recent data suggest that there may be no additional benefit to UV light disinfection when other enhanced cleaning methods, such as sporicidal cleaners, are utilized; using UV light in addition to sporicidal cleaners may therefore be redundant. Infection prevention practices often incorporate redundancies given their dependence on human behavior and the high consequences of practice failures. Our evaluation showed that UV light disinfection accounts for only a modest share of energy use, so eliminating it would yield modest reductions. In the future, a life cycle analysis could be conducted to compare UV light disinfection and sporicidal cleaning methods to evaluate each practice's impact on sustainability efforts. As the healthcare industry continues to work towards reducing greenhouse gas emissions, it will be important to consider all energy reductions and any redundancies in practice.
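The underlying calculation is simply rated wattage times run time; a minimal sketch, with hypothetical wattages and run times standing in for the vendor-log values:

```python
# Energy use = rated power (W) x run time (h) / 1000 -> kWh.
# The figures below are hypothetical placeholders, not the study's
# vendor-log values or equipment specifications.
machines = {
    # name: (rated power in watts, annual run time in hours)
    "uv_unit_a": (450, 6_000),
    "uv_unit_b": (380, 5_500),
    "uv_unit_c": (500, 4_800),
}

total_kwh = sum(watts * hours / 1000 for watts, hours in machines.values())
print(f"Estimated annual UV disinfection energy use: {total_kwh:,.0f} kWh")
```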
Background: Conferences play a crucial role in the early dissemination of significant research to peers and experts within the same field. They provide a platform for receiving feedback, fostering collaborations, and refining groundbreaking findings, which can eventually be developed into full articles for publication in peer-reviewed journals. The transition of presented abstracts to full journal publications is a key metric for evaluating research productivity, quality, and dissemination. Despite this, there are limited data on the proportion of abstracts ultimately published as full articles in peer-reviewed journals. Method: All 351 abstracts presented at the SHEA Spring Conferences in 2018 and 2021 were indexed and cataloged from the 2018 online archive and the 2021 Antimicrobial Stewardship & Healthcare Epidemiology journal supplement. We then manually searched the top 20 results of both Google Scholar and PubMed to determine the publication status of each abstract as of January 10, 2025. Publication criteria included: matching at least three keywords between the abstract and any resulting manuscript, having at least one common author, and publication occurring during or after the year of abstract acceptance. Data were compiled into an Excel spreadsheet, categorizing abstracts as 'yes' or 'no' for publication, and publication rates were calculated from these categorizations. Factors associated with publication were evaluated, and publication metrics were described. Result: All 351 abstracts were analyzed. Among these, 175 (49.9%) were published as full articles in peer-reviewed journals indexed in Google Scholar or PubMed. Abstracts presented in 2021 and those presented orally had higher publication rates, though the associations were statistically nonsignificant (p = 0.06 and p = 0.66, respectively). Abstracts with authors from different institutions and those with more than six authors showed statistically significant associations with higher publication rates (p = 0.002 and p = 0.003, respectively). Infection Control & Hospital Epidemiology was the most common journal in which abstracts were ultimately published, accounting for 51 (29.1%) of the publications. This publication rate surpasses those reported in most similar studies of other internal medicine and subspecialty conferences, including IDWeek. Conclusion: Approximately half of the abstracts presented were subsequently published as full articles. Collaborative research, involving more authors and authors from different institutions, was associated with a higher publication rate. These findings highlight the strong academic impact of SHEA-presented research. Further research into the barriers to publication is warranted to improve the dissemination of conference abstracts.
Background: Urinalysis (UA) with reflex to urine culture (UARC) protocols aim to optimize diagnostic testing and reduce unnecessary antibiotic use in hospitalized patients. By limiting urine cultures to cases where initial urinalysis results meet predefined criteria, UARC protocols can minimize false-positive results and reduce overtreatment. This study examines the impact of a UARC protocol implemented across a hospital system on urine culture volume and antibiotic utilization. Methods: A UARC protocol was implemented at our institution, performing urine cultures only for UA specimens with ≥5 WBC/HPF. This interrupted time-series analysis compared the pre-implementation period (October 2022–June 2023) and the post-implementation period (January 2024–September 2024), with data elements abstracted from the electronic medical record. Antibiotic exposure within 48 hours before and 168 hours after urine specimen collection was evaluated. Comparisons were made using chi-square and Wilcoxon rank-sum tests, with p-values <0.05 considered significant. Results: A total of 107,646 urine specimens were analyzed, 49,504 in the pre-implementation period and 58,142 post-implementation. Following UARC introduction, only 51.3% of reflex orders continued on to urine culture (29,836/58,142). Overall, urine specimen orders resulting in antibiotic utilization decreased from 39.2% to 33.5% (p < 0.05). Conclusion: Implementing a UARC protocol significantly reduced urine culture volumes and antibiotic utilization, demonstrating its effectiveness in diagnostic and antimicrobial stewardship. While overall antibiotic use decreased, the unchanged treatment duration among recipients suggests complete courses were maintained, with the reduction in culture orders serving as the mechanism driving this change. These findings support UARC protocols as valuable tools for reducing antibiotic use and optimizing healthcare resources. Further research should refine reflex criteria and assess long-term clinical outcomes.
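As a rough check on the headline pre/post comparison, the reported denominators and percentages imply a 2x2 table like the one below (the treated counts are back-derived approximations, not published figures):

```python
from scipy.stats import chi2_contingency

# Antibiotic use after urine specimen orders, pre vs. post implementation.
# Treated counts are back-derived from 39.2% of 49,504 and 33.5% of 58,142.
pre_treated = round(0.392 * 49_504)    # ~19,406
post_treated = round(0.335 * 58_142)   # ~19,478
table = [
    [pre_treated, 49_504 - pre_treated],
    [post_treated, 58_142 - post_treated],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```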
Background: Hospitals experienced increased demand for acute care and specialty services during recovery from the COVID-19 epidemics. Internal analysis identified potential inaccuracies in NHSN location unit designations across a large healthcare system with 2,249 mapped NHSN locations. Findings revealed inconsistencies in how location-change decisions were made, mainly in the type of data applied: facilities had utilized finance data to determine NHSN location mapping, creating limitations. NHSN locations are defined by patient populations, are associated with baseline risk adjustments that enable performance comparisons, and impact CMS metrics. Methods: A Decision Support Tool (DST) based on NHSN criteria was developed to evaluate patient populations using acuity, service-line data indicating specialty services, financial billing codes, DRGs, and surgical versus non-surgical populations. The DST outlines the data elements for review when validating mapped locations. Each tab represents a unique data element analyzed as part of the NHSN decision algorithm (Figure 1), and unit-level data aggregate across DST tabs (Figure 2). Implementation consisted of a pilot followed by a regionally phased rollout via coaching calls and follow-up touchpoints. Although all units were reviewed, specific activities focused on non-CMS-reportable units such as telemetry, step-down, and mixed-acuity units. As part of the defined change control process, facilities followed an internal standardized workflow to document changes, dates, and reasons for change for historical reference (Figure 3). NHSN facility population changes were applied to the vendor surveillance software used for CDA direct reporting to NHSN. System- and facility-level record keeping promotes a standardized process for data validity, associated software maintenance, and CMS reporting compliance. Results: The majority of changes were made to units mapped as telemetry, with a 62% reduction overall. Figure 4 illustrates the non-CMS-reporting locations with notable mapping changes. Patients in an 'observation' status were found to be housed within any inpatient unit and required another data tool for analysis. Overall, the number of mapped 24-hour observation units is low (1.7%) across the healthcare system. Conclusions: The initiative standardized objective data and competency, elevating the expertise of trained infection preventionists on this topic. Admission orders offer telemetry for evaluation and treatment requiring continuous cardiac monitoring, but the NHSN definition is specific: 80% of a unit's patients must have a cardiac-centered DRG/care and cardiac specialty treatment for the unit to meet the telemetry definition. NHSN recommends, at minimum, an annual mapping evaluation. As a large healthcare system, we manage the DST analysis continuously due to growing service lines, acquisitions, and construction projects.
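The 80% telemetry criterion cited in the conclusions lends itself to a simple unit-level check; a sketch with hypothetical column names and toy data:

```python
import pandas as pd

# Sketch of the NHSN telemetry rule cited above: a unit maps as telemetry
# only if >= 80% of its patients have cardiac-centered DRGs/care.
# Column names and data are hypothetical placeholders.
def telemetry_eligible(patients: pd.DataFrame, threshold: float = 0.80) -> pd.Series:
    """Per unit, whether the cardiac-DRG share meets the threshold."""
    return patients.groupby("unit")["cardiac_drg"].mean() >= threshold

df = pd.DataFrame({
    "unit": ["3W"] * 10 + ["5N"] * 10,
    "cardiac_drg": [1] * 9 + [0] + [1] * 5 + [0] * 5,   # 90% vs. 50% cardiac
})
print(telemetry_eligible(df))   # 3W: True, 5N: False
```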
Te Papa Tongarewa, the Museum of New Zealand, is a cultural institution located in Aotearoa New Zealand. The museum’s foundational principle of biculturalism appears increasingly inadequate for addressing the fundamental injustices associated with settler/invader colonialism and can be seen as a barrier to achieving a “collective future.” This article argues that Te Papa must discard biculturalism insofar as it does not provide for tino rangatiratanga (self-determination) or mana motuhake (Indigenous sovereignty). Currently, Te Papa promotes Indigenous cultural inclusion and the celebration of Te Ao Māori (The Māori World) within a settler/invader-defined national identity and cultural memory. In the future, a decolonial and tikanga-based (Māori legal and customary practices and system) approach should be implemented at Te Papa.
Background: There is a high prevalence of catheter-associated urinary tract infections (CAUTIs) on a hospital cardiology unit, with a rate of 2.48 CAUTIs per 1,000 catheter days over the past two years compared to the national average of 0.96 for similar units. CAUTIs lead to increased lengths of stay, mortality, and hospital expenditures. Per NHSN, the presence of an indwelling urinary catheter (IUC) increases the risk of developing a CAUTI by 3-7% for each day an IUC is in place. Method: A process improvement approach was utilized to study the problem of increased CAUTIs and implement a PDSA intervention.
A process map was created to identify opportunities for error that could increase CAUTI risk (Figure 1), and contributing factors were explored by developing a driver diagram (Figure 2).
Data were collected to study root causes of CAUTI development and identify opportunities for improvement. Seven nurses were observed placing IUCs in mannequins to assess insertion practices, and 19 maintenance audits of IUCs among patients were conducted. Electronic medical record (EMR) data were compiled to assess hospital location of catheter insertion, catheter utilization ratio, indication for insertion, and duration of catheterization. Based on these data, the team decided to focus the PDSA intervention on reducing IUC duration, a process measure for the desired outcome of reducing CAUTIs. Results: EMR baseline data for the period 11/6/2024-12/29/2024 revealed an average IUC duration of 7.92 days. A SMART(IE) goal was established to reduce the average duration of IUCs on this unit by 15%, from 7.92 days to 6.73 days, within 4 weeks.
An intervention was developed to incorporate discussion of IUC indication, duration, and eligibility for removal for patients with IUCs during daily multidisciplinary rounds. Unit charge nurses received training on CAUTI prevention, facilitating rounds discussions, and data collection. The intervention is being implemented over the period 12/30/2024-1/25/2025.
During the pre-intervention period (11/6/2024-12/29/2024), 70 IUCs were reviewed. In preliminary analysis of the post-intervention period (12/30/24-1/15/25), 15 IUCs were reviewed. Preliminary analysis shows the average duration of IUCs per patient decreased by 31%, to an average of 5.47 days (Figure 3). Four IUCs were removed after discussions at multidisciplinary rounds. Conclusion: Process improvement tools can be utilized to study contributors to CAUTIs and develop unit-level solutions. Preliminary data demonstrate that incorporating review of IUCs during multidisciplinary rounds may reduce the average duration of IUC use.
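Both percentages above follow from simple arithmetic on the baseline duration:

```python
# Check of the reduction arithmetic reported above.
baseline = 7.92                          # baseline average IUC duration, days
goal = baseline * (1 - 0.15)
print(round(goal, 2))                    # 6.73 -> the SMART(IE) goal target

observed = 5.47                          # preliminary post-intervention average
print(f"{(baseline - observed) / baseline:.0%}")  # 31% -> reported decrease
```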
Background: Up to 10% of children have penicillin allergy labels although, when tested, >95% tolerate penicillin. These labels expose children to increased risks of harm through adulthood. Professional allergy societies recommend the proactive removal of low-risk penicillin allergy labels among children by history alone or following direct oral drug challenges. However, access to subspecialty allergy testing is limited, and recent studies have demonstrated that direct oral amoxicillin challenges in low-risk populations can be safely performed in pediatric primary care settings. We aimed to identify prescribers' attitudes towards penicillin allergy delabeling and the barriers and enablers to delabeling in pediatric primary care offices. Method: We conducted a multisite qualitative study consisting of interviews and/or focus groups with 29 primary care prescribers at 10 primary care practices of two health systems in the northeast U.S. We analyzed data using conventional content analysis and grouped barriers and enablers to penicillin allergy delabeling according to the Capability, Opportunity, and Motivation domains of the COM-B Behavior Change Wheel. Results: Prescribers agreed that unnecessary penicillin allergy labels in children should be avoided and shared their experiences delabeling penicillin allergies by history alone and collaborating with parents to trial amoxicillin in children with low-risk penicillin allergies. Predominant barriers to penicillin allergy delabeling included insufficient capability (suboptimal knowledge and skills in penicillin allergy delabeling), poor social and environmental opportunity (parent unwillingness to trial penicillin, lack of time, inadequate office space and resources), and poor motivation (a tendency to accept reported penicillin allergies due to the perception that consequences of penicillin allergy are rare and distant, inherent logistical difficulties of delabeling, and a lack of reasons to delabel). To facilitate penicillin allergy delabeling, participants recommended the implementation of a protocol and training in penicillin allergy delabeling, interventions to engage parents in delabeling, innovative approaches to address insufficient resources and infrastructure, and amplification of reasons for primary care prescribers to delabel. We provide representative quotes of the barriers and corresponding enablers to penicillin allergy delabeling in pediatric primary care in the Table. Conclusion: There is precedent for penicillin allergy delabeling in pediatric primary care. Findings indicate that prescribers are inclined to delabel low-risk penicillin allergies if given the necessary education/training, parent support, resources, and infrastructure.
In the context of self-defence, successive governments have taken an inconsistent approach to using public opinion as a basis for reforming criminal law. In the case of householders acting in self-defence, reform was based on limited public opinion, whereas in the case of the domestic abuse victim who uses force against their abuser, reform proposals were rejected without considering public opinion. There is a limited evidence base of actual public perceptions in either situation, and yet their value is substantial when considering the role of lay decision-makers in the criminal trial and the need to maintain public trust in the system. This paper explores theoretical justifications for the inclusion of public perceptions in the development of criminal defences. Using a social constructivist approach, the authors consider public perceptions, as found in a small-scale empirical study, towards self-defence claims in both a householder and domestic abuse context, concluding that the public can in some circumstances find that the latter is more deserving of a claim than the former.
Decades of research on the dimensional nature of personality disorder have led to the replacement of categorical personality disorder diagnoses by a dimensional assessment of personality disorder severity (PDS) in ICD-11, which essentially corresponds to personality functioning in the alternative DSM-5 model for personality disorders. Besides shifting the diagnostic focus in PD to impairments in self- and interpersonal functioning, this change also urges clinicians and researchers worldwide to become familiar with new diagnostic approaches.
Aims
This study investigated which PDS dimensions among different assessment methods and conceptualisations have the most predictive value for overall PDS.
Method
Using semi-structured interviews and self-reports of personality functioning, personality organisation, and personality structure in clinical samples from different settings in Switzerland and Germany (n = 534), we calculated a latent general factor of PDS (g-PDS) by applying a correlated trait-correlated method minus one (CTC(M-1)) model.
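For orientation, the CTC(M-1) measurement structure (following Eid's general formulation; the notation here is illustrative, not the authors') designates one method as the reference and gives every non-reference method its own method factor:

```latex
% CTC(M-1): indicator i under the reference method (k = 1) vs. method k > 1.
\[
Y_{i1} = \alpha_{i1} + \lambda_{i1} T_i + \varepsilon_{i1},
\qquad
Y_{ik} = \alpha_{ik} + \lambda_{ik} T_i + \delta_{ik} M_k + \varepsilon_{ik}
\quad (k > 1),
\]
% with trait factors T_i (here, PDS dimensions), method factors M_k
% (e.g., self-report contrasted against the interview as reference method),
% and method factors uncorrelated with the trait factors.
```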
Results
Our results showed that four interview-assessed PDS dimensions (defence mechanisms; desire and capacity for closeness; sense of self; and comprehension and appreciation of others' experiences and motivations) account for 91.1% of the variance of g-PDS, with a combination of any two of these four dimensions already explaining between 81.8% and 91.3%. Regarding self-reports, the dimensions depth and duration of connections, self-perception, object perception, and attachment capacity to internal objects predicted 61.3% of the variance of a latent interview-based score, with all investigated self-reported dimensions together explaining 65.2% of the variance.
Conclusions
Taken together, our data suggest that focusing on specific dimensions, such as intimacy and identity, in time-limited settings might be viable in determining PDS efficiently.
Between 1847 and 1876, the textile factory Todos os Santos operated in Bahia. During these almost three decades, it was the largest textile factory in Brazil and came to employ more than four hundred workers. Until recently, many aspects of the factory’s labour force were hidden. There was a hegemonic narrative that all of these workers were free and waged individuals and that their living and working conditions were extremely progressive for the period. Meanwhile, there was a silence about the employment of enslaved people in the institution as well as a lack of in-depth analysis concerning the legally free workers. This article analyses labour at the Todos os Santos factory. On the one hand, it provides evidence on why the myth about the exclusive use of free and waged workers in the factory was formulated and the interests behind this narrative. On the other, through analysis of data from newspapers, philanthropic institutions, and legal and government documents, it reveals the profiles of the supposedly different classes of free and enslaved workers employed at Todos os Santos—men, women, and children of different colours—showing how complex, and often how similar, their living and working conditions were.
Background: Many practitioners relied on SARS-CoV-2 RT-PCR cycle thresholds (Ct) to remove COVID-specific isolation, given data correlating Ct values with the ability to culture live virus. Typically, Ct values of 28-32 were used to remove isolation. However, many labs stopped reporting these values given the lack of clinical validation, based on a joint IDSA/AMP statement. VA Boston Healthcare System (VABHS) developed and implemented a clinical algorithm to replace Ct values in determining the need for isolation. We aimed to compare our algorithm's performance to the unreported Ct results. Methods: We conducted a retrospective cohort study of COVID-19 PCR-positive patients at VABHS between 10/1/23 and 3/31/24. During this time, VABHS required COVID-19 PCR testing (via either the Cepheid Xpert Xpress CoV-2 plus or Cepheid Xpert Xpress SARS-CoV-2/Flu/RSV plus assay) for admission regardless of symptoms. Included were all patients for whom Infectious Diseases (ID) was contacted to remove isolation using our algorithm (Figure 1). Ct values were later obtained from the lab as part of IRB-approved research to determine the sensitivity of the algorithm in correctly classifying isolation requirements. Ct values of 28 and 30 were used as the gold standard for determining need for isolation. Results: ID was contacted to determine isolation requirements for 56 patients for whom the algorithm was applied and Ct values were later available for review. Using a Ct threshold of 28, 44 patients (78.6%) were admitted with appropriate isolation classification via the algorithm: 34 patients off isolation and 10 requiring isolation. Incorrect algorithm classification occurred for 10 patients who were isolated when not required, due to lack of additional data, and 2 patients who required isolation but were not isolated. The algorithm failed in these 2 patients at the step using antibody results to determine time from infection; their mean Ct value was 25.6 (range 23.6-27.6). Both patients had had COVID-19 within the prior 30 days and positive antibody testing. The true positive rate/sensitivity of algorithm-driven isolation was 83.3% (51.6-97.9%), and the true negative rate/specificity of algorithm-driven removal of isolation was 77.3% (62.1-88.5%). When the Ct threshold was modified to 30, the sensitivity and specificity of the algorithm were 78.6% (49.2-95.3%) and 78.6% (63.2-89.7%), respectively (Table 1). No transmissions occurred using the algorithm during the study period. Conclusions: Strategic use of an algorithm using history, antigen, and antibody results was moderately accurate compared to Ct values for assessing isolation requirements. No known transmissions occurred with use of the algorithm in lieu of Ct values.
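The Ct-28 performance figures can be reconstructed directly from the counts given above (10 correctly isolated, 2 missed, 34 correctly off isolation, 10 over-isolated):

```python
# Sensitivity/specificity of the algorithm against the Ct-28 gold standard,
# reconstructed from the counts reported in the abstract (n = 56).
tp, fn = 10, 2      # gold standard: isolation required (12 patients)
tn, fp = 34, 10     # gold standard: isolation not required (44 patients)

sensitivity = tp / (tp + fn)   # correctly kept in isolation
specificity = tn / (tn + fp)   # correctly removed from isolation
print(f"sensitivity = {sensitivity:.1%}")  # 83.3%
print(f"specificity = {specificity:.1%}")  # 77.3%
```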
We develop the theory of limits and colimits in $\infty$-categories within the synthetic framework of simplicial homotopy type theory established by Riehl and Shulman. We also show that in this setting, the limit of a family of spaces can be computed as a dependent product.
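Schematically, the headline statement can be written as follows (notation illustrative; the paper's precise synthetic formulation may differ):

```latex
% For a family of spaces X : A -> Space over a type A, the limit is
% computed by the dependent product:
\[
\lim_{a : A} X(a) \;\simeq\; \prod_{a : A} X(a).
\]
```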
Background: Calls within the clinical community for revising guidance on the appropriate durations of antibiotic therapy (i.e., shorter is better) and adherence (i.e., no longer advising to always finish a course) reflect important gains in evidence-based prescribing. However, changing medical guidance can have negative public effects (e.g., frustration, distrust, and disengagement) when not communicated in ways that resonate with patients. To inform efforts to effectively communicate evolving evidence on appropriate antibiotic use, we examined US adults' perceptions and preferences regarding antibiotic durations and adherence. Methods: From March to April 2024, we invited US adults, aged ≥18 years, to complete an online survey about antibiotics. Question topics included durations of antibiotic therapy, adherence to a prescribed course of antibiotics, and demographic characteristics. Results: Table 1 shows the characteristics of the 1,476 respondents (completion rate = 89%). Most respondents reported they preferred to take a longer course of antibiotics (≥7 days) rather than a shorter one (3-5 days) for a bacterial respiratory infection (60.4% vs. 39.5%) and rated longer courses as both safer and more effective (Table 2). In open-text questions, respondents who preferred shorter courses described a general aversion to medication and concerns about side effects and resistance, whereas those who preferred longer courses saw them as familiar and a 'better safe than sorry' approach, associating longer durations with greater efficacy. In addition, 88.4% of respondents agreed that 'it is important to always finish a prescribed course of antibiotics, even if you start to feel better' and had been told this by a medical professional (76.3%) or had seen this guidance in a public health message (61.2%). Conversely, only 17.5% said they had ever been told they could stop taking antibiotics early. Preference for longer antibiotic courses was associated with older age, trusting their doctor's advice about antibiotic therapy durations, having been told by their doctor to 'always finish a course of antibiotics', less worry about antibiotic side effects, discomfort about potentially being asked by a clinician to stop taking antibiotics upon starting to feel better, and perceiving a clinician who made such a suggestion as less competent. Conclusions: Many US adults prefer longer durations of antibiotic therapy for respiratory infections than are likely necessary. Almost all survey respondents believed it important to always finish a course, and many were uncomfortable with advice to the contrary. These findings highlight the need for evidence-based communication strategies to align US adults' antibiotic duration and adherence preferences with current guidance.
Background: Invasive fungal diseases (IFDs) are severe infections caused by fungi that can spread throughout the body, particularly in individuals with weakened immune systems. These infections are increasingly challenging, especially those resistant to antifungal treatments. IFDs represent an emerging global threat, highlighting the urgent need for increased attention and research. In 2022, the World Health Organization (WHO) released the WHO Fungal Priority Pathogen List (WHO-FPPL), ranking 19 fungal pathogens as critical, high, or medium priority based on criteria such as incidence, treatment options, and mortality rates. While Candida albicans and Candida auris are well known and included on the list, it also features four other species of concern (C. glabrata, C. tropicalis, C. parapsilosis, C. krusei) that are not typically listed on disinfectant master labels. This study aims to evaluate differences in antimicrobial susceptibility among Candida species exposed to commonly used healthcare disinfectant products. Method: Antimicrobial efficacy testing was conducted using common healthcare disinfectants against the WHO-FPPL-listed Candida species, following standard operating procedures typically required for disinfectant product registration with the U.S. EPA. Each disinfectant formulation represented a common active ingredient, or active ingredient blend, used in healthcare settings for surface disinfection, ranging from ready-to-use sprays to wipes. Products were tested at approximately 75% of the manufacturer-defined contact time listed on the EPA master label to stress the chemistry and elucidate antimicrobial susceptibility differences among Candida species. Results: Antimicrobial efficacy varied across Candida species for the tested chemistries. Of the six Candida species tested, C. parapsilosis was the most difficult to eradicate. The remaining Candida species exhibited less variability, with C. auris and C. krusei demonstrating slightly lower susceptibility across all disinfectant types than C. albicans, C. glabrata, and C. tropicalis. Conclusion: This study underscores the variability in efficacy between these emerging fungal pathogens and common healthcare disinfectants. While C. auris remains a primary concern in healthcare settings, these findings highlight the continued need for ongoing fungal surveillance, research on emerging fungi, and attention to the potential impact on environmental hygiene practices.
Background: Empiric antibiotic therapy choices and de-escalation practices for the management of febrile neutropenia (FN) can vary. Facility-specific antimicrobial guidelines have an important role in influencing prescribing practices for FN and are a foundation of antimicrobial stewardship activities. Methods: This pre-post quality improvement study at the University of Maryland Medical Center (UMMC) Greenebaum Comprehensive Cancer Center evaluated the impact of implementing updated institutional FN guidelines. The changes primarily included: 1) removal of meropenem as a first-line agent for patients receiving levofloxacin prophylaxis without other risk factors (e.g., history of a resistant organism) and 2) a de-escalation protocol for low-risk patients (e.g., afebrile, hemodynamically stable). Education of oncology attendings, residents, and pharmacists was carried out. We included patients receiving antipseudomonal antibiotics for FN or sepsis as indicated by the prescriber (~70% concordance with antimicrobial stewardship review). Sepsis was included because of high rates of observed misclassification for patients with FN. Stem cell transplant patients were excluded. Pre-intervention (04/2021 – 12/2022) and post-intervention (01/2023 – 09/2024) groups were compared for total antipseudomonal antibiotic and meropenem-specific days of therapy (DOT) per 1,000 days present (DP) and counts of unique antibiotic orders per 1,000 DP. In addition, a sample of antibiotic orders reviewed by the UMMC antimicrobial stewardship team was assessed for guideline compliance. Means were calculated across quarters for each period, and the Wilcoxon rank-sum test was used for comparisons (p < 0.05 considered significant). Results: A total of 3,311 antibiotics were ordered for FN (79%) or sepsis (21%) during the study period. Longitudinal trends and antibiotic type distribution are illustrated in Figures 1 and 2, respectively. DOT per 1,000 DP for all antipseudomonal antibiotics was 213 in the pre-intervention group compared to 191 in the post-intervention group (p=0.06). Meropenem DOT per 1,000 DP decreased from 105 in the pre-intervention group to 87 in the post-intervention group (p=0.004). Unique antibiotic orders per 1,000 DP for all antipseudomonal antibiotics remained constant (62 vs. 56, p=0.1), while unique antibiotic orders per 1,000 DP for meropenem decreased (16 vs. 8, p=0.01). Of the 317 antibiotic orders reviewed, 130/169 (77%) were guideline compliant in the pre-intervention group and 113/148 (76%) in the post-intervention group. Conclusion: Changes in FN guidelines at the UMMC cancer center led to decreased meropenem use, with a nonsignificant decline in overall antipseudomonal antibiotic use. Additional work is needed to identify barriers to guideline adherence.
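The DOT per 1,000 days present metric compared above is a standard normalization; a minimal sketch with hypothetical quarterly aggregates (not UMMC data):

```python
# Days of therapy (DOT) per 1,000 days present (DP): the stewardship metric
# compared above. The inputs here are hypothetical aggregates, not UMMC data.
def dot_per_1000_dp(days_of_therapy: int, days_present: int) -> float:
    """Antibiotic days of therapy normalized per 1,000 patient-days present."""
    return days_of_therapy / days_present * 1000

# e.g., 2,100 meropenem DOT over 24,000 days present in a quarter:
print(round(dot_per_1000_dp(2_100, 24_000)))  # 88, the scale of the 105 vs. 87 result
```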
Background: Social vulnerability factors have been associated with negative health outcomes. However, it remains unclear how they affect device-related infections in different population groups. Methods: This retrospective observational cohort study included central line-associated bloodstream infections (CLABSI) and catheter-associated urinary tract infections (CAUTI) in an 850-bed academic tertiary care facility. Information was collected on patient demographics, the CDC Social Vulnerability Index (SVI), hospitalization, comorbidities, and COVID-19 status. The SVI analysis included overall vulnerability, comprising four themes: socioeconomic status, household characteristics, racial/ethnic minority status, and housing type/transportation. Chi-square and Wilcoxon rank-sum tests were used for categorical and continuous variable comparisons. GEE models compared pre-pandemic and pandemic periods by interrupted time series analysis. Results: Between 1/1/2018 and 5/31/2022, 98,791 patients were admitted 151,550 times. Of those, 17,796 patients received 29,483 central lines and 45,180 patients had 65,422 Foley catheters. In total, 314 patients developed 338 CLABSI and 216 patients had 217 CAUTI. 1,552 patients tested positive for COVID-19, with 22 developing CLABSI and 14 CAUTI. The pre-pandemic downward trend in CLABSI and CAUTI was reversed during COVID-19 (p < 0.05). Throughout the study, Black patients had higher device days (p < 0.05). In the SVI analysis, the socioeconomic theme was associated with a higher risk of device-related CLABSI across the entire study (p=0.03). During COVID-19, overall SVI and the household characteristics theme were associated with higher device-related CLABSI rates (p=0.03; p=0.03). Adjusting for race or ethnicity eliminated those associations. For CAUTI, racial/ethnic minority status was linked to an event throughout the study (p=0.03); this held true after adjusting for individual race or ethnicity status. No associations were detected for CAUTI in the pre-pandemic or pandemic periods alone. Conclusions: Health outcome disparities affected Black (CLABSI and CAUTI) and Hispanic/Latino (CLABSI) patients. Of note, both groups had significantly higher device utilization rates. Per-patient infections increased during the pandemic without altering race/ethnicity differences. Higher racial/ethnic minority status SVI was linked to CAUTI, whereas CLABSI were driven by the socioeconomic SVI. These findings can help clarify the relationships between race/ethnicity and other demographic and socioeconomic factors associated with device-related infections at the community and individual level.
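A schematic of the GEE interrupted time series described in the methods; the data, column names, and model terms are illustrative placeholders, not the study's specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic monthly unit-level data standing in for the real dataset.
rng = np.random.default_rng(1)
months = np.arange(53)                                # 1/2018 - 5/2022
df = pd.DataFrame({
    "unit": np.repeat(["icu_a", "icu_b"], len(months)),
    "month": np.tile(months, 2),
})
df["pandemic"] = (df["month"] >= 26).astype(int)      # pandemic-onset indicator
df["months_since"] = np.clip(df["month"] - 26, 0, None)
df["device_days"] = rng.integers(400, 800, len(df))
df["events"] = rng.poisson(1.5, len(df))              # CLABSI/CAUTI counts

# GEE Poisson model with level and slope change at the pandemic onset,
# offset by device days to model infection rates.
model = smf.gee(
    "events ~ month + pandemic + months_since",
    groups="unit",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["device_days"]),
)
print(model.fit().summary())
```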
Background: Multidrug-resistant organisms (MDRO), including those producing extended-spectrum beta-lactamases (ESBL), are increasing. Infections, especially with MDRO, lead to increased healthcare costs. Bloodstream infections (BSI) caused by ESBL-producing Klebsiella pneumoniae increased at our institution in 2021-2022. We assessed whether these organisms were acquired during hospitalization or through other healthcare exposures. Methods: The rates of ESBL-producing Klebsiella BSI per 1,000 hospitalizations in 2021 and 2022 were compared between transplant recipients and non-transplant patients. From 1/1/2021 to 6/30/2023, 49 adult patients at an academic medical center had Klebsiella pneumoniae BSI with ceftriaxone resistance and a high probability of carrying an ESBL. Of these, 20 were transplant recipients and 29 were non-transplant patients. We performed whole-genome sequencing on the 28 available unique patient isolates (12 transplant; 16 non-transplant) to assess relatedness, and additional data were collected by chart review on transplant recipients. The BugSeq bioinformatics pipeline and refMLST were used on the sequenced isolates to assess clonality, defined as a ≤20 allele difference between two isolates. Results: The rate of ESBL-producing Klebsiella BSI increased universally from 2021 to 2022 but affected the transplant cohort (0.3 to 3.6 per 1,000 hospitalizations) more than the non-transplant cohort (0.2 to 0.5 per 1,000 hospitalizations). Kidney and liver transplant recipients were most often involved (5 of 49 patients each). In the transplant cohort, bacteremia alone (45%, 9 of 20) and a urinary source (35%, 7 of 20) were the most frequently identified etiologies. The most common sequence type was ST-307, accounting for 64% of sequenced isolates (67% of transplant; 63% of non-transplant). The most common ESBL gene was blaCTX-M-15, identified in 24 isolates (86%). Less common resistance genes included blaSHV-12 (N=3) and blaCTX-M-3 (N=1). While there were 5 isolates within 20 allele differences of one another (3 transplant; 2 non-transplant), they were separated in time and had no obvious epidemiologic connections. While there was a change in the TMP-SMX prophylaxis protocol for kidney transplant recipients during this time, it did not explain the increase observed in other transplant groups. Conclusions: There was a sharp increase in the number of BSI caused by ESBL-producing Klebsiella pneumoniae in the transplant population between 2021 and 2022. Molecular epidemiologic analysis ruled out clonal transmission from a breakdown of infection prevention practices as the cause, and no other common epidemiologic link was identified. This demonstrates the application of whole-genome sequencing in excluding a clonal outbreak from a common source within an institution.
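The ≤20-allele clonality screen reduces to counting pairwise profile differences; a toy sketch (profiles here are tiny hypothetical examples, whereas real cgMLST profiles span thousands of loci):

```python
from itertools import combinations

# Pairwise allele differences between MLST-style profiles, flagging pairs
# within the study's <= 20-allele clonality threshold. Profiles here are
# hypothetical toy examples, not the sequenced isolates.
profiles = {
    "isolate_a": [1, 4, 2, 9, 7],
    "isolate_b": [1, 4, 3, 9, 7],   # one allele apart from isolate_a
    "isolate_c": [5, 8, 2, 1, 6],
}

def allele_diff(p: list[int], q: list[int]) -> int:
    """Count loci at which two allele profiles differ."""
    return sum(a != b for a, b in zip(p, q))

for (n1, p1), (n2, p2) in combinations(profiles.items(), 2):
    d = allele_diff(p1, p2)
    if d <= 20:   # clonality threshold used in the study
        print(f"{n1} vs {n2}: {d} allele difference(s) -> potentially clonal")
```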
Background: Inappropriate urine culturing can lead to unnecessary antibiotic use, antimicrobial resistance, increased healthcare costs, and resource strain. Ensuring the appropriate use of urine cultures aligns with principles of diagnostic stewardship. Methods: Urine cultures ordered from the ED of our hospital for patients admitted during July and August 2024 were retrieved from the electronic medical records. Symptom scores based on the IDSA guideline (Figure 1) and BLADDER scores (Figure 2) were correlated with urinalysis (URE) and culture results to assess appropriateness. Results: Of 267 urine culture orders reviewed, 61 patients were excluded due to indwelling catheters, high-risk neutropenia, recent urological procedures, pregnancy, or recent renal transplantation. The median age of the study population (n=206) was 64 years, and 50.5% were women. Significant pyuria was present in 97 (47.3%), 105 (50.97%) had positive leukocyte esterase (LE), and nitrite positivity was low (13, 6.3%). LE correlated better with pyuria and culture positivity than urine nitrites. Only 46 patients (22.3%) had positive cultures. Imaging evidence supportive of urinary tract infection was noted in 18 patients. Of the 206 cultures, only 102 (50.48%) were appropriate per IDSA guidelines. Inappropriate cultures were ordered for fever without localisation (59.6%), abdominal discomfort (8.6%), urinary frequency (2.8%), haematuria (1.9%), and incontinence (0.9%); 10% were sent as part of order sets for patients who were asymptomatic and had neither significant pyuria nor positive cultures. Among 87 patients with a BLADDER score ≥2, 95.4% of cultures were appropriate, 64.3% had significant pyuria, and 36.8% had positive cultures. Among 119 patients with a score <2, 15.9% of cultures were appropriate, 34.5% had significant pyuria, and 11.8% had positive cultures. The positive predictive value (PPV) of the BLADDER score for UTI was 77.0% alone, 89.3% when combined with pyuria, and 88.2% when combined with pyuria and positive LE. The negative predictive value (NPV) was 88.2% alone, 100% when combined with absence of pyuria, and 100% when combined with absence of pyuria and negative LE (Table 1). Based on our findings, a proposed algorithm for ordering urine cultures, after excluding high-risk groups, is depicted in Figure 3. Conclusion: Our study found roughly half of urine cultures to be inappropriate. The BLADDER score can be a useful bedside screening tool for deciding whether to order a urine culture, and its PPV and NPV increase when combined with the presence or absence of pyuria and LE. Implementing a diagnostic stewardship protocol for urine culturing has the potential to improve culture appropriateness and reduce unnecessary antibiotic use.
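A small sketch of the predictive-value arithmetic reported above. The 2x2 counts are back-derived from the published denominators (87 and 119) and the score-alone PPV/NPV, so they are an approximate reconstruction, not the study's actual cross-tabulation:

```python
# PPV/NPV from a 2x2 table. Counts below are back-derived from the reported
# denominators (87 with score >= 2, 119 with score < 2) and the score-alone
# PPV (77.0%) and NPV (88.2%); they are a reconstruction, not study data.
def ppv_npv(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Positive and negative predictive value from 2x2 counts."""
    return tp / (tp + fp), tn / (tn + fn)

tp, fp = 67, 20     # UTI vs. no UTI among BLADDER score >= 2 (67 + 20 = 87)
fn, tn = 14, 105    # UTI vs. no UTI among BLADDER score < 2 (14 + 105 = 119)
ppv, npv = ppv_npv(tp, fp, fn, tn)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 77.0%, NPV = 88.2%
```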