Surgical-site infections (SSIs) can be catastrophic. Bundles of evidence-based practices can reduce SSIs but can be difficult to implement and sustain.
Objective:
We sought to understand the implementation of SSI prevention bundles in 6 US hospitals.
Design:
Qualitative study.
Methods:
We conducted in-depth semistructured interviews with personnel involved in bundle implementation and conducted a thematic analysis of the transcripts.
Setting:
The study was conducted in 6 US hospitals: 2 academic tertiary-care hospitals, 3 academic-affiliated community hospitals, and 1 unaffiliated community hospital.
Participants:
In total, 30 hospital personnel participated. Participants included surgeons, laboratory directors, clinical personnel, and infection preventionists.
Results:
Bundle complexity impeded implementation. Other barriers varied across services, even within the same hospital. Multiple strategies were needed, and successful strategies in one service did not always apply in other areas. However, early and sustained interprofessional collaboration facilitated implementation.
Conclusions:
The evidence-based SSI bundle is complicated and can be difficult to implement. One implementation process probably will not work for all settings. Multiple strategies were needed to overcome contextual and implementation barriers that varied by setting and implementation climate. Appropriate adaptations for specific settings and populations may improve bundle adoption, fidelity, acceptability, and sustainability.
This document introduces and explains common implementation concepts and frameworks relevant to healthcare epidemiology and infection prevention and control. It can serve as a stand-alone guide or be paired with the “SHEA/IDSA/APIC Compendium of Strategies to Prevent Healthcare-Associated Infections in Acute Care Hospitals: 2022 Updates,” which contains technical implementation guidance for specific healthcare-associated infections. This Compendium article focuses on broad behavioral and socio-adaptive concepts and suggests ways that infection prevention and control teams, healthcare epidemiologists, infection preventionists, and specialty groups may use them to deliver high-quality care. Implementation concepts, frameworks, and models can help bridge the “knowing-doing” gap, a term that describes why healthcare practices often diverge from evidence-based recommendations. The document aims to guide the reader to think about implementation and to find resources suited to a specific setting and circumstances by describing strategies for implementation, including determinants and measurement, as well as the following conceptual models and frameworks: 4Es, Behavior Change Wheel, CUSP, European and Mixed Methods, Getting to Outcomes, Model for Improvement, RE-AIM, REP, and Theoretical Domains.
The intent of this document is to highlight practical recommendations in a concise format designed to assist acute-care hospitals in implementing and prioritizing their surgical-site infection (SSI) prevention efforts. This document updates the Strategies to Prevent Surgical Site Infections in Acute Care Hospitals published in 2014.1 This expert guidance document is sponsored by the Society for Healthcare Epidemiology of America (SHEA). It is the product of a collaborative effort led by SHEA, the Infectious Diseases Society of America (IDSA), the Association for Professionals in Infection Control and Epidemiology (APIC), the American Hospital Association (AHA), and The Joint Commission, with major contributions from representatives of a number of organizations and societies with content expertise.
In total, 50 healthcare facilities completed a survey in 2021 to characterize changes in infection prevention and control and antibiotic stewardship practices. Notable findings include sustained surveillance for multidrug-resistant organisms but decreased use of human resource-intensive interventions compared with previous surveys conducted in 2013 and 2018, prior to the COVID-19 pandemic.
The Intensity Interferometry technique consists of measuring the spatial coherence (visibility) of an object via its intensity fluctuations over a sufficient range of telescope separations (baselines). This allows us to study the size, shape and morphology of stars with an unprecedented resolution. Cherenkov telescopes have a set of characteristics that coincidentally allow for Intensity Interferometry observations: very large reflective surfaces, sensitivity to individual photons, temporal resolution of nanoseconds and the fact that they come in groups of several telescopes. In recent years, the MAGIC Collaboration has developed a deadtime-free Intensity Interferometry setup for its two 17 m diameter Cherenkov telescopes that includes a 4-channel GPU-based real-time correlator, 410–430 nm filters and new ways of splitting its primary mirrors into submirrors using Active Mirror Control (AMC). With this setup, MAGIC can operate as a long-baseline optical interferometer in the baseline range 40–90 m, which translates into angular resolutions of 0.5–1 mas. Additionally, thanks to its AMC, it can simultaneously measure the zero-baseline correlation or, by splitting into submirrors, access shorter baselines under 17 m in multiple u-v plane orientations. The best candidates to observe with this technique are relatively small and bright stars, in other words, massive stars (O, B and A types). We will present the science cases that are currently being proposed for this setup, as well as the prospects for the future of the system and technique, such as the possibility of a large-scale implementation with CTA.
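A quick back-of-the-envelope check of the quoted angular resolution, assuming the conventional λ/(2B) criterion for a two-element interferometer and taking 420 nm as the centre of the 410–430 nm passband (both assumptions for illustration, not values stated above):

```python
# Sketch: angular resolution of a two-telescope intensity interferometer,
# assuming theta ~ lambda / (2 * baseline); 420 nm is taken as the midpoint
# of the 410-430 nm filters mentioned in the abstract.
import math

RAD_TO_MAS = 180.0 / math.pi * 3600.0 * 1000.0   # radians -> milliarcseconds
wavelength_m = 420e-9

for baseline_m in (17.0, 40.0, 90.0):
    theta_mas = wavelength_m / (2.0 * baseline_m) * RAD_TO_MAS
    print(f"baseline {baseline_m:5.1f} m -> ~{theta_mas:.2f} mas")

# Approximate output:
#   baseline  17.0 m -> ~2.55 mas
#   baseline  40.0 m -> ~1.08 mas
#   baseline  90.0 m -> ~0.48 mas
```

The 40–90 m baseline range indeed maps to roughly 0.5–1 mas under this criterion, consistent with the figure quoted in the abstract.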
Response to lithium in patients with bipolar disorder is associated with clinical and transdiagnostic genetic factors. Combining these variables may help clinicians better predict which patients will respond to lithium treatment.
Aims
To use a combination of transdiagnostic genetic and clinical factors to predict lithium response in patients with bipolar disorder.
Method
This study utilised genetic and clinical data (n = 1034) collected as part of the International Consortium on Lithium Genetics (ConLi+Gen) project. Polygenic risk scores (PRS) were computed for schizophrenia and major depressive disorder, and then combined with clinical variables using a cross-validated machine-learning regression approach. Unimodal, multimodal and genetically stratified models were trained and validated using ridge, elastic net and random forest regression on 692 patients with bipolar disorder from ten study sites using leave-site-out cross-validation. All models were then tested on an independent test set of 342 patients. The best performing models were then tested in a classification framework.
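A minimal sketch of what a leave-site-out cross-validated ridge model of this kind could look like, assuming scikit-learn; the synthetic data, number of features, and hyperparameters are illustrative stand-ins, not the ConLi+Gen pipeline:

```python
# Sketch: leave-site-out cross-validation for a ridge model combining clinical
# predictors with polygenic risk scores (PRS). All data below are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_sites = 692, 10

# Hypothetical predictors: a few clinical variables plus two PRS columns
# (e.g. schizophrenia and major depressive disorder).
X = rng.normal(size=(n_patients, 6))
y = X @ rng.normal(size=6) * 0.2 + rng.normal(size=n_patients)  # lithium-response score
site = rng.integers(0, n_sites, size=n_patients)                # study-site labels

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))

# Each fold holds out every patient from one site, mimicking leave-site-out CV.
pred = cross_val_predict(model, X, y, groups=site, cv=LeaveOneGroupOut())
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"leave-site-out R^2 on synthetic data: {r2:.3f}")
```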
Results
The best performing linear model explained 5.1% (P = 0.0001) of variance in lithium response and was composed of clinical variables, PRS variables and interaction terms between them. The best performing non-linear model used only clinical variables and explained 8.1% (P = 0.0001) of variance in lithium response. A priori genomic stratification improved non-linear model performance to 13.7% (P = 0.0001) and improved the binary classification of lithium response. This model stratified patients based on their meta-polygenic loadings for major depressive disorder and schizophrenia and was then trained using clinical data.
Conclusions
Using PRS to first stratify patients genetically and then train machine-learning models with clinical predictors led to large improvements in lithium response prediction. When used with other PRS and biological markers in the future, this approach may help inform which patients are most likely to respond to lithium treatment.
Ceftazidime/avibactam (C/A), ceftolozane/tazobactam (C/T), imipenem/relebactam (I/R), and meropenem/vaborbactam (M/V) combine either a cephalosporin (C/T and C/A) or a carbapenem antibiotic (M/V and I/R) with a β-lactamase inhibitor. They are used to treat carbapenem-resistant Enterobacterales (CRE) and/or multidrug-resistant Pseudomonas aeruginosa (MDRPA).
Objective:
We compared the pooled clinical success of these medications with that of older therapies.
Methods:
PubMed and EMBASE were searched from January 1, 2012, through September 2, 2020, for C/A, C/T, I/R, and M/V studies. The main outcome was clinical success, which was assessed using random-effects models. Stratified analyses were conducted for study drug, sample size, quality, infection source, study design, and multidrug-resistant gram-negative organism (MDRGNO) population. Microbiological success and 28- and 30-day mortality were assessed as secondary outcomes. Heterogeneity was determined using I2 values.
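For readers unfamiliar with random-effects pooling, the following sketch shows one common estimator (DerSimonian-Laird) for a pooled odds ratio together with the I2 heterogeneity statistic; the 2×2 tables are invented placeholders, not data from the included studies:

```python
# Sketch: DerSimonian-Laird random-effects pooling of log odds ratios with I^2.
import numpy as np

# (events_treatment, n_treatment, events_control, n_control) per study -- hypothetical
studies = [(40, 60, 30, 60), (55, 80, 50, 80), (20, 35, 22, 40)]

y, v = [], []  # per-study log OR and its variance
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    y.append(np.log((a * d) / (b * c)))
    v.append(1 / a + 1 / b + 1 / c + 1 / d)
y, v = np.array(y), np.array(v)

w = 1 / v                                                  # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)         # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
w_re = 1 / (v + tau2)                                      # random-effects weights
pooled_log_or = np.sum(w_re * y) / w_re.sum()
i2 = max(0.0, (q - (len(y) - 1)) / q) * 100

print(f"pooled OR = {np.exp(pooled_log_or):.2f}, I^2 = {i2:.0f}%")
```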
Results:
Overall, 25 articles met the inclusion criteria: 8 observational studies and 17 randomized controlled trials. We detected no difference in clinical success comparing new combination antibiotics with standard therapies for all included organisms (pooled OR, 1.21; 95% CI, 0.96–1.51). We detected a moderate level of heterogeneity among the included studies (I2 = 56%). Studies that focused on patients with CRE or MDRPA infections demonstrated a strong association between treatment with new combination antibiotics and clinical success (pooled OR, 2.20; 95% CI, 1.60–3.57).
Conclusions:
C/T, C/A, I/R, and M/V are not inferior to standard therapies for treating various complicated infections, but they may have greater clinical success for treating MDRPA and CRE infections. More studies that evaluate the use of these antibiotics for drug-resistant infections are needed to determine their effectiveness.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
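A minimal sketch of a single PGS association test of the kind reported below (age at onset regressed on a standardized polygenic score plus covariates), assuming statsmodels; the synthetic data and covariate set are illustrative only:

```python
# Sketch: regress age at onset (AAO) on a standardized polygenic score (PGS)
# plus simple covariates. All data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
pgs = rng.normal(size=n)                                # standardized PGS (e.g. schizophrenia)
sex = rng.integers(0, 2, size=n)
pcs = rng.normal(size=(n, 4))                           # genetic ancestry principal components
aao = 25 - 0.35 * pgs + rng.normal(scale=8, size=n)     # age at onset, in years

X = sm.add_constant(np.column_stack([pgs, sex, pcs]))
fit = sm.OLS(aao, X).fit()
# Coefficient on the PGS column: years of earlier/later onset per SD of PGS.
print(f"beta = {fit.params[1]:.2f} years per SD, s.e. = {fit.bse[1]:.2f}")
```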
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together, and fewer episodes. Depressive onset correlated with suicidality, and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
To develop a fully automated algorithm using data from the Veterans’ Affairs (VA) electronic medical record (EMR) to identify deep-incisional surgical site infections (SSIs) after cardiac surgeries and total joint arthroplasties (TJAs) for use in research studies.
Design:
Retrospective cohort study.
Setting:
This study was conducted in 11 VA hospitals.
Participants:
Patients who underwent coronary artery bypass grafting or valve replacement between January 1, 2010, and March 31, 2018 (cardiac cohort) and patients who underwent total hip arthroplasty or total knee arthroplasty between January 1, 2007, and March 31, 2018 (TJA cohort).
Methods:
Relevant clinical information and administrative code data were extracted from the EMR. The outcomes of interest were mediastinitis, endocarditis, or deep-incisional or organ-space SSI within 30 days after surgery. Multiple logistic regression analysis with a repeated regular bootstrap procedure was used to select variables and to assign points in the models. Sensitivities, specificities, positive predictive values (PPVs), and negative predictive values were calculated by comparison with outcomes collected by the Veterans’ Affairs Surgical Quality Improvement Program (VASQIP).
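A rough sketch of the repeated-bootstrap selection idea, assuming scikit-learn; the selection rule, probability threshold, and synthetic data are illustrative assumptions rather than the study’s actual point-scoring models:

```python
# Sketch: refit a logistic model on bootstrap resamples, keep predictors that
# are selected in most resamples, then score the final model against the
# reference outcome (a synthetic stand-in for VASQIP labels).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 5000, 8
X = rng.normal(size=(n, p))
logit = -4 + 1.2 * X[:, 0] + 0.9 * X[:, 1]         # only the first two predictors matter
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # rare outcome (~deep SSI)

keep_counts = np.zeros(p)
for _ in range(200):                               # repeated regular bootstrap
    idx = rng.integers(0, n, size=n)
    coefs = LogisticRegression(penalty="l1", solver="liblinear").fit(X[idx], y[idx]).coef_[0]
    keep_counts += np.abs(coefs) > 1e-6

selected = keep_counts / 200 >= 0.8                # keep predictors chosen in >=80% of resamples
final = LogisticRegression().fit(X[:, selected], y)
pred = final.predict_proba(X[:, selected])[:, 1] >= 0.1   # threshold chosen for sensitivity

tp = np.sum(pred & (y == 1)); fp = np.sum(pred & (y == 0)); fn = np.sum(~pred & (y == 1))
print(f"sensitivity = {tp / (tp + fn):.2f}, PPV = {tp / (tp + fp):.2f}")
```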
Results:
Overall, 49 (0.5%) of the 13,341 cardiac surgeries were classified as mediastinitis or endocarditis, and 83 (0.6%) of the 12,992 TJAs were classified as deep-incisional or organ-space SSIs. With at least 60% sensitivity, the PPVs of the SSI detection algorithms after cardiac surgeries and TJAs were 52.5% and 62.0%, respectively.
Conclusions:
Considering the low prevalence rate of SSIs, our algorithms were successful in identifying a majority of patients with a true SSI while simultaneously reducing false-positive cases. As a next step, validation of these algorithms in other hospital systems with EMRs will be needed.
Clostridioides difficile infection (CDI) is the most frequently reported hospital-acquired infection in the United States. Bioaerosols generated during toilet flushing are a possible mechanism for the spread of this pathogen in clinical settings.
Objective:
To measure the bioaerosol concentration from toilets of patients with CDI before and after flushing.
Design:
In this pilot study, bioaerosols were collected 0.15 m, 0.5 m, and 1.0 m from the rims of the toilets in the bathrooms of hospitalized patients with CDI. Inhibitory, selective media were used to detect C. difficile and other facultative anaerobes. Room air was collected continuously for 20 minutes with a bioaerosol sampler before and after toilet flushing. Wilcoxon rank-sum tests were used to assess the difference in bioaerosol production before and after flushing.
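A minimal sketch of the pre- versus post-flush comparison, assuming SciPy’s Mann-Whitney U implementation of the Wilcoxon rank-sum test; the counts below are placeholders, not study measurements:

```python
# Sketch: Wilcoxon rank-sum (Mann-Whitney U) comparison of bioaerosol counts
# collected before and after toilet flushing. The numbers are made up.
from scipy.stats import mannwhitneyu

preflush_counts  = [0, 1, 0, 2, 0, 1, 0, 0, 3, 1]   # counts per preflush sample
postflush_counts = [2, 4, 1, 5, 3, 2, 6, 1, 4, 3]   # counts per postflush sample

stat, p = mannwhitneyu(preflush_counts, postflush_counts, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p:.4f}")
```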
Setting:
Rooms of patients with CDI at University of Iowa Hospitals and Clinics.
Results:
Bacteria were positively cultured from 8 of 24 rooms (33%). In total, 72 preflush and 72 postflush samples were collected; 9 of the preflush samples (13%) and 19 of the postflush samples (26%) were culture positive for healthcare-associated bacteria. The predominant species cultured were Enterococcus faecalis, E. faecium, and C. difficile. Compared to the preflush samples, the postflush samples showed significant increases in the concentrations of the 2 large particle-size categories: 5.0 µm (P = .0095) and 10.0 µm (P = .0082).
Conclusions:
Bioaerosols produced by toilet flushing potentially contribute to hospital environmental contamination. Prevention measures (eg, toilet lids) should be evaluated as interventions to prevent toilet-associated environmental contamination in clinical settings.
We used a survey to characterize contemporary infection prevention and antibiotic stewardship program practices across 64 healthcare facilities, and we compared these findings to those of a similar 2013 survey. Notable findings include decreased frequency of active surveillance for methicillin-resistant Staphylococcus aureus, frequent active surveillance for carbapenem-resistant Enterobacteriaceae, and increased support for antibiotic stewardship programs.
To ascertain opinions regarding etiology and preventability of hospital-onset bacteremia and fungemia (HOB) and perspectives on HOB as a potential outcome measure reflecting quality of infection prevention and hospital care.
Design:
Cross-sectional survey.
Participants:
Hospital epidemiologists and infection preventionist members of the Society for Healthcare Epidemiology of America (SHEA) Research Network.
Methods:
A web-based, multiple-choice survey was administered via the SHEA Research Network to 133 hospitals.
Results:
A total of 89 surveys were completed (67% response rate). Overall, 60% of respondents defined HOB as a positive blood culture on or after hospital day 3. Central-line–associated bloodstream infections (CLABSIs) and intra-abdominal infections were perceived as the most frequent etiologies. Moreover, 61% thought that most HOB events are preventable, and 54% viewed HOB as a measure reflecting a hospital’s quality of care. Also, 29% of respondents’ hospitals already collect HOB data for internal purposes. Given a choice to publicly report CLABSIs and/or HOB, 57% favored reporting either HOB alone (22%) or in addition to CLABSI (35%), and 34% favored CLABSI alone.
Conclusions:
Among the majority of SHEA Research Network respondents, HOB is perceived as preventable, reflective of quality of care, and potentially acceptable as a publicly reported quality metric. Further studies on HOB are needed, including validation as a quality measure, assessment of risk adjustment, and formation of evidence-based bundles and toolkits to facilitate measurement and improvement of HOB rates.
Childhood adversity (CA) increases the risk of subsequent mental health problems. Adolescent social support (from family and/or friends) reduces the risk of mental health problems after CA. However, the mechanisms of this effect remain unclear, and we speculate that they are manifested at the neurodevelopmental level. Therefore, we investigated whether family and/or friendship support at ages 14 and 17 function as intermediate variables for the relationship between CA before age 11 and affective or neural responses to social rejection feedback at age 18. We studied 55 adolescents with normative mental health at age 18 (26 with CA and therefore considered “resilient”) from a longitudinal cohort. Participants underwent a Social Feedback Task in the magnetic resonance imaging scanner. Social rejection feedback activated the dorsal anterior cingulate cortex and the left anterior insula. CA did not predict affective or neural responses to social rejection at age 18. Yet, CA predicted better friendships at age 14 and age 18, when adolescents with and without CA had comparable mood levels. Thus, adolescents with CA and normative mood levels have more adolescent friendship support and seem to have normal mood and neural responses to social rejection.
A study on locally available composts in Austria, Germany, Italy and Switzerland was conducted to investigate the potential of these non-chemical tools to increase soil health in orchards afflicted by apple replant disease (ARD). A total of 26 different composts (six to seven per country) were chosen for the study. Composts were divided into ten types according to the waste materials used as substrates in the composting process. Growth reduction is the main symptom associated with replant disease; therefore, compost performance was evaluated based on the growth responses of apple rootstock plantlets in compost-amended soils in pots. These greenhouse trials were performed in one research station per country, located in an intensive apple-growing area, and soil was taken from an apple orchard affected by replant disease. Plant growth response was measured as shoot elongation at the end of each greenhouse trial, and results showed growth increases of 2–26% relative to the respective controls in 20 of the 26 composts evaluated. The heterogeneous nature of the composts most likely contributed to the finding that similar compost types originating from the different countries had varying effects on plant growth. Overall, no significant changes in chemical and biological properties were observed in amended soils compared with non-amended controls. The high soil resilience was in part expected given the good organic matter content in the original soils (>2%). The bacterial communities of the composts were investigated using the COMPOCHIP microarray, and analyses showed that differences in plant growth response were mainly attributable to the microbial changes introduced into the soil through composts rather than to changes in soil chemical and biological parameters. However, the bacterial communities of composts appeared to be more influenced by geographical origin than by compost type. The results show that soil amendment with composts generated from locally produced wastes has the potential to reduce the effects of ARD, although the effects appear to be both compost and soil specific.
Responses of weeds and sugarbeets (Beta vulgaris L.) to postemergence treatments of methyl m-hydroxycarbanilate m-methylcarbanilate (phenmedipham) and two analogues were evaluated in six field studies. Phenmedipham at 1.7 kg/ha controlled foxtail millet (Setaria italica (L.) Beauv.) and kochia (Kochia scoparia (L.) Schrad.) better than 2.2 kg/ha of methyl m-hydroxycarbanilate carbanilate and ethyl m-hydroxycarbanilate carbanilate. Pigweed (Amaranthus spp.) was controlled better by the analogues at 1.1 kg/ha than by phenmedipham. The foliar growth of sugarbeets was generally suppressed more by the analogues than by phenmedipham, but injury was not considered detrimental at 1.1 kg/ha. Yield of sugarbeet roots and sugar was reduced by 7% or less by phenmedipham at rates of 1.1 to 4.5 kg/ha, but these yield reductions were associated primarily with the failure of phenmedipham to completely control all weeds for 5 to 9 weeks after treatment.
In a 2-year study, five densities of kochia (Kochia scoparia (L.) Schrad.) competed with sugarbeets (Beta vulgaris L.) for the entire growing season. The yield of sugarbeet roots and pounds of sucrose per acre decreased as the intensity of competition from kochia plants increased. One kochia plant per 25 ft of row reduced the average yield of roots by 2.6 T/A and sucrose by 960 lb/A. A density of one kochia plant, or more, per 2 ft of row in the first year, or one kochia plant per 1 ft of row in the second year, significantly reduced the sucrose content of the roots. Individual kochia plants significantly reduced the weight of sugarbeet tops and roots within a radius of 31 inches from the center of the sugarbeet row.
In a 2-year study, kochia [Kochia scoparia (L.) Schrad.] competed with sugarbeets (Beta vulgaris L.) for various periods after emergence. Kochia then was removed and plants that emerged subsequently were controlled by hand-weeding. Yield of sugarbeet roots was reduced when kochia competed for more than 5 or 6 weeks. Kochia that competed with sugarbeets for the entire season reduced yields more than 95%. In another study, kochia was controlled by frequent hand-weeding for different periods after sugarbeet emergence. When kochia was controlled initially for 3 to 4 weeks, yield of sugarbeets was not reduced. Some kochia emerged following the 3 to 4 weeks of hand-weeding. However, competition from sugarbeets reduced the weight of this kochia at harvest by 59 and 92%.
The performance of four sequential weed management models that assumed either low or high risk was compared to the performance of two sugarbeet consultants, one who assumed low risk and the other high risk. Weed management recommendations were performed over one growing season at two locations, each with several levels of weed populations. Recommendations for preplant, postemergence, and layby herbicide treatments or late-season handweeding differed among the four weed management levels. The high-risk management level was labor intensive and the low-risk management level was herbicide intensive. Weed populations at harvest, recoverable sucrose, and net return above weed control costs were not different among the four weed management levels. Weeds can be controlled in sugarbeets by employing weed management practices based on bioeconomic modeling.
The question of when to control weeds traditionally has been approached with the calculation of critical periods (CP) based on crop yields. The concept of economic critical period (ECP) and early (EEPT) and late (LEPT) economic period thresholds are presented as a comprehensive approach to answer the same question based on economic losses and costs of control. ECP is defined as the period when the benefit of controlling weeds is greater than its cost. EEPT and LEPT are the limits of the ECP and can be used to determine when first and last weed control measures should be performed. Calculation of the EEPT accounts for the economic losses due to weed competition that occur between planting and postemergence weed control. In this way it is possible to better evaluate the economic feasibility of using preplant or preemergence control tactics. The EEPT for DCPA application is analyzed in the context of onion production in Colorado. The EEPT for DCPA application was calculated from an empirical regression model that assessed the impact of weed load and time of weed removal on onion yields. The EEPT was affected by control efficacy, weed-free yield, DCPA cost, and onion price. DCPA application was economically advisable in only one of 20 fields analyzed because of the low DCPA efficacy (60%).
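A simple break-even sketch in the spirit of the EEPT: a preplant or preemergence treatment is economically advisable only if the value of the yield loss it prevents exceeds its cost. All numbers and the simplified loss model below are hypothetical; the paper instead derives losses from an empirical regression of onion yield on weed load and time of removal.

```python
# Sketch: break-even test for an early-season treatment, in the spirit of the
# early economic period threshold (EEPT). All inputs are hypothetical.
def treatment_is_advisable(weed_free_yield, expected_loss_fraction,
                           control_efficacy, crop_price, treatment_cost):
    """Return True if the value of the prevented yield loss exceeds the cost."""
    prevented_loss = weed_free_yield * expected_loss_fraction * control_efficacy
    return prevented_loss * crop_price > treatment_cost

# Hypothetical field: 40 t/ha weed-free yield, 10% expected early-season loss,
# 60% control efficacy, $200/t crop price, $400/ha treatment cost.
print(treatment_is_advisable(40.0, 0.10, 0.60, 200.0, 400.0))  # True: $480 > $400
```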