The stars of the Milky Way carry the chemical history of our Galaxy in their atmospheres as they journey through its vast expanse. Like barcodes, the chemical fingerprints of stars can be extracted from high-resolution spectroscopy. The fourth data release (DR4) of the Galactic Archaeology with HERMES (GALAH) Survey, based on a decade of observations, provides the chemical abundances of up to 32 elements for 917 588 stars that also have exquisite astrometric data from the Gaia satellite. For the first time, these elements include life-essential nitrogen to complement carbon and oxygen, as well as more measurements of rare-earth elements critical to modern-life electronics, offering unparalleled insights into the chemical composition of the Milky Way. For this release, we use neural networks to simultaneously fit stellar parameters and abundances across the whole wavelength range, leveraging synthetic grids computed with Spectroscopy Made Easy. These grids account for atomic line formation in non-local thermodynamic equilibrium for 14 elements. In a two-iteration process, we first fit stellar labels to all 1 085 520 spectra, then co-add repeated observations and refine these labels using astrometric data from Gaia and 2MASS photometry, improving the accuracy and precision of stellar parameters and abundances. Our validation thoroughly assesses the reliability of spectroscopic measurements and highlights key caveats. GALAH DR4 represents yet another milestone in Galactic archaeology, combining detailed chemical compositions from multiple nucleosynthetic channels with kinematic information and age estimates. The resulting dataset, covering nearly a million stars, opens new avenues for understanding not only the chemical and dynamical history of the Milky Way but also the broader questions of the origin of elements and the evolution of planets, stars, and galaxies.
Underrepresentation of diverse populations in medical research undermines generalizability, exacerbates health disparities, and erodes trust in research institutions. This study aimed to identify a suitable survey instrument to measure trust in medical research among Black and Latino communities in Baltimore, Maryland.
Methods:
Based on a literature review, a committee selected two validated instruments for community evaluation: Perceptions of Research Trustworthiness (PoRT) and Trust in Medical Researchers (TiMR). Both were translated into Spanish through a standardized process. Thirty-four individuals participated in four focus groups (two in English, two in Spanish). Participants reviewed and provided feedback on the instruments’ relevance and clarity. Discussions were recorded, transcribed, and analyzed thematically.
Results:
Initial reactions to the instruments were mixed. While 68% found TiMR easier to complete, 74% preferred PoRT. Key discussion themes included the relevance of the instrument for measuring trust, clarity of the questions, and concerns about reinforcing negative perceptions of research. Participants felt that PoRT better aligned with the research goal of measuring community trust in research, though TiMR was seen as easier to understand. Despite PoRT’s lower reading level, some items were found to be more confusing than TiMR items.
Conclusion:
Community feedback highlighted the need to differentiate trust in medical research, researchers, and institutions. While PoRT and TiMR are acceptable instruments for measuring trust in medical research, refinement of both may be beneficial. Development and validation of instruments in multiple languages is needed to assess community trust in research and inform strategies to improve diverse participation in research.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it challenging to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support that controls, MDD and BPD patients primarily lie on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
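The polygenic risk scores used above are, at their core, effect-size-weighted sums of risk-allele counts. A minimal illustrative sketch follows; the SNP identifiers and weights are invented for demonstration, and real PRS pipelines use GWAS summary statistics across thousands to millions of variants with clumping or shrinkage applied.

```python
# Minimal polygenic risk score sketch: PRS = sum over SNPs of
# (GWAS effect size) x (risk-allele dosage). All values below are
# hypothetical and purely illustrative.
def polygenic_risk_score(dosages, weights):
    """dosages: dict snp -> allele count (0, 1, or 2); weights: dict snp -> beta."""
    return sum(weights[snp] * dose for snp, dose in dosages.items())

weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}  # hypothetical betas
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}             # hypothetical genotype
print(round(polygenic_risk_score(person, weights), 3))  # 0.19
```

In practice such scores are standardized within a cohort and entered into regression models (with ancestry covariates) to estimate discrimination between diagnostic groups.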
Diagnostic criteria for major depressive disorder allow for heterogeneous symptom profiles but genetic analysis of major depressive symptoms has the potential to identify clinical and etiological subtypes. There are several challenges to integrating symptom data from genetically informative cohorts, such as sample size differences between clinical and community cohorts and various patterns of missing data.
Methods
We conducted genome-wide association studies of major depressive symptoms in three cohorts that were enriched for participants with a diagnosis of depression (Psychiatric Genomics Consortium, Australian Genetics of Depression Study, Generation Scotland) and three community cohorts that were not recruited on the basis of diagnosis (Avon Longitudinal Study of Parents and Children, Estonian Biobank, and UK Biobank). We fit a series of confirmatory factor models with factors that accounted for how symptom data were sampled and then compared alternative models with different symptom factors.
Results
The best fitting model had a distinct factor for Appetite/Weight symptoms and an additional measurement factor that accounted for the skip-structure in community cohorts (use of Depression and Anhedonia as gating symptoms).
Conclusion
The results show the importance of assessing the directionality of symptoms (such as hypersomnia versus insomnia) and of accounting for study and measurement design when meta-analyzing genetic association data.
The origins and timing of inpatient room sink contamination with carbapenem-resistant organisms (CROs) are poorly understood.
Methods:
We performed a prospective observational study to describe the timing, rate, and frequency of CRO contamination of in-room handwashing sinks in 2 intensive care units (ICUs) in a newly constructed hospital bed tower. Study units, A and B, were opened to patient care in succession. The patients in unit A were moved to a new unit in the same bed tower, unit B. Each unit was similarly designed with 26 rooms and in-room sinks. Microbiological samples were taken every 4 weeks from 3 locations from each study sink: the top of the bowl, the drain cover, and the p-trap. The primary outcome was sink conversion events (SCEs), defined as CRO contamination of a sink in which CRO had not previously been detected.
Results:
Sink samples were obtained 22 times from September 2020 to June 2022, giving 1,638 total environmental cultures. In total, 2,814 patients were admitted to study units while sink sampling occurred. We observed 35 SCEs (73%) overall; 9 sinks (41%) in unit A became contaminated with CRO by month 10, and all 26 sinks became contaminated in unit B by month 7. Overall, 299 CRO isolates were recovered; the most common species were Enterobacter cloacae and Pseudomonas aeruginosa.
Conclusion:
CRO contamination of sinks in 2 newly constructed ICUs was rapid and cumulative. Our findings support in-room sinks as reservoirs of CRO and emphasize the need for prevention strategies to mitigate contamination of hands and surfaces from CRO-colonized sinks.
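The sink conversion event outcome above reduces to finding, per sink, the first sampling round in which a CRO is detected. A small sketch of that logic, on invented data (sink IDs and results are hypothetical, not the study's):

```python
# Sketch of identifying sink conversion events (SCEs): a sink "converts"
# the first time a CRO is detected at any of its sampled sites.
def sink_conversion_month(samples):
    """samples: list of (month, cro_detected) tuples for one sink, in time order.
    Returns the first month with a CRO detection, or None if never detected."""
    for month, detected in samples:
        if detected:
            return month
    return None

sinks = {  # hypothetical surveillance data
    "A-101": [(1, False), (2, False), (3, True), (4, True)],
    "A-102": [(1, False), (2, False)],
}
conversions = {s: sink_conversion_month(obs) for s, obs in sinks.items()}
print(conversions)  # {'A-101': 3, 'A-102': None}
```

Counting the non-None values across all sinks gives the SCE total; later detections at an already-converted sink are not counted again.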
Various water-based heater-cooler devices (HCDs) have been implicated in nontuberculous mycobacteria outbreaks. Ongoing rigorous surveillance for healthcare-associated M. abscessus (HA-Mab) put in place following a prior institutional outbreak of M. abscessus alerted investigators to a cluster of 3 extrapulmonary M. abscessus infections among patients who had undergone cardiothoracic surgery.
Methods:
Investigators convened a multidisciplinary team and launched a comprehensive investigation to identify potential sources of M. abscessus in the healthcare setting. Adherence to tap water avoidance protocols during patient care and HCD cleaning, disinfection, and maintenance practices were reviewed. Relevant environmental samples were obtained. Patient and environmental M. abscessus isolates were compared using multilocus-sequence typing and pulsed-field gel electrophoresis. Smoke testing was performed to evaluate the potential for aerosol generation and dispersion during HCD use. The entire HCD fleet was replaced to mitigate continued transmission.
Results:
Clinical presentations of case patients and epidemiologic data supported intraoperative acquisition. M. abscessus was isolated from HCDs used on patients and molecular comparison with patient isolates demonstrated clonality. Smoke testing simulated aerosolization of M. abscessus from HCDs during device operation. Because the HCD fleet was replaced, no additional extrapulmonary HA-Mab infections due to the unique clone identified in this cluster have been detected.
Conclusions:
Despite adhering to HCD cleaning and disinfection strategies beyond manufacturer instructions for use, HCDs became colonized with and ultimately transmitted M. abscessus to 3 patients. Design modifications to better contain aerosols or filter exhaust during device operation are needed to prevent NTM transmission events from water-based HCDs.
This retrospective review of 4-year surveillance data revealed a higher central line-associated bloodstream infection (CLABSI) rate in non-Hispanic Black patients and higher catheter-associated urinary tract infection (CAUTI) rates in Asian and non-Hispanic Black patients compared with White patients despite similar catheter utilization between the groups.
Urine cultures collected from catheterized patients have a high likelihood of false-positive results due to colonization. We examined the impact of a clinical decision support (CDS) tool that includes catheter information on test utilization and patient-level outcomes.
Methods:
This before-and-after intervention study was conducted at 3 hospitals in North Carolina. In March 2021, a CDS tool was incorporated into urine-culture order entry in the electronic health record, providing education about indications for culture and suggesting catheter removal or exchange prior to specimen collection for catheters present >7 days. We used an interrupted time-series analysis with Poisson regression to evaluate the impact of CDS implementation on utilization of urinalyses and urine cultures, antibiotic use, and other outcomes during the pre- and postintervention periods.
Results:
The CDS tool was triggered for 38,361 urine-culture orders across all patients, including 2,133 catheterized patients during the postintervention study period. There was a significant decrease in urine-culture orders (1.4% decrease per month; P < .001) and in antibiotic use for UTI indications (2.3% decrease per month; P = .006), but there was no significant decline in CAUTI rates in the postintervention period. Clinicians opted for urinary catheter removal in 183 (8.5%) instances. Evaluation of the safety reporting system revealed no apparent increase in safety events related to catheter removal or reinsertion.
Conclusion:
CDS tools can aid in optimizing urine culture collection practices and can serve as a reminder for removal or exchange of long-term indwelling urinary catheters at the time of urine-culture collection.
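The "% decrease per month" figures reported above are the conventional interpretation of an exponentiated slope coefficient from a Poisson time-series model: percent change per month = (exp(beta) − 1) × 100. A minimal sketch of that conversion, using an illustrative coefficient rather than the study's actual fit:

```python
import math

# Convert a Poisson regression slope (log-scale, per month) into the
# "% change per month" reported in interrupted time-series analyses.
def monthly_pct_change(beta):
    """beta: fitted log incidence-rate slope per month (illustrative here)."""
    return (math.exp(beta) - 1) * 100

# A hypothetical slope of -0.0141 corresponds to about a 1.4% monthly decrease.
print(round(monthly_pct_change(-0.0141), 1))  # -1.4
```

Note that for small coefficients the percent change is close to 100 × beta, which is why log-scale slopes are often read off directly as approximate percentages.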
Sparse recent data are available on the epidemiology of surgical site infections (SSIs) in community hospitals. Our objective was to provide updated epidemiology data on complex SSIs in community hospitals and to characterize trends of SSI prevalence rates over time.
Design:
Retrospective cohort study.
Methods:
SSI data were collected from patients undergoing 26 commonly performed surgical procedures at 32 community hospitals in the southeastern United States from 2013 to 2018. SSI prevalence rates were calculated for each year and were stratified by procedure and causative pathogen.
Results:
Over the 6-year study period, 3,561 complex (deep incisional or organ-space) SSIs occurred following 669,467 total surgeries (prevalence rate, 0.53 infections per 100 procedures). The overall complex SSI prevalence rate did not change significantly during the study period: 0.58 per 100 procedures in 2013 versus 0.53 per 100 procedures in 2018 (prevalence rate ratio [PRR], 0.84; 95% CI, 0.66–1.08; P = .16). Methicillin-sensitive Staphylococcus aureus (MSSA) complex SSIs (n = 480, 13.5%) were more common than complex SSIs caused by methicillin-resistant S. aureus (MRSA; n = 363, 10.2%).
Conclusions:
The complex SSI rate did not decrease in our cohort of community hospitals from 2013 to 2018, which is a change from prior comparisons. The reason for this stagnation is unclear. Additional research is needed to determine the proportion of remaining SSIs that are preventable and what measures would be effective to further reduce SSI rates.
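The overall prevalence rate above is simple arithmetic on the reported counts, which is worth making explicit. (The PRR of 0.84, by contrast, comes from a model comparing 2013 with 2018, not from dividing the two crude rates.)

```python
# Crude prevalence-rate arithmetic from the figures reported above:
# 3,561 complex SSIs across 669,467 procedures.
ssis = 3561
procedures = 669_467
rate_per_100 = 100 * ssis / procedures  # infections per 100 procedures
print(round(rate_per_100, 2))  # 0.53
```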
After implementing a coronavirus disease 2019 (COVID-19) infection prevention bundle, the incidence rate ratio (IRR) of non–severe acute respiratory syndrome coronavirus 2 (non–SARS-CoV-2) hospital-acquired respiratory viral infection (HA-RVI) was significantly lower than the IRR from the pre–COVID-19 period (IRR, 0.322; 95% CI, 0.266–0.393; P < .01). However, HA-RVI incidence rates mirrored community RVI trends, suggesting that hospital interventions alone did not significantly affect HA-RVI incidence.
Relapse and recurrence of depression are common, contributing to the overall burden of depression globally. Accurate prediction of relapse or recurrence while patients are well would allow the identification of high-risk individuals and may effectively guide the allocation of interventions to prevent relapse and recurrence.
Aims
To review prognostic models developed to predict the risk of relapse, recurrence, sustained remission, or recovery in adults with remitted major depressive disorder.
Method
We searched the Cochrane Library (current issue); Ovid MEDLINE (1946 onwards); Ovid Embase (1980 onwards); Ovid PsycINFO (1806 onwards); and Web of Science (1900 onwards) up to May 2021. We included development and external validation studies of multivariable prognostic models. We assessed risk of bias of included studies using the Prediction model risk of bias assessment tool (PROBAST).
Results
We identified 12 eligible prognostic model studies (11 unique prognostic models): 8 model development-only studies, 3 model development and external validation studies and 1 external validation-only study. Multiple estimates of performance measures were not available and meta-analysis was therefore not possible. Eleven out of the 12 included studies were assessed as being at high overall risk of bias and none examined clinical utility.
Conclusions
Due to the high risk of bias of the included studies, poor predictive performance and limited external validation of the models identified, presently available clinical prediction models for relapse and recurrence of depression are not yet sufficiently developed for deployment in clinical settings. There is a need for improved prognosis research in this clinical area and future studies should conform to best practice methodological and reporting guidelines.
We performed surveillance for hospital-acquired COVID-19 (HA-COVID-19) and compared time-based, electronic definitions to real-time adjudication of the most likely source of acquisition. Without real-time adjudication, nearly 50% of HA-COVID-19 cases identified using electronic definitions were misclassified. Both electronic and traditional contact tracing methods likely underestimated the incidence of HA-COVID-19.
The paradoxical relationship between standardized infection ratio and standardized utilization ratio for catheter-associated urinary tract infections (CAUTIs) in contrast to central-line–associated bloodstream infections (CLABSIs), in addition to CAUTI definition challenges, incentivizes hospitals to focus their prevention efforts on urine culture stewardship rather than catheter avoidance and care.
To determine the impact of a documented penicillin or cephalosporin allergy on the development of surgical site infections (SSIs).
Background:
Appropriate preoperative antibiotic prophylaxis reduces SSI risk, but documented antibiotic allergies influence the choice of prophylactic agents. Few studies have examined the relationship between a reported antibiotic allergy and risk of SSI and to what extent this relationship is modified by the antibiotic class given for prophylaxis.
Methods:
We conducted a retrospective cohort study of adult patients undergoing coronary artery bypass, craniotomy, spinal fusion, laminectomy, hip arthroplasty and knee arthroplasty at 3 hospitals from July 1, 2013, to December 31, 2017. We built a multivariable logistic regression model to calculate the adjusted odds ratio (aOR) of developing an SSI among patients with and without patient-reported penicillin or cephalosporin allergies. We also examined effect measure modification (EMM) to determine whether surgical prophylaxis affected the association between reported allergy and SSI.
Results:
We analyzed 39,972 procedures; 1,689 (4.2%) involved a documented penicillin or cephalosporin allergy, and 374 (0.9%) resulted in an SSI. Patients with a reported penicillin or cephalosporin allergy were more likely to develop an SSI compared to patients who did not report an allergy to penicillin or cephalosporins (adjusted odds ratio, 3.26; 95% confidence interval, 2.71–3.93). Surgical prophylaxis did not have significant EMM on this association.
Conclusions:
Patients who reported a penicillin or cephalosporin allergy had higher odds of developing an SSI than nonallergic patients. However, the increase in odds is not completely mediated by the type of surgical prophylaxis. Instead, a reported allergy may be a surrogate marker for a more complicated patient population.
To determine the impact of electronic health record (EHR)–based interventions and test restriction on Clostridioides difficile tests (CDTs) and hospital-onset C. difficile infection (HO-CDI).
Design:
Quasi-experimental study in 3 hospitals.
Setting:
957-bed academic (hospital A), 354-bed (hospital B), and 175-bed (hospital C) academic-affiliated community hospitals.
Interventions:
Three EHR-based interventions were sequentially implemented: (1) alert when ordering a CDT if laxatives were administered within 24 hours (January 2018); (2) cancellation of CDT orders after 24 hours (October 2018); (3) contextual rule-driven order questions requiring justification when a laxative had been administered or EHR documentation of diarrhea was lacking (July 2019). In February 2019, hospital C implemented a gatekeeper intervention requiring approval for all CDTs after hospital day 3. The impact of the interventions on C. difficile testing and HO-CDI rates was estimated using an interrupted time-series analysis.
Results:
C. difficile testing was already declining in the preintervention period (annual change in incidence rate [IR], 0.79; 95% CI, 0.72–0.87) and did not decrease further with the EHR interventions. The laxative alert was temporally associated with a trend reduction in HO-CDI (annual change in IR from baseline, 0.85; 95% CI, 0.75–0.96) at hospitals A and B. The gatekeeper intervention at hospital C was associated with level (IRR, 0.50; 95% CI, 0.42–0.60) and trend reductions in C. difficile testing (annual change in IR, 0.91; 95% CI, 0.85–0.98) and level (IRR, 0.42; 95% CI, 0.22–0.81) and trend reductions in HO-CDI (annual change in IR, 0.68; 95% CI, 0.50–0.92) relative to the baseline period.
Conclusions:
Test restriction was more effective than EHR-based clinical decision support to reduce C. difficile testing in our 3-hospital system.
We reviewed the sustainability of a multifaceted intervention on catheter-associated urinary tract infection (CAUTI) in 3 intensive care units. During the 4-year postintervention period, we observed reductions in urine culture rates (from 80.9 to 47.5 per 1,000 patient days; P < .01), catheter utilization (from 0.68 to 0.58; P < .01), and CAUTI incidence rates (from 1.7 to 0.8 per 1,000 patient days; P = .16).
HIV-associated neurocognitive disorders (HANDs) are prevalent in older people living with HIV (PLWH) worldwide. HAND prevalence and incidence studies of the newly emergent population of combination antiretroviral therapy (cART)-treated older PLWH in sub-Saharan Africa are currently lacking. We aimed to estimate HAND prevalence and incidence using robust measures in stable, cART-treated older adults under long-term follow-up in Tanzania and report cognitive comorbidities.
Design:
Longitudinal study
Participants:
A systematic sample of consenting HIV-positive adults aged ≥50 years attending routine clinical care at an HIV Care and Treatment Centre during March–May 2016 and followed up March–May 2017.
Measurements:
HAND by consensus panel Frascati criteria based on detailed locally normed low-literacy neuropsychological battery, structured neuropsychiatric clinical assessment, and collateral history. Demographic and etiological factors by self-report and clinical records.
Results:
In this cohort (n = 253, 72.3% female, median age 57), HAND prevalence was 47.0% (95% CI 40.9–53.2, n = 119) despite well-managed HIV disease (median CD4 count 516 (range 98–1719), 95.5% on cART). Of these, 64 (25.3%) had asymptomatic neurocognitive impairment, 46 (18.2%) mild neurocognitive disorder, and 9 (3.6%) HIV-associated dementia. One-year incidence was high (37.2%, 95% CI 25.9–51.8), but some reversibility (17.6%, 95% CI 10.0–28.6, n = 16) was observed.
Conclusions:
HAND appear highly prevalent in older PLWH in this setting, where the demographic profile differs markedly from high-income cohorts, and comorbidities are frequent. Incidence and reversibility also appear high. Future studies should focus on etiologies and potentially reversible factors in this setting.
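The prevalence estimate above (119/253 = 47.0%) and its confidence interval can be recomputed from the reported counts; a Wilson score interval gives numbers very close to the published 40.9–53.2 (the authors may have used a slightly different method, so this is an illustrative recomputation, not their code):

```python
import math

# Wilson 95% score interval for a binomial proportion, applied to the
# HAND prevalence counts reported above (119 of 253 participants).
def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(119, 253)
print(round(100 * 119 / 253, 1))          # 47.0
print(round(100 * lo, 1), round(100 * hi, 1))
```

The Wilson interval is preferred over the simple normal approximation for proportions near 0.5 with moderate n, as here.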
We describe the frequency of pediatric healthcare-associated infections (HAIs) identified through prospective surveillance in community hospitals participating in an infection control network. Over a 6-year period, 84 HAIs were identified. Of these, 51 (61%) were pediatric central-line–associated bloodstream infections, which most often occurred in children <1 year of age.