In order to cast a satisfying vote, understand politics, or otherwise participate in political discourse or processes, voters must have some idea of what policies parties are pursuing and, more generally, 'who goes with whom.' This Element aims to both advance the study of how voters formulate and update their perceptions of party brands and persuade our colleagues to join us in studying these processes. To make this endeavor more enticing, but no less rigorous, the authors make three contributions to this emerging field of study: presenting a framework for building and interrogating theoretical arguments, aggregating a large, comprehensive data archive, and recommending a parsimonious strategy for statistical analysis. In the process, they provide a definition for voters' perceptions of party brands and an analytical schema to study them, attempt to contextualize and rationalize some competing findings in the existing literature, and derive and test several new hypotheses.
Posttraumatic stress disorder (PTSD) has been associated with advanced epigenetic age cross-sectionally, but the association between these variables over time is unclear. This study conducted meta-analyses to test whether new-onset PTSD diagnosis and changes in PTSD symptom severity over time were associated with changes in two metrics of epigenetic aging measured at two time points.
Methods
We conducted meta-analyses of the association between change in PTSD diagnosis and symptom severity and change in epigenetic age acceleration/deceleration (age-adjusted DNA methylation age residuals as per the Horvath and GrimAge metrics) using data from 7 military and civilian cohorts participating in the Psychiatric Genomics Consortium PTSD Epigenetics Workgroup (total N = 1,367).
Results
Meta-analysis revealed that the interaction between Time 1 (T1) Horvath age residuals and new-onset PTSD over time was significantly associated with Horvath age residuals at T2 (meta β = 0.16, meta p = 0.02, p-adj = 0.03). The interaction between T1 Horvath age residuals and changes in PTSD symptom severity over time was significantly related to Horvath age residuals at T2 (meta β = 0.24, meta p = 0.05). No associations were observed for GrimAge residuals.
Conclusions
Results indicated that individuals who developed new-onset PTSD or showed increased PTSD symptom severity over time evidenced greater epigenetic age acceleration at follow-up than would be expected based on baseline age acceleration. This suggests that PTSD may accelerate biological aging over time and highlights the need for intervention studies to determine if PTSD treatment has a beneficial effect on the aging methylome.
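The pooling step in the Methods — combining each cohort's regression coefficient for the interaction of baseline age residuals with PTSD change — can be sketched as a fixed-effect inverse-variance meta-analysis. This is a simplified illustration: the workgroup's actual weighting scheme is not specified here, and the cohort coefficients below are hypothetical.

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance pooling of per-cohort coefficients.

    Returns the pooled beta, its standard error, and a two-sided p-value
    from the normal approximation.
    """
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from |z|
    return beta, se, p

# Hypothetical per-cohort interaction coefficients and standard errors
betas = [0.21, 0.12, 0.18]
ses = [0.08, 0.10, 0.09]
pooled_beta, pooled_se, pooled_p = inverse_variance_meta(betas, ses)
```

Cohorts with smaller standard errors receive proportionally more weight, which is why a single large cohort can dominate the meta-analytic estimate.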
SHEA, in partnership with ASGE, APIC, AAMI, AORN, HSPA, IDSA, SGNA, and The Joint Commission, developed this multisociety infection prevention guidance document for individuals and organizations that engage in sterilization or high-level disinfection (HLD). This document follows the CDC Guideline for Disinfection and Sterilization in Healthcare Facilities. This guidance is based on a synthesis of published scientific evidence, theoretical rationale, current practices, practical considerations, writing group consensus, and consideration of potential harm when applicable. The supplementary material includes a summary of recommendations. The guidance provides an overview of the Spaulding Classification and considerations around manufacturers’ instructions for use (MIFUs). Its recommendations address: point-of-use treatment prior to sterilization or HLD, preparation of reusable medical devices at the location of processing, sterilization, and immediate use steam sterilization (IUSS), HLD of lumened and non-lumened devices, processing of reusable medical devices used with lubricating or defoaming agents, monitoring for effectiveness of processing, handling of devices after HLD, augments and alternatives to HLD, processing of investigational devices, tracking of reusable medical devices, and approaches to implementation.
Commercializing targeted sprayer systems allows producers to reduce herbicide inputs but risks the possibility of not treating emerging weeds. Currently, targeted applications with the John Deere system allow for five spray sensitivity settings, and no published literature discusses the impact of these settings on detecting and spraying weeds of varying species, sizes, and positions in crops. Research was conducted in AR, IL, IN, MS, and NC in corn, cotton, and soybean to determine how various factors might influence the ability of targeted applications to treat weeds. The data included 21 weed species aggregated into six classes, with heights, widths, and densities ranging from 0.25 to 25 cm, 0.25 to 25 cm, and 0.04 to 14.3 plants m-2, respectively. Crop and weed density did not influence the likelihood of treating the weeds. As expected, the sensitivity setting altered the ability to treat weeds. Targeted applications (across sensitivity settings, median weed height and width, and density of 2.4 plants m-2) resulted in a treatment success of 99.6% to 84.4%, 99.1% to 68.8%, 98.9% to 62.9%, 99.1% to 70.3%, 98.0% to 48.3%, and 98.5% to 55.8% for Convolvulaceae, decumbent broadleaf weeds, Malvaceae, Poaceae, Amaranthaceae, and yellow nutsedge, respectively. Reducing the sensitivity setting reduced the ability to treat weeds. Weed size aided targeted application success, with larger weeds more readily detected and treated. Based on these findings, various conditions could impact the outcome of targeted multi-nozzle applications. Additionally, the analyses highlight some of the parameters to consider when using these technologies.
Objectives/Goals: This study demonstrates the utility of the CBID biodesign process for identifying and prioritizing high-impact neurosurgical needs. The research emphasizes the process’s role in developing innovative medical technologies that align with the healthcare ecosystem’s demands and stakeholder priorities. Methods/Study Population: The CBID Spiral Innovation Model, integrating clinical, technical, business, and strategic considerations across clinical challenges in neurosurgery, was employed over a 15-week period at a tertiary care center. The process involved three phases: (1) needs identification through 8 weeks of clinical immersion, (2) 7–8 weeks of stakeholder engagement via informational interviews, surveys, and conferences, and (3) iterative refinement based on evidence generation and market value. Stakeholders included over 70 clinicians (neurosurgeons, neurocritical care specialists, neurologists, etc.) across 15 institutions as well as more than 10 payers and hospital administrators. Data collection encompassed direct observation, structured interviews, and comprehensive literature review. Results/Anticipated Results: The initial list of 300+ identified neurosurgical needs was reduced to 271 after clinician and market input. High-level market and clinical evidence assessments further reduced this to 74 needs. Finally, through iterative evaluation of evidence generation, market opportunity, and stakeholder feedback, five critical unmet needs in stroke, traumatic brain injury, hydrocephalus, and epilepsy were identified for technological innovation. These needs met the criteria for clinical importance, economic viability, and market accessibility. The findings highlight the effectiveness of the biodesign process in creating a roadmap for innovation that is both clinically relevant and commercially viable. Discussion/Significance of Impact: This study underscores the effectiveness of structured need-finding and prioritization within neurosurgery.
Integrating stakeholder perspectives and rigorous analysis, it provides a replicable framework for medical innovation to accelerate the development of impactful solutions across medicine.
Interprofessional teams in the pediatric cardiac ICU consolidate their management plans in pre-family meeting huddles, a process that affects the course of family meetings but often lacks optimal communication and teamwork.
Methods:
Cardiac ICU clinicians participated in an interprofessional intervention to improve how they prepared for and conducted family meetings. We conducted a pretest–posttest study with clinicians participating in huddles before family meetings. We assessed the feasibility of clinician enrollment, the acceptability of the intervention to clinicians (via questionnaire and semi-structured interviews), and the impact on team performance (using a validated tool). A Wilcoxon rank sum test assessed the intervention’s impact on team performance at the meeting level, comparing pre- and post-intervention data.
Results:
In total, 24 clinicians enrolled in the intervention (92% retention) with 100% completion of training. All participants would recommend cardiac ICU Teams and Loved ones Communicating to others, and 96% believe it improved their participation in family meetings. We exceeded an acceptable level of protocol fidelity (>75%). Team performance was significantly (p < 0.001) higher in post-intervention huddles (n = 30) than in pre-intervention huddles (n = 28) in all domains. Median comparisons: Team Structure [2 vs. 5], Leadership [3 vs. 5], Situation Monitoring [3 vs. 5], Mutual Support [3 vs. 5], and Communication [3 vs. 5].
Conclusion:
Implementing an interprofessional team intervention to improve team performance in pre-family meeting huddles is feasible, acceptable, and improves team function. Future research should further assess impact on clinicians, patients, and families.
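The pre/post comparison of huddle-level team performance scores described above uses a Wilcoxon rank sum test. A minimal stdlib sketch with the normal approximation looks like the following (a simplification: no tie correction is applied to the variance, and the scores used in testing are hypothetical):

```python
import math

def rank_sum_test(a, b):
    """Wilcoxon rank-sum (Mann-Whitney U) test, normal approximation.

    Ties receive average ranks; the variance omits the tie correction,
    so this is an illustrative sketch rather than a full implementation.
    Returns the U statistic for sample `a` and a two-sided p-value.
    """
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1..j
        i = j
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[x] for x in a)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from |z|
    return u1, p
```

In practice a library implementation (e.g. SciPy's `mannwhitneyu`) would be preferred, since it handles tie corrections and exact small-sample p-values.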
This study evaluated the impact of 2015/2016 prescribing guidance on antidepressant prescribing choices in children.
Methods
A retrospective e-cohort study of whole population routine electronic healthcare records was conducted. Poisson regression was undertaken to explore trends over time for depression, antidepressant prescribing, indications and secondary care contacts. Time trend analysis was conducted to assess the impact of guidance.
Results
A total of 643 322 primary care patients in Wales, UK, aged 6–17 years from 2010 to 2019 contributed 3 215 584 person-years of follow-up. Adjusted incidence of depression more than doubled (IRR for 2019 = 2.8 [2.5–3.2]), with similar trends seen for antidepressants. Fluoxetine was the most frequently prescribed first-line antidepressant. Citalopram comprised less than 5% of first prescriptions in younger children but 22.9% (95% CI 22.0–23.8) in 16–17-year-olds. Approximately half of new antidepressant prescribing was associated with depression. Segmented regression analysis showed that prescriptions of ‘all’ antidepressants, fluoxetine, and sertraline were increasing before the guidance. This upward trend flattened for both ‘all’ antidepressants and fluoxetine and steepened for sertraline. Citalopram prescribing was decreasing significantly before the guidance was issued, with no significant change afterward.
Conclusions
Targeted intervention is needed to address rising rates of depression in children. Practitioners are partially adhering to local and national guidance. The decision-making process behind prescribing choices is likely to be multi-factorial. Activities to support implementation of guidance should be adopted in relation to safety in prescribing of antidepressants in children including timely availability of talking therapies and specialist mental health services.
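The segmented regression reported above — testing for level and slope changes at the point the guidance was issued — can be sketched as an ordinary least squares fit to an interrupted time series design matrix. This is a simplified illustration: the study analyses counts, for which a Poisson log-link model would normally be used, and the data below are synthetic.

```python
def fit_ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    A = [XtX[i][:] + [Xty[i]] for i in range(k)]  # augmented matrix
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]  # partial pivoting
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (A[i][k] - sum(A[i][j] * beta[j]
                                 for j in range(i + 1, k))) / A[i][i]
    return beta

def segmented_design(times, breakpoint):
    """Columns: intercept, pre-existing trend, post-guidance level change,
    post-guidance slope change."""
    return [[1.0, float(t),
             1.0 if t >= breakpoint else 0.0,
             float(t - breakpoint) if t >= breakpoint else 0.0]
            for t in times]
```

The fourth coefficient is the change in slope after the breakpoint: a flattening trend appears as a negative value roughly cancelling the pre-existing slope, a steepening trend as a positive one.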
Whole genome sequencing (WGS) can help identify transmission of pathogens causing healthcare-associated infections (HAIs). However, the current gold standard of short-read, Illumina-based WGS is labor and time intensive. Given recent improvements in long-read Oxford Nanopore Technologies (ONT) sequencing, we sought to establish a low resource approach providing accurate WGS-pathogen comparison within a time frame allowing for infection prevention and control (IPC) interventions.
Methods:
WGS was prospectively performed on pathogens at increased risk of potential healthcare transmission using the ONT MinION sequencer with R10.4.1 flow cells and Dorado basecaller. Potential transmission was assessed via Ridom SeqSphere+ for core genome multilocus sequence typing and MINTyper for reference-based core genome single nucleotide polymorphisms using previously published cutoff values. The accuracy of our ONT pipeline was determined relative to Illumina.
Results:
Over a six-month period, 242 bacterial isolates from 216 patients were sequenced by a single operator. Compared to the Illumina gold standard, our ONT pipeline achieved a mean identity score of Q60 for assembled genomes, even with coverage as low as 40×. The mean time from initiating DNA extraction to complete analysis was 2 days (IQR 2–3.25 days). We identified five potential transmission clusters comprising 21 isolates (8.7% of sequenced strains). When ONT results were integrated with epidemiological data, >70% (15/21) of putative transmission cluster isolates originated from patients with potential healthcare transmission links.
Conclusions:
Via a stand-alone ONT pipeline, we detected potentially transmitted HAI pathogens rapidly and accurately, aligning closely with epidemiological data. Our low-resource method has the potential to assist in IPC efforts.
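The transmission-cluster step — grouping isolates whose pairwise core-genome SNP distance falls under a published cutoff — can be sketched as single-linkage clustering over pairwise distances. This is illustrative only: real cutoffs are species-specific, the pipeline uses cgMLST (SeqSphere+) and MINTyper rather than this toy comparison, and the sequences and isolate names below are hypothetical.

```python
def snp_distance(seq_a, seq_b):
    """Count positions where two aligned core-genome sequences differ."""
    return sum(1 for x, y in zip(seq_a, seq_b) if x != y)

def cluster_by_cutoff(genomes, cutoff):
    """Single-linkage clustering: isolates within `cutoff` SNPs of any
    cluster member share a cluster. `genomes` maps isolate name -> sequence.
    """
    names = list(genomes)
    parent = {n: n for n in names}

    def find(n):  # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if snp_distance(genomes[a], genomes[b]) <= cutoff:
                parent[find(a)] = find(b)  # merge clusters

    clusters = {}
    for n in names:
        clusters.setdefault(find(n), []).append(n)
    return list(clusters.values())
```

Single linkage mirrors how outbreak clusters are usually defined: one close pair is enough to join two groups, so clusters can chain together as more isolates are sequenced.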
Galaxy Zoo is an online project to classify morphological features in extragalactic imaging surveys through public voting. In this paper, we compare the classifications made for two different surveys, the Dark Energy Spectroscopic Instrument (DESI) imaging survey and part of the Kilo-Degree Survey (KiDS), in the equatorial fields of the Galaxy And Mass Assembly (GAMA) survey. Our aim is to cross-validate and compare the classifications based on different imaging quality and depth. We find that the voting generally agrees globally but with substantial scatter, that is, substantial differences for individual galaxies. There is a notably higher voting fraction in favour of ‘smooth’ galaxies in the DESI+zoobot classifications, most likely due to the difference in imaging depth. DESI imaging is shallower and of slightly lower resolution than KiDS, so the Galaxy Zoo images do not reveal details such as disc features, which are therefore missed in the zoobot training sample. We check against expert visual classifications and find good agreement with KiDS-based Galaxy Zoo voting. We reproduce the results from Porter-Temple+ (2022) on the dependence of stellar mass, star formation, and specific star formation on the number of spiral arms. This shows that, once corrected for redshift, the DESI Galaxy Zoo and KiDS Galaxy Zoo classifications agree well on population properties. The zoobot cross-validation increases confidence in its ability to complement Galaxy Zoo classifications and in its capacity for transfer learning across surveys.
A daily prompt to offer vaccination to inpatients awaiting transfer to rehabilitation resulted in increased SARS-CoV-2 (OR 5.64, 95% CI 3.3–10.15; P < 0.001) and influenza (OR 3.80, 95% CI 2.45–6.06; P < 0.001) vaccination. Compared to baseline, this intervention was associated with reduced incidence of viral respiratory infection during subsequent admission to rehabilitation.
Residual blood specimens collected at health facilities may be a source of samples for serosurveys of adults, a population often neglected in community-based serosurveys. Anonymized residual blood specimens were collected from individuals 15–49 years of age attending two sub-district hospitals in Palghar District, Maharashtra, from November 2018 to March 2019. Specimens were also collected from women 15–49 years of age enrolled in a cross-sectional, community-based serosurvey representative at the district level that was conducted 2–7 months after the residual specimen collection. Specimens were tested for IgG antibodies to measles and rubella viruses. Measles and rubella seroprevalence estimates using facility-based specimens were 99% and 92%, respectively, with men having significantly lower rubella seropositivity than women. Age-specific measles and rubella seroprevalence estimates were similar between the two specimen sources. Although measles seropositivity was slightly higher among adults attending the facilities, both facility and community measles seroprevalence estimates were 95% or higher. The similarity in measles and rubella seroprevalence estimates between the community-based and facility serosurveys highlights the potential value of residual specimens to approximate community seroprevalence.
Residual blood specimens provide a sample repository that could be analyzed to estimate and track changes in seroprevalence with fewer resources than household-based surveys. We conducted parallel facility and community-based cross-sectional serological surveys in two districts in India, Kanpur Nagar District, Uttar Pradesh, and Palghar District, Maharashtra, before and after a measles-rubella supplemental immunization activity (MR-SIA) from 2018 to 2019. Anonymized residual specimens from children 9 months to younger than 15 years of age were collected from public and private diagnostic laboratories and public hospitals and tested for IgG antibodies to measles and rubella viruses. Significant increases in seroprevalence were observed following the MR-SIA using the facility-based specimens. Younger children whose specimens were tested at a public facility in Kanpur Nagar District had significantly lower rubella seroprevalence prior to the SIA compared to those attending a private hospital, but this difference was not observed following the SIA. Similar increases in rubella seroprevalence were observed in facility-based and community-based serosurveys following the MR-SIA, but trends in measles seroprevalence were inconsistent between the two specimen sources. Despite challenges with representativeness and limited metadata, residual specimens can be useful in estimating seroprevalence and assessing trends through facility-based sentinel surveillance.
New machine-vision technologies like the John Deere See & Spray™ could provide the opportunity to reduce herbicide use by detecting weeds and target-spraying herbicides simultaneously. Experiments were conducted for 2 yr in Keiser, AR, and Greenville, MS, to compare residual herbicide timings and targeted spray applications versus traditional broadcast herbicide programs in glyphosate/glufosinate/dicamba-resistant soybean. Treatments utilized consistent herbicides and rates with a preemergence (PRE) application followed by an early postemergence (EPOST) dicamba application followed by a mid-postemergence (MPOST) glufosinate application. All treatments included a residual at PRE and excluded or included a residual EPOST and MPOST. Additionally, the herbicide application method was considered, with traditional broadcast applications, broadcast residual + targeted applications of postemergence herbicides (dual tank), or targeted applications of all herbicides (single tank). Targeted applications provided comparable control to broadcast applications with a ≤1% decrease in efficacy and overall control ≥93% for Palmer amaranth, broadleaf signalgrass, morningglory species, and purslane species. Additionally, targeted sprays slightly reduced soybean injury by at most 5 percentage points across all evaluations, and these effects did not translate to a yield increase at harvest. The relationship between weed area and targeted sprayed area also indicates that nozzle angle can influence potential herbicide savings, with narrower nozzle angles spraying less area. On average, targeted sprays saved between 28.4% and 62.4% of postemergence herbicides. On the basis of these results, with specific machine settings, targeted application programs could reduce the amount of herbicide applied while providing weed control comparable to that of traditional broadcast applications.
Motor neuron disease (MND) is a progressive, fatal, neurodegenerative condition that affects motor neurons in the brain and spinal cord, resulting in loss of the ability to move, speak, swallow and breathe. Acceptance and commitment therapy (ACT) is an acceptance-based behavioural therapy that may be particularly beneficial for people living with MND (plwMND). This qualitative study aimed to explore plwMND’s experiences of receiving adapted ACT, tailored to their specific needs, and therapists’ experiences of delivering it.
Method:
Semi-structured qualitative interviews were conducted with plwMND who had received up to eight 1:1 sessions of adapted ACT and therapists who had delivered it within an uncontrolled feasibility study. Interviews explored experiences of ACT and how it could be optimised for plwMND. Interviews were audio recorded, transcribed and analysed using framework analysis.
Results:
Participants were 14 plwMND and 11 therapists. Data were coded into four over-arching themes: (i) an appropriate tool to navigate the disease course; (ii) the value of therapy outweighing the challenges; (iii) relevance to the individual; and (iv) involving others. These themes highlighted that ACT was perceived to be acceptable by plwMND and therapists, and many participants reported or anticipated beneficial outcomes in the future, despite some therapeutic challenges. They also highlighted how individual factors can influence experiences of ACT, and the potential benefit of involving others in therapy.
Conclusions:
Qualitative data supported the acceptability of ACT for plwMND. Future research and clinical practice should address expectations and personal relevance of ACT to optimise its delivery to plwMND.
Key learning aims
(1) To understand the views of people living with motor neuron disease (plwMND) and therapists on acceptance and commitment therapy (ACT) for people living with this condition.
(2) To understand the facilitators of and barriers to ACT for plwMND.
(3) To learn whether ACT that has been tailored to meet the specific needs of plwMND needs to be further adapted to potentially increase its acceptability to this population.
Head and neck squamous cell carcinomas (HNSCCs) are aggressive tumours lacking a standardised timeline for treatment initiation post-diagnosis. Delays beyond 60 days are linked to poorer outcomes and higher recurrence risk.
Methods:
A retrospective review was conducted on patients over 18 with HNSCC treated with (chemo)radiation at a rural tertiary care centre (September 2020–2022). Data on patient demographics, oncologic characteristics, treatment details and delay causes were analysed using SPSS.
Results:
Out of 93 patients, 35.5% experienced a time to treatment initiation (TTI) of more than 60 days. Median TTI was 73 days for delayed cases, compared to 41.5 days otherwise. No significant differences in demographics or cancer characteristics were observed between groups. The primary reasons for delay were care coordination (69.7%) and patient factors (18.2%). AJCC cancer stage showed a trend towards longer delays in advanced stages.
Conclusion:
One-third of patients faced delayed TTI, primarily due to care coordination and lack of social support. These findings highlight the need for improved multidisciplinary communication and patient support mechanisms, suggesting potential areas for quality improvement in HNSCC treatment management.
Creating a sustainable residency research program is necessary to develop a durable research pipeline, as highlighted by the recent Society for Academic Emergency Medicine 2024 Consensus Conference. We sought to describe the implementation of a novel, immersive research program for first-year emergency medicine residents. We describe the curriculum development, rationale, implementation process, and lessons learned from the implementation of a year-long research curriculum for first-year residents. We further evaluated residents’ perceptions of confidence in research methodology, interest in research, and the importance of their research experience through a 32-item survey. In two cohorts, 25 first-year residents completed the program. All residents met their scholarly project requirements by the end of their first year. Two conference abstracts and one peer-reviewed manuscript were accepted, and another is currently under review. Survey responses indicated an increase in residents’ perceived confidence in research methodology, though interpretation was limited by the small sample size. In summary, this novel resident research curriculum demonstrated a standardized, reproducible, and sustainable approach to providing residents with an immersive research program.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither preregistered analyses with North American and UK English, nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
To examine the relationship between race and ethnicity and central line-associated bloodstream infections (CLABSI) while accounting for inherent differences in CLABSI risk related to central venous catheter (CVC) type.
Design:
Retrospective cohort analysis.
Setting:
Acute care facilities within an academic healthcare system.
Patients:
Adult inpatients from January 2012 through December 2017 with a CVC present for ≥2 contiguous days.
Methods:
We describe variability in demographics, comorbidities, CVC type/configuration, and CLABSI rate by patient’s race and ethnicity. We estimated the unadjusted risk of CLABSI for each demographic and clinical characteristic and then modelled the effect of race on time to CLABSI, adjusting for total parenteral nutrition use and CVC type. We also performed exploratory analysis replacing race and ethnicity with social vulnerability index (SVI) metrics.
Results:
32,925 patients with 57,642 CVC episodes met inclusion criteria, most of which (51,348, 89%) were among non-Hispanic White or non-Hispanic Black patients. CVC types differed between race/ethnicity groups. However, after adjusting for CVC type, configuration, and indication in a Cox regression model, the risk of CLABSI among non-Hispanic Black patients did not differ significantly from that among non-Hispanic White patients (adjusted hazard ratio [aHR] 1.19; 95% confidence interval [CI]: 0.94, 1.51). The odds of CLABSI among the most vulnerable SVI subset did not differ from those among the less vulnerable subset (odds ratio [OR] 0.95; 95% CI: 0.75–1.20).
Conclusions:
We did not find a difference in CLABSI risk between non-Hispanic White and non-Hispanic Black patients when adjusting for CLABSI risk inherent in type and configuration of CVC.
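The unadjusted SVI comparison reported above reduces to an odds ratio with a Wald confidence interval from a 2×2 table. A minimal sketch (the counts used for testing are hypothetical; the study's primary analysis was the adjusted Cox model, which would require a survival-analysis library):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Assumes all cells are non-zero (otherwise a continuity
    correction would be needed).
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

A CI that straddles 1.0, as in the SVI result (0.75–1.20), indicates no detectable difference in odds between the groups.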