Almost 12% of the human population have insufficient access to food and hence are at risk of nutrient deficiencies and related conditions, such as anaemia and stunting. Ruminant meat and milk are rich in protein and micronutrients, making them a highly nutritious food source for human consumption. Conversely, ruminant production contributes to methane (CH4) emissions, a greenhouse gas (GHG) with a global warming potential (GWP) 27–30 times greater than that of carbon dioxide (CO2). Nonetheless, ruminant production plays a crucial role in the circular bioeconomy by upcycling agricultural products that cannot be consumed by humans into valuable and nutritious food, whilst delivering important ecosystem services. Given the complexities of ruminant production and the need to improve both human and planetary health, there is increasing emphasis on developing innovative solutions to achieve sustainable ruminant production within the ‘One Health’ framework. Specifically, research and innovation will undoubtedly continue to focus on (1) genetics and breeding, (2) animal nutrition and (3) animal health, to achieve food security and human health whilst limiting environmental impact. Implementation of the resulting innovations within the agri-food sector will require several enablers, including large-scale investment, multi-actor partnerships, scaling, regulatory approval and, importantly, social acceptability. This review outlines the grand challenges of achieving sustainable ruminant production and the likely research and innovation landscape over the next 15 years and beyond, specifically outlining the pathways and enablers required to achieve sustainable ruminant production within the One Health framework.
Older adults with treatment-resistant depression (TRD) benefit more from treatment augmentation than from switching medications. Identifying moderators of the effect of these treatment strategies would support personalised medicine.
Aims
Our objective was to test whether age, executive dysfunction, comorbid medical burden, comorbid anxiety or the number of previous adequate antidepressant trials could moderate the superiority of augmentation over switching. A significant moderator would influence the differential effect of augmentation versus switching on treatment outcomes.
Method
We performed a preplanned moderation analysis of data from the Optimizing Outcomes of Treatment-Resistant Depression in Older Adults (OPTIMUM) randomised controlled trial (N = 742). Participants were 60 years old or older with TRD. Participants were either (a) randomised to antidepressant augmentation with aripiprazole (2.5–15 mg), bupropion (150–450 mg) or lithium (target serum drug level 0.6 mmol/L) or (b) switched to bupropion (150–450 mg) or nortriptyline (target serum drug level 80–120 ng/mL). Treatment duration was 10 weeks. The two main outcomes of this analysis were (a) symptom improvement, defined as change in Montgomery–Asberg Depression Rating Scale (MADRS) scores from baseline to week 10 and (b) remission, defined as MADRS score of 10 or less at week 10.
Results
Of the 742 participants, 480 were randomised to augmentation and 262 to switching. The number of adequate previous antidepressant trials was a significant moderator of depression symptom improvement (b = −1.6, t = −2.1, P = 0.033, 95% CI [−3.0, −0.1], where b is the coefficient for the moderator-by-treatment interaction (i.e. the effect size) and t is the t-statistic associated with the P-value). The effect was similar across all augmentation strategies. No other putative moderators were significant.
Conclusions
Augmenting was superior to switching antidepressants only in older patients with fewer than three previous antidepressant trials. This suggests that other intervention strategies should be considered following three or more trials.
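To make the moderation analysis described above concrete, here is a minimal illustrative sketch (not the OPTIMUM analysis itself) of testing a treatment-by-moderator interaction on symptom change; all variable names and data below are hypothetical and simulated.

```python
# Minimal sketch of a treatment-by-moderator interaction ("moderation") model,
# analogous in spirit to the OPTIMUM analysis above. Data are simulated and
# column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 742
df = pd.DataFrame({
    "augment": rng.integers(0, 2, n),        # 1 = augmentation, 0 = switch
    "n_prior_trials": rng.integers(1, 6, n), # adequate prior antidepressant trials
})
# Simulate MADRS change: augmentation helps less as the number of prior trials rises
df["madrs_change"] = (-8
                      - 3 * df["augment"]
                      + 1.6 * df["augment"] * df["n_prior_trials"]
                      + rng.normal(0, 6, n))

# The moderator test is the coefficient on the treatment-by-moderator interaction term
model = smf.ols("madrs_change ~ augment * n_prior_trials", data=df).fit()
print(model.summary().tables[1])
```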
This study evaluated the impact of four cover crop species and their termination timings on cover crop biomass, weed control, and corn yield. A field experiment was arranged in a split-plot design in which cover crop species (wheat, cereal rye, hairy vetch, and rapeseed) were the main-plot factor, and termination timings [4, 2, 1, and 0 wk before planting corn (WBP)] were the subplot factor. In both years (2021 and 2022), hairy vetch produced the most biomass (5,021 kg ha–1) among cover crop species, followed by cereal rye (4,387 kg ha–1), wheat (3,876 kg ha–1), and rapeseed (2,575 kg ha–1). Regression analysis of cover crop biomass against accumulated growing degree days (AGDDs) indicated that for every 100-AGDD increase, the biomass of cereal rye, wheat, hairy vetch, and rapeseed increased by 880, 670, 780, and 620 kg ha–1, respectively. The density of grass and small-seeded broadleaf (SSB) weeds at 4 wk after preemergence herbicide (WAPR) application varied significantly across termination timings. Grass and SSB weed densities were 56% and 36% lower at 0 WBP compared with 2 WBP, and 67% and 61% lower compared with 4 WBP. The sole use of a roller-crimper failed to terminate rapeseed at 0 WBP and resulted in the lowest corn yield (3,046 kg ha–1), whereas several different combinations of cover crops and termination timings resulted in greater corn yield. In conclusion, allowing cover crops to grow longer in the spring provides more biomass for weed suppression and influences corn yield.
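As a purely illustrative companion to the AGDD regression described above, the sketch below shows how a per-100-AGDD biomass slope can be estimated with a simple linear regression; the data values are hypothetical and are not from the field experiment.

```python
# Illustrative linear regression of cover crop biomass on accumulated growing
# degree days (AGDD), mirroring the per-100-AGDD slopes reported above.
# Data values are hypothetical.
import numpy as np
from scipy import stats

agdd = np.array([400, 600, 800, 1000, 1200, 1400])          # accumulated GDD at termination
biomass = np.array([1500, 3200, 5000, 6800, 8600, 10400])   # biomass, kg/ha (hypothetical)

fit = stats.linregress(agdd, biomass)
# The slope is kg/ha per single AGDD; multiply by 100 for the per-100-AGDD increase
print(f"biomass increase per 100 AGDD: {fit.slope * 100:.0f} kg/ha (r^2 = {fit.rvalue**2:.2f})")
```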
Biostatisticians increasingly use large language models (LLMs) to enhance efficiency, yet practical guidance on responsible integration is limited. This study explores current LLM usage, challenges, and training needs to support biostatisticians.
Methods:
A cross-sectional survey was conducted across three biostatistics units at two academic medical centers. The survey assessed LLM usage across three key professional activities: communication and leadership, clinical and domain knowledge, and quantitative expertise. Responses were analyzed using descriptive statistics, while free-text responses underwent thematic analysis.
Results:
Of 208 eligible biostatisticians (162 staff and 46 faculty), 69 (33.2%) responded. Among them, 44 (63.8%) reported using LLMs; of the 43 who answered the frequency question, 20 (46.5%) used them daily and 16 (37.2%) weekly. LLMs improved productivity in coding, writing, and literature review; however, 29 of 41 respondents (70.7%) reported significant errors, including incorrect code, statistical misinterpretations, and hallucinated functions. Key verification strategies included expertise, external validation, debugging, and manual inspection. Among 58 respondents providing training feedback, 44 (75.9%) requested case studies, 40 (69.0%) sought interactive tutorials, and 37 (63.8%) desired structured training.
Conclusions:
LLM usage is notable among respondents at two academic medical centers, though response patterns likely reflect early adopters. While LLMs enhance productivity, challenges like errors and reliability concerns highlight the need for verification strategies and systematic validation. The strong interest in training underscores the need for structured guidance. As an initial step, we propose eight core principles for responsible LLM integration, offering a preliminary framework for structured usage, validation, and ethical considerations.
It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise the benefits of psychological intervention for adults with subthreshold depression over follow-up periods of up to 2 years, and to explore participant-level effect modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
Results
IPD of 10 671 individuals (50 studies) could be included. We found significant effects on depressive symptom severity up to 12 months (standardised mean-difference [s.m.d.] = −0.48 to −0.27). Effects could not be ascertained up to 24 months (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
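For readers unfamiliar with one-stage individual participant data (IPD) meta-analysis, the sketch below illustrates the general idea using a frequentist random-intercept mixed model with a treatment-by-baseline-severity interaction. The analysis above was Bayesian with hierarchical-additive models, so this is only a simplified analogue; all data and variable names are simulated assumptions.

```python
# Simplified (frequentist) analogue of a one-stage IPD meta-analysis with a
# treatment-by-baseline-severity interaction. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for study in range(20):
    study_effect = rng.normal(0, 1.0)          # between-study heterogeneity
    for _ in range(100):
        treat = rng.integers(0, 2)
        phq9_baseline = rng.uniform(5, 14)
        # Treatment benefit grows with baseline severity (effect modification)
        outcome = (10 + study_effect
                   - 1.0 * treat
                   - 0.15 * treat * (phq9_baseline - 10)
                   + rng.normal(0, 3))
        rows.append((study, treat, phq9_baseline, outcome))
df = pd.DataFrame(rows, columns=["study", "treat", "phq9_baseline", "phq9_followup"])

# Random intercept per study; the interaction term tests participant-level effect modification
m = smf.mixedlm("phq9_followup ~ treat * phq9_baseline", df, groups=df["study"]).fit()
print(m.summary())
```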
The Hippoboscidae are ectoparasites of birds and mammals, which, as a group, are known to vector multiple diseases. Avipoxvirus (APV) is mechanically vectored by various arthropods and causes seasonal disease in wild birds in the United Kingdom (UK). Signs of APV and the presence of louse flies (Hippoboscidae) on Dunnocks Prunella modularis were recorded over a 16·5-year period in a rural garden in Somerset, UK. Louse flies collected from this site and other sites in England were tested for the presence of APV DNA and RNA sequences. Louse flies on Dunnocks were seen to peak seasonally three weeks prior to the peak of APV lesions, an interval consistent with the previously estimated incubation period of APV in Dunnocks. APV DNA was detected on 13/25 louse flies, Ornithomya avicularia and Ornithomya fringillina, taken from Dunnocks, both with and without lesions consistent with APV, at multiple sites in England. Collectively these data support the premise that louse flies may vector APV. The detection of APV in louse flies, from apparently healthy birds, and from sites where disease has not been observed in any host species, suggests that the Hippoboscidae could provide a non-invasive and relatively cheap method of monitoring avian diseases. This could provide advanced warnings of disease, including zoonoses, before they become clinically apparent.
Negative symptoms are a key feature of several psychiatric disorders. Difficulty identifying common neurobiological mechanisms that cut across diagnostic boundaries might result from equifinality (i.e., multiple mechanistic pathways to the same clinical profile), both within and across disorders. This study used a data-driven approach to identify unique subgroups of participants with distinct reward processing profiles to determine which profiles predicted negative symptoms.
Methods
Participants were a transdiagnostic sample of youth from a multisite study of psychosis risk, including 110 individuals at clinical high-risk for psychosis (CHR; meeting psychosis-risk syndrome criteria), 88 help-seeking participants who failed to meet CHR criteria and/or who presented with other psychiatric diagnoses, and a reference group of 66 healthy controls. Participants completed clinical interviews and behavioral tasks assessing four reward processing constructs indexed by the RDoC Positive Valence Systems: hedonic reactivity, reinforcement learning, value representation, and effort–cost computation.
Results
k-means cluster analysis of clinical participants identified three subgroups with distinct reward processing profiles, primarily characterized by a value representation deficit (54%), a generalized reward processing deficit (17%), and a hedonic reactivity deficit (29%), respectively. Clusters did not differ in rates of clinical group membership or psychiatric diagnoses. Elevated negative symptoms were only present in the generalized deficit cluster, which also displayed greater functional impairment and higher psychosis conversion probability scores.
Conclusions
Contrary to the equifinality hypothesis, results suggested one global reward processing deficit pathway to negative symptoms independent of diagnostic classification. Assessment of reward processing profiles may have utility for individualized clinical prediction and treatment.
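The profile-clustering step described above follows a standard pattern: standardize the reward-processing scores, fit k-means, and characterize each cluster by its mean profile. Below is a minimal sketch with hypothetical feature names and simulated data, not the study's own pipeline.

```python
# Minimal sketch of k-means profile clustering: standardize several
# reward-processing scores, fit k-means, and inspect cluster profiles.
# Feature names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
features = ["hedonic_reactivity", "reinforcement_learning",
            "value_representation", "effort_cost"]
X = pd.DataFrame(rng.normal(size=(198, 4)), columns=features)

Xz = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=3, n_init=20, random_state=0).fit(Xz)

# The mean standardized score per cluster characterizes each reward-processing profile
profiles = pd.DataFrame(Xz, columns=features).groupby(km.labels_).mean()
print(profiles.round(2))
```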
The reliable change index has been used to evaluate the significance of individual change in health-related quality of life. We estimate reliable change for two measures (physical function and emotional distress) in the Patient-Reported Outcomes Measurement Information System (PROMIS®) 29-item health-related quality of life measure (PROMIS-29 v2.1). Using two waves of data collected 3 months apart in a longitudinal observational study of chronic low back pain and chronic neck pain patients receiving chiropractic care, and simulations, we compare estimates of reliable change from classical test theory fixed standard errors with item response theory standard errors from the graded response model. We find that unless true change in the PROMIS physical function and emotional distress scales is substantial, classical test theory estimates of significant individual change are much more optimistic than estimates of change based on item response theory.
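A brief sketch of the two reliable-change calculations being contrasted above: the Jacobson–Truax index with a fixed classical-test-theory standard error, versus the same contrast computed with score-specific (IRT conditional) standard errors. The reliability and standard-error values below are hypothetical, not PROMIS-29 estimates.

```python
# Sketch contrasting a classical-test-theory reliable change index (fixed
# standard error of measurement) with an IRT-style check that uses
# score-specific standard errors. All numeric inputs are hypothetical.
import numpy as np

def rci_ctt(score_t1, score_t2, sd_baseline, reliability):
    """Jacobson-Truax reliable change index with a fixed SEM."""
    sem = sd_baseline * np.sqrt(1 - reliability)
    se_diff = np.sqrt(2 * sem**2)
    return (score_t2 - score_t1) / se_diff

def reliable_change_irt(theta_t1, theta_t2, se_t1, se_t2):
    """Same idea, but with conditional (score-specific) standard errors."""
    return (theta_t2 - theta_t1) / np.sqrt(se_t1**2 + se_t2**2)

# Example: a 5-point improvement on a T-score metric (mean 50, SD 10)
z_ctt = rci_ctt(40, 45, sd_baseline=10, reliability=0.90)
z_irt = reliable_change_irt(40, 45, se_t1=4.5, se_t2=3.5)  # larger SEs away from the mean
print(f"CTT z = {z_ctt:.2f}, IRT z = {z_irt:.2f}  (|z| > 1.96 -> 'reliable' change)")
```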
Item response theory (IRT) model applications extend well beyond cognitive ability testing, and various patient-reported outcomes (PRO) measures are among the more prominent examples. PRO (and like) constructs differ from cognitive ability constructs in many ways, and these differences have model fitting implications. With a few notable exceptions, however, most IRT applications to PRO constructs rely on traditional IRT models, such as the graded response model. We review some notable differences between cognitive and PRO constructs and how these differences can present challenges for traditional IRT model applications. We then apply two models (the traditional graded response model and an alternative log-logistic model) to depression measure data drawn from the Patient-Reported Outcomes Measurement Information System project. We do not claim that one model is “a better fit” or more “valid” than the other; rather, we show that the log-logistic model may be more consistent with the construct of depression as a unipolar phenomenon. Clearly, the graded response and log-logistic models can lead to different conclusions about the psychometrics of an instrument and the scaling of individual differences. We underscore, too, that, in general, explorations of which model may be more appropriate cannot be decided only by fit index comparisons; these decisions may require the integration of psychometrics with theory and research findings on the construct of interest.
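For reference, the graded response model discussed above is conventionally written in terms of cumulative category probabilities; in Samejima's standard parameterisation (a textbook formulation, not drawn from this paper):

```latex
P^{*}_{ik}(\theta) = \frac{\exp\{a_i(\theta - b_{ik})\}}{1 + \exp\{a_i(\theta - b_{ik})\}},
\qquad
P_{ik}(\theta) = P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta),
```

where θ is the latent trait, a_i the item discrimination, b_ik the ordered category thresholds, and the boundary conditions are P*_{i0}(θ) = 1 and P*_{i,m_i+1}(θ) = 0.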
The past few years were marked by increased online offensive strategies perpetrated by state and non-state actors to promote their political agendas, sow discord, and question the legitimacy of democratic institutions in the US and Western Europe. In 2016, the US Congress identified a list of Russian state-sponsored Twitter accounts that were used to try to divide voters on a wide range of issues. Previous research used latent Dirichlet allocation (LDA) to estimate latent topics in data extracted from these accounts. However, LDA has characteristics that may limit the effectiveness of its use on data from social media: the number of latent topics must be specified by the user, interpretability of the topics can be difficult to achieve, and it does not model short-term temporal dynamics. In the current paper, we propose a new method to estimate latent topics in texts from social media termed Dynamic Exploratory Graph Analysis (DynEGA). In a Monte Carlo simulation, we compared the ability of DynEGA and LDA to estimate the number of simulated latent topics. The results show that DynEGA is substantially more accurate than several different LDA algorithms when estimating the number of simulated topics. In an applied example, we performed DynEGA on a large dataset of Twitter posts from state-sponsored right- and left-wing trolls during the 2016 US presidential election. DynEGA revealed topics that were pertinent to several consequential events in the election cycle, demonstrating the coordinated effort of trolls capitalizing on current events in the USA. This example demonstrates the potential power of our approach for revealing temporally relevant information from qualitative text data.
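As context for the LDA baseline described above, the sketch below fits a small LDA model in which the number of topics must be fixed in advance, the limitation DynEGA is designed to avoid. The example texts and topic count are hypothetical, not the troll-account corpus.

```python
# Minimal sketch of LDA topic estimation on short social-media-style texts.
# Texts and the number of topics are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "election fraud voters ballots rigged",
    "immigration border security wall crisis",
    "election polls voters turnout swing states",
    "healthcare costs insurance premiums reform",
    "border patrol immigration policy debate",
    "insurance coverage healthcare reform bill",
]

vec = CountVectorizer()
counts = vec.fit_transform(tweets)

# LDA requires the number of topics (n_components) to be specified up front
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = vocab[topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top_words)}")
```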
Ambulatory antimicrobial stewardship can be challenging due to disparities in resource allocation across the care continuum, competing priorities for ambulatory prescribers, ineffective communication strategies, and lack of incentive to prioritize antimicrobial stewardship program (ASP) initiatives. Efforts to monitor and compare outpatient antibiotic usage metrics have been implemented through quality measures (QM). The Healthcare Effectiveness Data and Information Set (HEDIS®) comprises standardized measures that examine the quality of antibiotic prescribing by region and across insurance health plans. Health systems with affiliated emergency departments and ambulatory clinics contribute patient data for HEDIS® measure assessment, which is directly tied to value-based reimbursement, pay-for-performance, patient satisfaction measures, and payor incentives and rewards. There are four HEDIS® measures related to optimal antibiotic prescribing in upper respiratory tract diseases that ambulatory ASPs can leverage to develop and measure effective interventions while maintaining buy-in from providers: avoidance of antibiotic treatment for acute bronchitis/bronchiolitis, appropriate treatment for upper respiratory infection, appropriate testing for pharyngitis, and antibiotic utilization for respiratory conditions. Additionally, other QM are assessed by the Centers for Medicare and Medicaid Services (CMS), including overuse of antibiotics for adult sinusitis. Because of their pay-for-performance nature, HEDIS® measures offer ambulatory ASPs with limited resources a practical way to implement and measure successful interventions. The purpose of this review is to outline the HEDIS® measures related to infectious diseases in ambulatory care settings. This review also examines the barriers and enablers for ambulatory ASPs, which play a crucial role in promoting responsible antibiotic use and in efforts to optimize patient outcomes.
The COVID-19 pandemic highlighted gaps in infection control knowledge and practice across health settings nationwide. The Centers for Disease Control and Prevention, with funding through the American Rescue Plan, developed Project Firstline, a national collaborative aiming to reach all aspects of the health care frontline. The American Medical Association recruited eight physicians and one medical student to join its director of infectious diseases in developing educational programs targeting knowledge gaps. This group identified five critical areas requiring national attention.
Here, we report the first discovery of Antarctic fossil resin (commonly referred to as amber) within a ~5 cm-thick lignite layer, which constitutes the top part of a ~3 m-long palynomorph-rich and root-bearing carbonaceous mudstone of mid-Cretaceous age (Klages et al. 2020). The sedimentary sequence (Fig. 1) was recovered by the MARUM-MeBo70 seafloor drill rig at Site PS104_20 (73.57° S, 107.09° W; 946 m water depth) from the mid-shelf section of Pine Island trough in the Amundsen Sea Embayment, West Antarctica, during RV Polarstern Expedition PS104 in early 2017 (Gohl 2017; Fig. 1a). So far, amber deposits have been described from every continent except Antarctica (Langenheim 2003; Quinney et al. 2015; Fig. 1a).
Post-procedural antimicrobial prophylaxis is not recommended by professional guidelines but is commonly prescribed. We sought to reduce use of post-procedural antimicrobials after common endoscopic urologic procedures.
Design:
A before-after, quasi-experimental trial with a baseline period (July 2020–June 2022), an implementation period (July 2022), and an intervention period (August 2022–July 2023).
Setting:
Three participating medical centers.
Intervention:
We assessed the effect of a bundled intervention on excess post-procedural antimicrobial use (ie, antimicrobial use on post-procedural day 1) after three types of endoscopic urologic procedures: ureteroscopy and transurethral resection of bladder tumor or prostate. The intervention consisted of education, local champion(s), and audit-and-feedback of data on the frequency of post-procedural antimicrobial-prescribing.
Results:
1,272 procedures were performed across all 3 sites at baseline compared to 525 during the intervention period; 644 (50.6%) patients received excess post-procedural antimicrobials during the baseline period compared to 216 (41.1%) during the intervention period. There was no change in the use of post-procedural antimicrobials at sites 1 and 2 between the baseline and intervention periods. At site 3, the odds of prescribing a post-procedural antimicrobial significantly decreased during the intervention period relative to the baseline time trend (0.09; 95% CI 0.02–0.45). There was no significant increase in post-procedural unplanned visits at any of the sites.
Conclusions:
Implementation of a bundled intervention was associated with reduced post-procedural antimicrobial use at one of three sites, with no increase in complications. These findings demonstrate both the safety and challenge of guideline implementation for optimal perioperative antimicrobial prophylaxis.
This trial was registered on clinicaltrials.gov, NCT04196777.
A 54-question survey about System Healthcare Infection Prevention Programs (SHIPPs) was sent to SHEA Research Network participants in August 2023. Thirty-eight United States-based institutions responded (38/93, 41%), of which 23 had SHIPPs. We found heterogeneity in the structure, staffing, and resources of system infection prevention (IP) programs.
To identify risk factors for central line-associated bloodstream infections (CLABSI) in pediatric intensive care settings in an era of heightened focus on prevention measures.
Design:
Matched, case–control study.
Setting:
Quaternary children’s hospital.
Patients:
Cases had a CLABSI during an intensive care unit (ICU) stay between January 1, 2015 and December 31, 2020. Controls were matched 4:1 by ICU and admission date and did not develop a CLABSI.
Methods:
Multivariable, mixed-effects logistic regression.
Results:
129 cases were matched to 516 controls. Central venous catheter (CVC) maintenance bundle compliance was >70%. Independent CLABSI risk factors included administration of a continuous non-opioid sedative (adjusted odds ratio (aOR) 2.96, 95% CI [1.16, 7.52], P = 0.023), number of days with one or more CVC in place (aOR 1.42 per 10 days [1.16, 1.74], P = 0.001), and the combination of a chronic CVC with administration of parenteral nutrition (aOR 4.82 [1.38, 16.9], P = 0.014). Variables independently associated with lower odds of CLABSI included CVC location in an upper extremity (aOR 0.16 [0.05, 0.55], P = 0.004); non-tunneled CVC (aOR 0.17 [0.04, 0.63], P = 0.008); presence of an endotracheal tube (aOR 0.21 [0.08, 0.6], P = 0.004) or Foley catheter (aOR 0.3 [0.13, 0.68], P = 0.004); transport to radiology (aOR 0.31 [0.1, 0.94], P = 0.039); continuous neuromuscular blockade (aOR 0.29 [0.1, 0.86], P = 0.025); and administration of histamine H2-blocking medications (aOR 0.17 [0.06, 0.48], P = 0.001).
Conclusions:
Pediatric intensive care patients with chronic CVCs receiving parenteral nutrition, those on non-opioid sedative infusions, and those with more central line days are at increased risk for CLABSI despite current prevention measures.
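To illustrate how adjusted odds ratios like those reported above are typically obtained, here is a simplified sketch. The study used a multivariable, mixed-effects logistic regression for matched data; this example instead fits an ordinary logistic model on simulated data with hypothetical variable names and exponentiates the coefficients to odds ratios.

```python
# Simplified sketch of estimating adjusted odds ratios for CLABSI risk factors.
# Ordinary (not mixed-effects) logistic regression on hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 645
df = pd.DataFrame({
    "clabsi": rng.integers(0, 2, n),               # 1 = case, 0 = control
    "cvc_days": rng.integers(1, 60, n),            # days with one or more CVC in place
    "continuous_sedative": rng.integers(0, 2, n),  # non-opioid sedative infusion
    "upper_extremity_cvc": rng.integers(0, 2, n),
})

m = smf.logit("clabsi ~ cvc_days + continuous_sedative + upper_extremity_cvc", df).fit(disp=False)

# Exponentiating logistic coefficients gives (adjusted) odds ratios with 95% CIs
odds_ratios = pd.DataFrame({
    "aOR": np.exp(m.params),
    "CI_low": np.exp(m.conf_int()[0]),
    "CI_high": np.exp(m.conf_int()[1]),
})
print(odds_ratios.round(2))
```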
Diagnostic criteria for major depressive disorder allow for heterogeneous symptom profiles but genetic analysis of major depressive symptoms has the potential to identify clinical and etiological subtypes. There are several challenges to integrating symptom data from genetically informative cohorts, such as sample size differences between clinical and community cohorts and various patterns of missing data.
Methods
We conducted genome-wide association studies of major depressive symptoms in three cohorts that were enriched for participants with a diagnosis of depression (Psychiatric Genomics Consortium, Australian Genetics of Depression Study, Generation Scotland) and three community cohorts that were not recruited on the basis of diagnosis (Avon Longitudinal Study of Parents and Children, Estonian Biobank, and UK Biobank). We fit a series of confirmatory factor models with factors that accounted for how the symptom data were sampled, and then compared alternative models with different symptom factors.
Results
The best fitting model had a distinct factor for Appetite/Weight symptoms and an additional measurement factor that accounted for the skip-structure in community cohorts (use of Depression and Anhedonia as gating symptoms).
Conclusion
The results show the importance of assessing the directionality of symptoms (such as hypersomnia versus insomnia) and of accounting for study and measurement design when meta-analyzing genetic association data.
The cognitive deterioration of politicians is a critical emerging issue. As professions such as law and medicine develop and implement cognitive assessments, their insights may inform an appropriate strategy within politics. The aging, lifetime-appointed judiciary raises legal and administrative questions about such assessments, while the testing of older physicians experiencing cognitive decline provides real-life examples of implementation. In politics, cognitive assessment must contend with the field’s unique challenges, while also taking context-dependent interpretations of cognitive-neuropsychological status into account. These perspectives, from legal and medical experts, political scientists, and officeholders, can contribute toward an equitable, functioning, and non-discriminatory system of assessing cognition that educates the public and enables politicians to maintain their public responsibilities. With proper implementation and sufficient public knowledge, we believe cognitive assessments for politicians, particularly political candidates, can be valuable for maintaining properly functioning governance. We offer recommendations on the development, implementation, and execution of such assessments, grappling with their democratic and legal implications.