Pot studies were conducted outdoors under natural environmental conditions to determine leafy spurge biomass reduction resulting from broadcast application of 2,4-D (2,244 g ae ha⁻¹) with and without wiper-applied glyphosate. Glyphosate (575 g ae L⁻¹) was applied with a wiper at 0, 33, 50, and 75% diluted concentrate 24 h after 2,4-D was broadcast applied. Injury estimates and shoot biomass 21 days after treatment did not differ between plants treated with 2,4-D alone and those that also received wiper-applied glyphosate. Shoot regrowth biomass of plants treated with 2,4-D alone was approximately 560% greater than that of nontreated plants 3 months after treatment. Plants treated with wiper-applied glyphosate had shoot regrowth biomass of less than 10% of that of nontreated plants 3 months after treatment. Root biomass of plants treated with 2,4-D alone (160% of nontreated plants) followed a pattern similar to that of shoot regrowth biomass. Root biomass of plants treated with wiper-applied glyphosate was reduced by approximately 50% compared to nontreated plants. The glyphosate concentrations tested reduced all vegetative metrics equally; therefore, all labeled concentrations should be effective. The results show that broadcast-applied 2,4-D is more effective at reducing leafy spurge biomass when combined with wiper-applied glyphosate.
Vaccines have revolutionised the field of medicine, eradicating and controlling many diseases. Recent pandemic vaccine successes have highlighted the accelerated pace of vaccine development and deployment. Leveraging this momentum, attention has shifted to cancer vaccines and personalised cancer vaccines, aimed at targeting individual tumour-specific abnormalities. The UK, now recognised for its vaccine capabilities, is an ideal nation for pioneering cancer vaccine trials. For this article, experts were convened to share insights and approaches for navigating the challenges of cancer vaccine development with personalised or precision cancer vaccines, as well as fixed vaccines. Emphasising partnership and proactive strategies, this article outlines the ambition to harness national and local system capabilities in the UK; to work in collaboration with potential pharmaceutical partners; and to seize the opportunity to deliver the pace needed for rapid advances in cancer vaccine technology.
Garbarino et al. (J Econ Sci Assoc. https://doi.org/10.1007/s40881-018-0055-4, 2018) describe a new method to calculate the probability distribution of the proportion of lies told in “coin flip” style experiments. I show that their estimates and confidence intervals are flawed. I demonstrate two better ways to estimate the probability distribution of what we really care about—the proportion of liars—and I provide R software to do this.
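As a rough illustration of the kind of estimate discussed above (this is not the authors' R software, and it rests on the simplifying assumptions that the coin is fair and that every liar reports the high-payoff outcome; the counts are hypothetical), the sketch below recovers a posterior distribution for the proportion of liars from the proportion of favourable reports:

```python
# Illustrative sketch (not the authors' R implementation): estimating the
# distribution of the proportion of liars in a fair-coin reporting experiment,
# assuming every liar reports the high-payoff outcome.
import numpy as np

rng = np.random.default_rng(0)

n_reports = 200   # hypothetical number of participants
n_high = 132      # hypothetical number reporting the high-payoff outcome

# Posterior over the true report probability p, with a flat Beta(1, 1) prior.
p_draws = rng.beta(1 + n_high, 1 + (n_reports - n_high), size=100_000)

# Under a fair coin, reports satisfy p = 0.5 + 0.5 * lambda, so lambda = 2p - 1.
# Truncate at 0 because the liar proportion cannot be negative.
liar_draws = np.clip(2 * p_draws - 1, 0, 1)

print(f"Posterior mean proportion of liars: {liar_draws.mean():.3f}")
lo, hi = np.percentile(liar_draws, [2.5, 97.5])
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```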
Several methods used to examine differential item functioning (DIF) in Patient-Reported Outcomes Measurement Information System (PROMIS®) measures are presented, including effect size estimation. A summary of factors that may affect DIF detection and challenges encountered in PROMIS DIF analyses, e.g., anchor item selection, is provided. An issue in PROMIS was the potential for inadequately modeled multidimensionality to result in false DIF detection. Section 1 is a presentation of the unidimensional models used by most PROMIS investigators for DIF detection, as well as their multidimensional expansions. Section 2 is an illustration that builds on previous unidimensional analyses of depression and anxiety short-forms to examine DIF detection using a multidimensional item response theory (MIRT) model. The Item Response Theory-Log-likelihood Ratio Test (IRT-LRT) method was used for a real data illustration with gender as the grouping variable. The IRT-LRT DIF detection method is a flexible approach to handle group differences in trait distributions, known as impact in the DIF literature, and was studied with both real data and in simulations to compare the performance of the IRT-LRT method within the unidimensional IRT (UIRT) and MIRT contexts. Additionally, different effect size measures were compared for the data presented in Section 2. A finding from the real data illustration was that using the IRT-LRT method within a MIRT context resulted in more flagged items as compared to using the IRT-LRT method within a UIRT context. The simulations provided some evidence that while unidimensional and multidimensional approaches were similar in terms of Type I error rates, power for DIF detection was greater for the multidimensional approach. Effect size measures presented in Section 1 and applied in Section 2 varied in terms of estimation methods, choice of density function, methods of equating, and anchor item selection. Despite these differences, there was considerable consistency in results, especially for the items showing the largest values. Future work is needed to examine DIF detection in the context of polytomous, multidimensional data. PROMIS standards included incorporation of effect size measures in determining salient DIF. Integrated methods for examining effect size measures in the context of IRT-based DIF detection procedures are still in early stages of development.
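The IRT-LRT procedure referenced above compares nested models in which item parameters are either constrained to be equal across groups or freed for the studied item. The sketch below shows only that generic likelihood-ratio step; the actual PROMIS analyses fit full unidimensional and multidimensional IRT models in specialized software, and the log-likelihood values here are hypothetical:

```python
# Minimal sketch of the likelihood-ratio step underlying IRT-LRT DIF testing.
# Fitting the (M)IRT models themselves is done elsewhere; this only shows how
# the test statistic is formed once the two nested models have been estimated.
from scipy.stats import chi2

def irt_lrt(loglik_constrained, loglik_free, df):
    """Compare a model with item parameters constrained equal across groups
    against one that frees them for the studied item."""
    lr_stat = 2 * (loglik_free - loglik_constrained)
    p_value = chi2.sf(lr_stat, df)
    return lr_stat, p_value

# Hypothetical log-likelihoods; df = number of freed item parameters.
stat, p = irt_lrt(loglik_constrained=-10234.7, loglik_free=-10228.1, df=2)
print(f"LRT statistic = {stat:.2f}, p = {p:.4f}")  # flag DIF if p < alpha
```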
Galba truncatula is one of the most widely distributed intermediate hosts of Fasciola hepatica, occurring across Europe, North Africa and South America. Therefore, understanding the environmental preferences of this species is vital for developing control strategies for fascioliasis and other trematode infections, such as those caused by Calicophoron daubneyi. This systematic literature review evaluates the current understanding of the snail's environmental preferences to identify factors that might aid control and areas where further research is needed. Searches were conducted using Google Scholar and PubMed and included papers published up to August 2023. After screening, 198 papers with data from 64 countries were evaluated, and data on habitat type and habitat pH were recorded, along with any other information pertaining to the snail's environmental preferences. The results show that G. truncatula can survive in a diverse range of climates and habitats: it generally favours shallow, slow-moving water or moist bare mud surfaces and temperatures between 10 and 25°C, and was found in habitats with a water pH ranging from 5.0 to 9.4. However, there is limited understanding of several factors, such as the true optimum pH and temperature preferences within the respective tolerance limits or the reason for the snail's apparent aversion to peatland. Further research is needed to clarify the impact of biotic and abiotic factors on the snail, to create robust risk assessments of fluke infection, to assess opportunities for environmental control strategies and to predict how the snail and fluke transmission may be affected by climate change.
Early intervention in psychosis (EIP) services improve outcomes for young people, but approximately 30% disengage.
Aims
To test whether a new motivational engagement intervention would prolong engagement and whether it was cost-effective.
Method
We conducted a multicentre, single-blind, parallel-group, cluster randomised controlled trial involving 20 EIP teams at five UK National Health Service (NHS) sites. Teams were randomised using permuted blocks stratified by NHS trust. Participants were all young people (aged 14–35 years) presenting with a first episode of psychosis between May 2019 and July 2020 (N = 1027). We compared the novel Early Youth Engagement (EYE-2) intervention plus standardised EIP (sEIP) with sEIP alone. The primary outcome was time to disengagement over 12–26 months. Economic outcomes were mental health costs, societal costs and socio-occupational outcomes over 12 months. Assessors were masked to treatment allocation for primary disengagement and cost-effectiveness outcomes. Analysis followed intention-to-treat principles. The trial was registered at ISRCTN51629746.
Results
Disengagement was low at 15.9% overall in standardised stand-alone services. The adjusted hazard ratio for EYE-2 + sEIP (n = 652) versus sEIP alone (n = 375) was 1.07 (95% CI 0.76–1.49; P = 0.713). The health economic evaluation indicated lower mental healthcare costs linked to reductions in unplanned mental healthcare with no compromise of clinical outcomes, as well as some evidence for lower societal costs and more days in education, training, employment and stable accommodation in the EYE-2 group.
Conclusions
We found no evidence that EYE-2 increased time to disengagement, but there was some evidence for its cost-effectiveness. This is the largest study to date reporting positive engagement, health and cost outcomes in a total EIP population sample. Limitations included high loss to follow-up for secondary outcomes and low completion of societal and socio-occupational data. COVID-19 affected fidelity and implementation. Future engagement research should target engagement to those in greatest need, including in-patients and those with socio-occupational goals.
This editorial considers the value and nature of academic psychiatry by asking what defines the specialty and psychiatrists as academics. We frame academic psychiatry as a way of thinking that benefits clinical services and discuss how to inspire the next generation of academics.
Hypertension and depression are increasingly common noncommunicable diseases in Ghana and worldwide, yet both are poorly controlled. We sought to understand how healthcare workers in rural Ghana conceptualize the interaction between hypertension and depression, and how care for these two conditions might best be integrated. We conducted a qualitative descriptive study involving in-depth interviews with 34 healthcare workers in the Kassena-Nankana districts of the Upper East Region of Ghana. We used conventional content analysis to systematically review interview transcripts, code the data content and analyze codes for salient themes. Respondents detailed three discrete conceptual models. Most emphasized depression as causing hypertension: through both emotional distress and unhealthy behavior. Others posited a bidirectional relationship, where cardiovascular morbidity worsened mood, or described a single set of underlying causes for both conditions. Nearly all proposed health interventions targeted their favored root cause of these disorders. In this representative rural Ghanaian community, healthcare workers widely agreed that cardiovascular disease and mental illness are physiologically linked and warrant an integrated care response, but held diverse views regarding precisely how and why. There was widespread support for a single primary care intervention to treat both conditions through counseling and medication.
Tidal flooding occurs when coastal water levels exceed impact-based flood thresholds due to tides alone, under average weather conditions. Transitions to tidal flood regimes are already underway for nuisance flood severities in harbours and bays and are expected for higher severities in the coming decades. In the first such regional assessment, we show that the same transition to tidally forced floods can also be expected to occur in Australian estuaries with less than 0.1 m of further sea-level rise. Flood thresholds that historically were exceeded only under the combined effects of riverine (freshwater) and coastal (saltwater) influences will then be exceeded by high tides alone. Once this tidal flooding emerges, it is projected to become chronic within two decades. The locations most at risk of the emergence of tidal flooding and the subsequent establishment of chronic flood regimes are those just inside estuary entrances. These locations are characterised by low freeboard, the vertical distance between a flood threshold and a typical high tide level. We use a freeboard-based analysis to estimate the sea-level rise required for impacts associated with official flood thresholds to occur due to tides alone. The resultant tide-only flood frequency estimates provide a lower bound for future flood rates.
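To make the freeboard idea concrete, the short sketch below computes freeboard and the implied sea-level rise needed for tide-only flooding; the threshold and tide levels are hypothetical placeholders, not values from the study:

```python
# Simple sketch of the freeboard concept: the sea-level rise needed for a flood
# threshold to be reached by tides alone equals the threshold height minus a
# typical high-tide level (values below are hypothetical).
flood_threshold_m = 1.45    # official impact-based flood threshold (m above datum)
typical_high_tide_m = 1.38  # typical high-tide level (m above datum)

freeboard_m = flood_threshold_m - typical_high_tide_m
print(f"Freeboard: {freeboard_m:.2f} m")
print(f"Sea-level rise needed for tide-only flooding: ~{max(freeboard_m, 0):.2f} m")
```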
There are a number of levers which affect a state's ability to achieve its foreign policy objectives. These include: economic and trade power; defence capability; diplomatic resource and skill; soft power, including cultural clout; and overseas development policy. Any serious foreign policy strategy is informed by an assessment of the state's capacity in these areas and the effective deployment of its comparative strengths. Significantly, each of these tools is reliant, to varying degrees, on a functional system of international law.
While the relative size of its economy and its defence capabilities have declined in the post-Second World War period, the UK has managed to exert an outsized influence through an effective diplomatic service, a seat at key international tables including the United Nations (UN) Security Council, impressive soft power, a commitment to international development and, above all, a significant role in the shaping and reshaping of international law.
Indeed, since 1945, the UK has been central to the moulding of institutions, agreements and norms that govern much international activity (often referred to as the ‘rules-based international order’). The UK was present at the establishment of many key intergovernmental bodies, including the UN, the International Monetary Fund and the International Criminal Court, and at the drafting of instruments that have been essential in regulating state conduct, such as the Universal Declaration of Human Rights, the 1951 Refugee Convention or, in a European context, the European Convention on Human Rights.
The UK has been able to maintain an outsized diplomatic role in large part because of its commitment to the rules-based international order (arguably with a few significant lapses). However, at precisely the moment the UK should, once again, be helping to defend and update this framework – which is central to international peace and security and its own interests – the Conservatives are causing significant damage to the country's international reputation. Repeated threats to break international law over immigration policy and the trading arrangements for Northern Ireland have been noticed by our international partners, and our adversaries.
Background: The Clinical & Laboratory Standards Institute (CLSI) recommends use of annual antibiograms to help guide empiric antibiotic therapy. Because CLSI periodically updates minimum inhibitory concentration (MIC) breakpoints, we assessed the impact of these updates on longitudinal trends in antibiotic susceptibility rates for Escherichia coli and Klebsiella pneumoniae at a single academic medical center in Atlanta, GA. Methods: Susceptibilities for cefepime, ceftazidime, and levofloxacin in E. coli and K. pneumoniae were extracted from hospital antibiograms from 1988 to 2022. Starting in 1995, intensive care units (ICUs) and wards had separate annual antibiograms, which we combined using weighted averages to create annual overall hospital antibiograms. After summarizing the number of isolates tested and susceptibilities using medians and interquartile ranges (IQR), we conducted an interrupted time series analysis using linear segmented regression models to evaluate the level changes and trends in susceptibility before and after CLSI MIC breakpoints were updated for ceftazidime (2010 and 2017), cefepime (2014 and 2017), and levofloxacin (2013). Results: Among 21,214 E. coli isolates, a median of 291 (IQR: 104, 555) were tested annually; among 8,686 K. pneumoniae isolates, the median was 125 per year (IQR: 76, 178). Prior to the MIC breakpoint changes, baseline susceptibility trends of both organisms to all 3 antibiotics declined significantly, at rates between 0.2% and 2.4% per year (Table 1). For cefepime (Figure 1), susceptibility decreased annually during 1988–2013 for both E. coli (-0.5%) and K. pneumoniae (-1.2%). There were no significant level changes, but there were trend changes after 2018 for E. coli (+2.1%) and K. pneumoniae (-5.5%). For ceftazidime (Figure 2), significant level changes occurred after 2010 for both organisms (E. coli: -5.7%; K. pneumoniae: -5.2%). For levofloxacin (Figure 3), the breakpoint update in 2013 led to significant level changes in susceptibility (E. coli: +8.4%; K. pneumoniae: +11.4%). Conclusion: Overall, we observed a consistent decrease in antibiotic susceptibility in E. coli and K. pneumoniae over three decades, with immediate increases in susceptibility (level changes) when MIC breakpoints were changed, followed by decreasing trends. These findings highlight the importance of longitudinal surveillance and of accounting for MIC breakpoint changes when informing antimicrobial stewardship strategies.
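For readers unfamiliar with the interrupted time series approach described above, the sketch below shows a standard linear segmented regression with level-change and trend-change terms at a single breakpoint year. The data are synthetic and purely illustrative; the study's actual series, breakpoints and model specification are not reproduced here:

```python
# Hedged sketch of a segmented (interrupted time series) regression with a
# level-change ('post') and trend-change ('time_after') term at a breakpoint.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
years = np.arange(1988, 2023)
breakpoint_year = 2010                     # e.g. a MIC breakpoint update year

# Synthetic susceptibility series: gentle decline, a jump at the breakpoint,
# then a changed slope (illustrative numbers only).
time = years - years.min()
post = (years >= breakpoint_year).astype(int)
time_after = np.clip(time - (breakpoint_year - years.min()), 0, None)
susceptibility = (95 - 0.4 * time - 5.0 * post - 0.6 * time_after
                  + rng.normal(0, 1, years.size))

df = pd.DataFrame({"susceptibility": susceptibility, "time": time,
                   "post": post, "time_after": time_after})

model = smf.ols("susceptibility ~ time + post + time_after", data=df).fit()
print(model.params)  # 'post' = immediate level change; 'time_after' = trend change
```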
Military Servicemembers and Veterans are at elevated risk for suicide but rarely disclose suicidal thoughts to their leaders or clinicians. We developed an algorithm to identify posts containing suicide-related content on a military-specific social media platform.
Methods
Publicly shared social media posts (n = 8449) from a military-specific social media platform were reviewed and labeled by our team for the presence or absence of suicidal thoughts and behaviors, and were used to train several machine learning models to identify such posts.
Results
The best performing model was a deep learning (RoBERTa) model that incorporated post text and metadata and detected the presence of suicidal posts with relatively high sensitivity (0.85), specificity (0.96), precision (0.64), F1 score (0.73), and an area under the precision-recall curve of 0.84. Compared to non-suicidal posts, suicidal posts were more likely to contain explicit mentions of suicide, descriptions of risk factors (e.g. depression, PTSD) and help-seeking, and first-person singular pronouns.
Conclusions
Our results demonstrate the feasibility and potential promise of using social media posts to identify at-risk Servicemembers and Veterans. Future work will use this approach to deliver targeted interventions to social media users at risk for suicide.
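To clarify how the classification metrics reported in the Results are computed, the sketch below derives sensitivity, specificity, precision, F1 and the area under the precision-recall curve from predicted labels and scores. The labels and scores are synthetic; the study's RoBERTa model, metadata features and data are not reproduced here:

```python
# Illustrative computation of the evaluation metrics reported for a binary
# suicide-risk post classifier (synthetic labels and scores).
import numpy as np
from sklearn.metrics import confusion_matrix, average_precision_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)                                # 1 = suicidal post
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 1000), 0, 1)   # model scores
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)       # recall on the positive (suicidal) class
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
pr_auc = average_precision_score(y_true, y_score)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"precision={precision:.2f} F1={f1:.2f} PR-AUC={pr_auc:.2f}")
```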
A clinical tool to estimate the risk of treatment-resistant schizophrenia (TRS) in people with first-episode psychosis (FEP) would inform early detection of TRS and overcome the delay of up to 5 years in starting TRS medication.
Aims
To develop and evaluate a model that could predict the risk of TRS in routine clinical practice.
Method
We used data from two UK-based FEP cohorts (GAP and AESOP-10) to develop and internally validate a prognostic model that supports identification of patients at high risk of TRS soon after FEP diagnosis. Using sociodemographic and clinical predictors, a model for predicting risk of TRS was developed based on penalised logistic regression, with missing data handled using multiple imputation. Internal validation was undertaken via bootstrapping, obtaining optimism-adjusted estimates of the model's performance. Interviews and focus groups with clinicians were conducted to establish clinically relevant risk thresholds and to understand the acceptability and perceived utility of the model.
Results
We included seven factors in the prediction model that are predominantly assessed in clinical practice in patients with FEP. The model predicted treatment resistance among the 1081 patients with reasonable accuracy; the model's C-statistic was 0.727 (95% CI 0.723–0.732) prior to shrinkage and 0.687 after adjustment for optimism. Calibration was good (expected/observed ratio: 0.999; calibration-in-the-large: 0.000584) after adjustment for optimism.
Conclusions
We developed and internally validated a prediction model with reasonably good predictive metrics. Clinicians, patients and carers were involved in the development process. External validation of the tool is needed followed by co-design methodology to support implementation in early intervention services.
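The modelling and validation approach described in the Method (penalised logistic regression with bootstrap optimism adjustment of the C-statistic) can be sketched as below. The data, the seven predictors and the ridge penalty choice are synthetic stand-ins; this is not the GAP/AESOP-10 model, and multiple imputation is omitted:

```python
# Hedged sketch: penalised (ridge) logistic regression with a bootstrap
# estimate of optimism in the C-statistic (AUC). Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 1081, 7                               # cohort size and 7 predictors
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ rng.normal(size=p) - 1.5))))

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on each resample, compare its performance on the
# resample with its performance on the original data, and average the gap.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print(f"Apparent C-statistic: {apparent_auc:.3f}")
print(f"Optimism-adjusted C-statistic: {apparent_auc - np.mean(optimism):.3f}")
```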
The COVID-19 pandemic has had major direct (e.g., deaths) and indirect (e.g., social inequities) effects in the United States. While the public health response to the pandemic featured some important successes (e.g., universal masking, and rapid development and approval of vaccines and therapeutics), there were systemic failures (e.g., inadequate public health infrastructure) that overshadowed these successes. Key deficiencies in the U.S. response were shortages of personal protective equipment (PPE) and supply chain deficiencies. Recommendations are provided for mitigating supply shortages and supply chain failures in healthcare settings in future pandemics. Key recommendations for preventing shortages of essential components of infection control and prevention include increasing the PPE stockpile in the U.S. Strategic National Stockpile, increasing the transparency of the Stockpile, invoking the Defense Production Act at an early stage, and rapid review and authorization by the FDA/EPA/OSHA of products not approved in the U.S. Recommendations are also provided for mitigating shortages of diagnostic testing, medications and medical equipment.
Throughout the COVID-19 pandemic, many areas in the United States experienced healthcare personnel (HCP) shortages tied to a variety of factors. Infection prevention programs, in particular, faced increasing workload demands with little opportunity to delegate tasks to others without specific infectious diseases or infection control expertise. Shortages of clinicians providing inpatient care to critically ill patients during the early phase of the pandemic were multifactorial, largely attributed to increasing demands on hospitals to provide care to patients hospitalized with COVID-19 and to furloughs.1 HCP shortages and challenges during later surges, including the Omicron variant-associated surges, were largely attributed to HCP infections and the associated work restrictions during isolation periods, and to the need to care for family members, particularly children, with COVID-19. Additionally, the detrimental physical and mental health impact of COVID-19 on HCP has led to attrition, further exacerbating shortages.2 Demands increased in post-acute and long-term care (PALTC) settings, which already faced critical staffing challenges, difficulty with recruitment, and high rates of turnover. Although individual healthcare organizations and state and federal governments have taken actions to mitigate recurring shortages, additional work and innovation are needed to develop longer-term solutions to improve healthcare workforce resiliency. The critical role of those with specialized training in infection prevention, including healthcare epidemiologists, was well demonstrated in pandemic preparedness and response. The COVID-19 pandemic underscored the need to support growth in these fields.3 This commentary outlines the need to develop the US healthcare workforce in preparation for future pandemics.
Throughout history, pandemics and their aftereffects have spurred society to make substantial improvements in healthcare. After the Black Death in 14th century Europe, changes were made to elevate standards of care and nutrition that resulted in improved life expectancy.1 The 1918 influenza pandemic spurred a movement that emphasized public health surveillance and detection of future outbreaks and eventually led to the creation of the World Health Organization Global Influenza Surveillance Network.2 In the present, the COVID-19 pandemic exposed many of the pre-existing problems within the US healthcare system, which included (1) a lack of capacity to manage a large influx of contagious patients while simultaneously maintaining routine and emergency care to non-COVID patients; (2) a “just in time” supply network that led to shortages and competition among hospitals, nursing homes, and other care sites for essential supplies; and (3) longstanding inequities in the distribution of healthcare and the healthcare workforce. The decades-long shift from domestic manufacturing to a reliance on global supply chains has compounded ongoing gaps in preparedness for supplies such as personal protective equipment and ventilators. Inequities in racial and socioeconomic outcomes highlighted during the pandemic have accelerated the call to focus on diversity, equity, and inclusion (DEI) within our communities. The pandemic accelerated cooperation between government entities and the healthcare system, resulting in swift implementation of mitigation measures, new therapies and vaccinations at unprecedented speeds, despite our fragmented healthcare delivery system and political divisions. Still, widespread misinformation or disinformation and political divisions contributed to eroded trust in the public health system and prevented an even uptake of mitigation measures, vaccines and therapeutics, impeding our ability to contain the spread of the virus in this country.3 Ultimately, the lessons of COVID-19 illustrate the need to better prepare for the next pandemic. Rising microbial resistance, emerging and re-emerging pathogens, increased globalization, an aging population, and climate change are all factors that increase the likelihood of another pandemic.4
Knowledge of sex differences in risk factors for posttraumatic stress disorder (PTSD) can contribute to the development of refined preventive interventions. Therefore, the aim of this study was to examine if women and men differ in their vulnerability to risk factors for PTSD.
Methods
As part of the longitudinal AURORA study, 2924 patients seeking emergency department (ED) treatment in the acute aftermath of trauma provided self-report assessments of pre-, peri-, and post-traumatic risk factors, as well as 3-month PTSD severity. We systematically examined sex-dependent effects of 16 risk factors that have previously been hypothesized to show different associations with PTSD severity in women and men.
Results
Women reported higher PTSD severity at 3 months post-trauma. Z-score comparisons indicated that, for five of the 16 examined risk factors, the association with 3-month PTSD severity was stronger in men than in women. In multivariable models, interaction effects with sex were observed for pre-traumatic anxiety symptoms and acute dissociative symptoms; both showed stronger associations with PTSD in men than in women. Subgroup analyses suggested trauma type-conditional effects.
Conclusions
Our findings indicate mechanisms to which men might be particularly vulnerable, demonstrating that known PTSD risk factors might behave differently in women and men. The analyses did not identify any risk factors to which women were more vulnerable than men, suggesting that other mechanisms are needed to explain women's higher PTSD risk. Our study illustrates the need for a more systematic examination of sex differences in contributors to PTSD severity after trauma, which may inform refined preventive interventions.
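One standard way to compare the strength of a risk factor's association between two groups, of the kind the Z-score comparisons above refer to, is a Fisher r-to-z test on group-specific correlations. The sketch below is illustrative only: the correlations and sample sizes are hypothetical, and the study's exact procedure may differ:

```python
# Illustrative z-test comparing the strength of a risk factor's association
# with PTSD severity between women and men (Fisher r-to-z transformation).
import numpy as np
from scipy.stats import norm

def compare_correlations(r_women, n_women, r_men, n_men):
    z_w, z_m = np.arctanh(r_women), np.arctanh(r_men)   # Fisher transform
    se = np.sqrt(1 / (n_women - 3) + 1 / (n_men - 3))
    z = (z_w - z_m) / se
    return z, 2 * norm.sf(abs(z))                        # two-sided p-value

z, p = compare_correlations(r_women=0.25, n_women=1900, r_men=0.40, n_men=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```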
Marine litter poses a complex challenge in Indonesia, necessitating a well-informed and coordinated strategy for effective mitigation. This study investigates the seasonality of plastic concentrations around Sulawesi Island in central Indonesia during monsoon-driven wet and dry seasons. By using open data and methodologies including the HYCOM and Parcels models, we simulated the dispersal of plastic waste over 3 months during both the southwest and northeast monsoons. Our research extended beyond data analysis, as we actively engaged with local communities, researchers and policymakers through a range of outreach initiatives, including the development of a web application to visualize model results. Our findings underscore the substantial influence of monsoon-driven currents on surface plastic concentrations, highlighting the seasonal variation in the risk to different regional seas. This study adds to the evidence provided by coarser resolution regional ocean modelling studies, emphasizing that seasonality is a key driver of plastic pollution within the Indonesian archipelago. Inclusive international collaboration and a community-oriented approach were integral to our project, and we recommend that future initiatives similarly engage researchers, local communities and decision-makers in marine litter modelling results. This study aims to support the application of model results in solutions to the marine litter problem.
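For orientation, a Lagrangian particle-tracking setup of the kind described above can be written with the open-source Parcels library driven by HYCOM currents. The sketch below is a minimal, hedged example: the file names, variable names, release locations and output format are placeholders and may need adjusting to the HYCOM product and Parcels version used; it is not the study's configuration:

```python
# Minimal sketch of advecting virtual plastic particles with Parcels over
# HYCOM surface currents (file/variable names and release sites are placeholders).
from datetime import timedelta
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4

# Assumed HYCOM current file with eastward/northward velocities.
fieldset = FieldSet.from_netcdf(
    filenames={"U": "hycom_uv.nc", "V": "hycom_uv.nc"},
    variables={"U": "water_u", "V": "water_v"},
    dimensions={"lat": "lat", "lon": "lon", "time": "time"},
)

# Hypothetical release locations around Sulawesi.
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
                   lon=[120.0, 121.5], lat=[-2.0, -1.0])

output = pset.ParticleFile(name="plastic_tracks.zarr", outputdt=timedelta(hours=24))
pset.execute(AdvectionRK4, runtime=timedelta(days=90),   # one monsoon season
             dt=timedelta(minutes=10), output_file=output)
```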