In low- and middle-income countries, fewer than 1 in 10 people with mental health conditions are estimated to be accurately diagnosed in primary care. This is despite more than 90 countries providing mental health training for primary healthcare workers in the past two decades. The lack of accurate diagnoses is a major bottleneck to reducing the global mental health treatment gap. In this commentary, we argue that current research practices are insufficient to generate the evidence needed to improve diagnostic accuracy. Research studies commonly determine accurate diagnosis by relying on self-report tools such as the Patient Health Questionnaire-9. This is problematic because self-report tools often overestimate prevalence, primarily owing to their high rates of false positives. Moreover, nearly all studies on detection focus solely on depression, not taking into account the spectrum of conditions on which primary healthcare workers are being trained. Single-condition self-report tools fail to discriminate among different types of mental health conditions, leaving a heterogeneous group of conditions masked under a single scale. As an alternative path forward, we propose improving research on diagnostic accuracy to better evaluate the reach of mental health service delivery in primary care. We recommend evaluating multiple conditions, statistically adjusting prevalence estimates generated from self-report tools, and consistently using structured clinical interviews as the gold standard. We propose clinically meaningful detection as ‘good-enough’ diagnoses that incorporate multiple conditions and account for context, the health system and the types of interventions available. Clinically meaningful identification can be operationalized differently across settings, based on the level of diagnostic specificity needed to select from available treatments. Rethinking research strategies for evaluating the accuracy of diagnosis is vital to improving training, supervision and delivery of mental health services around the world.
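One common way to operationalize the statistical adjustment recommended above is the Rogan–Gladen correction, which rescales the apparent prevalence from a self-report screen using the tool's sensitivity and specificity against a structured interview. The sketch below is illustrative only; the sensitivity, specificity and prevalence figures are hypothetical placeholders, not values from any particular PHQ-9 validation study.

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Adjust a screen-based prevalence estimate for imperfect test accuracy.

    true ~= (apparent + specificity - 1) / (sensitivity + specificity - 1),
    clipped to [0, 1] because sampling error can push the estimate outside that range.
    """
    adjusted = (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(adjusted, 0.0), 1.0)


# Hypothetical example: a screen suggests 25% prevalence, but the tool has
# sensitivity 0.85 and specificity 0.80 against a structured clinical interview.
print(rogan_gladen(0.25, sensitivity=0.85, specificity=0.80))  # ~0.08
```

The example illustrates the point made above: with realistic specificity, most screen positives can be false positives, so the adjusted prevalence is far below the apparent one.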
Carbapenem-resistant Enterobacterales (CRE) are an urgent threat to healthcare, but the epidemiology of these antimicrobial-resistant organisms may be evolving in some settings since the COVID-19 pandemic. An updated analysis of hospital-acquired CRE (HA-CRE) incidence in community hospitals is needed.
Methods:
We retrospectively analyzed data on HA-CRE cases and antimicrobial utilization (AU) from two community hospital networks, the Duke Infection Control Outreach Network (DICON) and the Duke Antimicrobial Stewardship Outreach Network (DASON), from January 2013 to June 2023. A zero-inflated negative binomial regression model was used owing to excess zeros.
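As a rough illustration of the modeling approach described above (not the authors' code, data or variable names), a zero-inflated negative binomial model for monthly HA-CRE counts with patient-days as the exposure could be fitted in Python with statsmodels; the file and column names below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Hypothetical monthly panel: one row per hospital-month.
df = pd.read_csv("ha_cre_monthly.csv")  # assumed columns: cases, patient_days, month_index, post_covid

X = sm.add_constant(df[["month_index", "post_covid"]])  # time trend and post-pandemic indicator

model = ZeroInflatedNegativeBinomialP(
    endog=df["cases"],                # monthly HA-CRE counts, many of which are zero
    exog=X,                           # covariates for the count component
    exog_infl=np.ones((len(df), 1)),  # intercept-only zero-inflation component
    exposure=df["patient_days"],      # patient-days, so count coefficients are log rate ratios
)
result = model.fit(method="bfgs", maxiter=500)
print(np.exp(result.params))          # exponentiated coefficients; count-model terms read as rate ratios
```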
Results:
A total of 126 HA-CRE cases from 36 hospitals were included in the longitudinal analysis. The pooled incidence of HA-CRE was 0.69 per 100,000 patient days (95% confidence interval [CI], 0.57–0.82). The HA-CRE rate significantly decreased over time before COVID-19 (rate ratio [RR], 0.94 [95% CI, 0.89–0.99]; p = 0.02), but there was a significant slope change indicating an increasing trend in HA-CRE after COVID-19 (RR, 1.32 [95% CI, 1.06–1.66]; p = 0.01). In 21 hospitals participating in both DICON and DASON from January 2018 to June 2023, there was a correlation between HA-CRE rates and AU for CRE treatment (Spearman’s coefficient = 0.176; p < 0.01). Anti-CRE AU did not change over time, and there was no level or slope change after COVID-19.
Conclusions:
The incidence of HA-CRE decreased before COVID-19 in a network of community hospitals in the southeastern United States, but this trend was disrupted by the COVID-19 pandemic.
This survey of 66 specialist mental health services aimed to provide an up-to-date description of pathways of care and interventions available to children with an intellectual disability referred for behaviours that challenge or with suspected mental health problems.
Results
Overall, 24% of services made contact with a family at referral stage, whereas 29% contacted families at least once during the waiting list phase. Only two in ten services offered any therapeutic input during the referral or waiting list stages. During the active caseload phase, services offered mostly psychoeducation (52–59%), followed by applied behaviour analytic approaches for behaviours that challenge (52%) and cognitive–behavioural therapy (41%). Thirty-six per cent of services had not offered any packaged or named intervention in the past 12 months.
Clinical implications
With increasing waiting times for specialist mental health support, services need to consider increasing the amount of contact and therapeutic input on offer throughout all stages of a child's journey with the service.
To investigate the symptoms of SARS-CoV-2 infection, their dynamics and their discriminatory power for the disease, using longitudinally, prospectively collected information reported at the time of their occurrence. We analysed data from a large UK phase 3 COVID-19 vaccine clinical trial. The alpha variant was the predominant strain. Participants were assessed for SARS-CoV-2 infection via nasal/throat PCR at recruitment, at vaccination appointments and when symptomatic. Statistical techniques were implemented to infer estimates representative of the UK population, accounting for multiple symptomatic episodes associated with one individual. An optimal diagnostic model for SARS-CoV-2 infection was derived. The 4-month prevalence of SARS-CoV-2 was 2.1%, increasing to 19.4% (16.0%–22.7%) in participants reporting loss of appetite and 31.9% (27.1%–36.8%) in those with anosmia/ageusia. The model identified anosmia and/or ageusia, fever, congestion, and cough as significantly associated with SARS-CoV-2 infection. Symptom dynamics differed markedly between the two groups: symptoms started slowly, peaked later and lasted longer in PCR-positive participants, whereas they declined steadily in PCR-negative participants, who reported, on average, fewer than 3 days of symptoms. Anosmia/ageusia peaked late in confirmed SARS-CoV-2 infection (day 12), indicating low discriminatory power for early disease diagnosis.
Mild traumatic brain injury (mTBI), depression, and posttraumatic stress disorder (PTSD) are a notable triad in Operation Enduring Freedom, Operation Iraqi Freedom, and Operation New Dawn (OEF/OIF/OND) Veterans. With the comorbidity of depression and PTSD in Veterans with mTBI histories, and their role in exacerbating cognitive and emotional dysfunction, interventions addressing cognitive and psychiatric functioning are critical. Compensatory Cognitive Training (CCT) is associated with improvements in areas such as prospective memory, attention, and executive functioning and has also yielded small-to-medium treatment effects on PTSD and depressive symptom severity. Identifying predictors of psychiatric symptom change following CCT would further inform the interventional approach. We sought to examine neuropsychological predictors of PTSD and depressive symptom improvement in Veterans with a history of mTBI who received CCT.
Participants and Methods:
37 OEF/OIF/OND Veterans with mTBI history and cognitive complaints received 10 weekly 120-minute CCT group sessions as part of a clinical trial. Participants completed a baseline neuropsychological assessment including tests of premorbid functioning, attention/working memory, processing speed, verbal learning/memory, and executive functioning, and completed psychiatric symptom measures (PTSD Checklist-Military Version; Beck Depression Inventory-II) at baseline, post-treatment, and 5-week follow-up. Paired-samples t-tests were used to examine statistically significant change in PTSD (total and symptom cluster scores) and depressive symptom scores over time. Pearson correlations were calculated between neuropsychological scores and PTSD and depressive symptom change scores at post-treatment and follow-up. Neuropsychological measures identified as significantly correlated with psychiatric symptom change scores (p < .05) were entered as independent variables in separate multiple linear regression analyses to predict symptom change at post-treatment and follow-up.
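For readers unfamiliar with this kind of pipeline, the sketch below shows the generic sequence of paired t-test, bivariate correlation and multiple regression described above. The data file and column names are hypothetical placeholders, not the study's actual measures or code.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical per-participant file with baseline and post-treatment scores.
df = pd.read_csv("cct_outcomes.csv")

# Paired-samples t-test: change in PTSD symptom totals from baseline to post-treatment.
t_stat, p_val = stats.ttest_rel(df["pcl_baseline"], df["pcl_post"])

# Pearson correlation between a baseline neuropsychological score and symptom change.
df["pcl_change"] = df["pcl_baseline"] - df["pcl_post"]   # positive values = improvement
r, p_r = stats.pearsonr(df["category_fluency_baseline"], df["pcl_change"])

# Multiple linear regression: predictors correlated at p < .05 entered together.
X = sm.add_constant(df[["category_fluency_baseline", "category_switching_baseline"]])
fit = sm.OLS(df["pcl_change"], X).fit()
print(fit.summary())
```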
Results:
Over 50% of CCT participants had clinically meaningful improvement in depressive symptoms (>17.5% score reduction) and over 20% had clinically meaningful improvement in PTSD symptoms (>10-point improvement) at post-treatment and follow-up. Examination of PTSD symptom cluster scores (re-experiencing, avoidance/numbing, and arousal) revealed a statistically significant improvement in avoidance/numbing at follow-up. Bivariate correlations indicated that worse baseline performance on D-KEFS Category Fluency was moderately associated with PTSD symptom improvement at post-treatment. Worse performance on both D-KEFS Category Fluency and Category Switching Accuracy was associated with improvement in depressive symptoms at post-treatment and follow-up. Worse performance on D-KEFS Trail Making Test Switching was associated with improvement in depressive symptoms at follow-up. Subsequent regression analyses revealed worse processing speed and worse aspects of executive functioning at baseline significantly predicted depressive symptom improvement at post-treatment and follow-up.
Conclusions:
Worse baseline performances on tests of processing speed and aspects of executive functioning were significantly associated with improvements in PTSD and depressive symptoms during the trial. Our results suggest that cognitive training may bolster skills that are helpful for PTSD and depressive symptom reduction and that those with worse baseline functioning may benefit more from treatment because they have more room to improve. Although CCT is not a primary treatment for PTSD or depressive symptoms, our results support consideration of including CCT in hybrid treatment approaches. Further research should examine these relationships in larger samples.
Little is known about the skills involved in clinical formulation. The individual case formulation (ICF) approach, based on functional analysis, employs clinical descriptions that are theory-free and depicts formulations constructed according to a set of basic conventions.
Aims:
We report a test of whether this method could be taught and whether the quality of the resulting diagrams could be reliably rated.
Method:
Participants (n=40) attended a training course in formulation. A draft rating scale was refined in the course of rating formulation diagrams, and basic inter-rater reliability was established.
Results:
Results of the study support further development of the ICF approach.
Background:
Burkholderia multivorans is a gram-negative bacterium typically found in water and soil. B. multivorans outbreaks among patients without cystic fibrosis have been associated with exposure to contaminated medical devices or nonsterile aqueous products. Acquisition can also occur from exposure to environmental reservoirs such as sinks or other hospital water sources. We describe an outbreak of B. multivorans among hospitalized patients without cystic fibrosis at 2 hospitals within the same healthcare system in California (hospitals A and B) between August 2021 and July 2022.
Methods:
We defined confirmed case patients as patients without cystic fibrosis hospitalized at hospital A or hospital B between January 2020 and July 2022 with B. multivorans isolated from any body site and matching the outbreak strain. We reviewed medical records to describe case patients and to identify common exposures. We evaluated infection control practices and interviewed staff to detect exposures to nonsterile water. Select samples from water, ice, drains, and sink splash zone surfaces were collected from both hospitals and cultured for B. multivorans in March 2022 and July 2022. Common aqueous products used among case patients were tested for B. multivorans. Genetic relatedness between clinical and environmental samples was determined using random amplified polymorphic DNA (RAPD) and repetitive extragenic palindromic polymerase chain reaction (Rep-PCR).
Results:
We identified 23 confirmed case patients; 20 (87%) of these were identified at an intensive care unit (ICU) in hospital A. B. multivorans was isolated from respiratory sources in 18 cases (78%). We observed medication preparation items, gloves, and patient care items stored within sink splash zones in ICU medication preparation rooms and patient rooms. Nonsterile water and ice were used for bed baths, swallow evaluations, and ice packs. B. multivorans was cultured from ice and water dispensed from an 11-year-old ice machine in the ICU at hospital A in March 2022 but from no other water sources. Additional testing in July 2022 yielded B. multivorans from ice and a drain pan from a new ice machine in the same ICU location at hospital A. All products were negative. Clinical and environmental isolates were the same strain by RAPD and Rep-PCR.
Conclusions:
The use of nonsterile water and ice from a contaminated ice machine contributed to this outbreak. Water-related fixtures can serve as reservoirs for Burkholderia, posing an infection risk to hospitalized and immunocompromised patients. During outbreaks of water-related organisms such as B. multivorans, nonsterile water and ice use should be investigated as potential sources of transmission, and other options should be considered, especially for critically ill patients.
Gestational diabetes is treated with medical nutrition therapy, delivered by healthcare professionals; however, the optimal diet for affected women is unknown. Randomised controlled trials, such as the DiGest (Dietary Intervention in Gestational Diabetes) trial, will address this knowledge gap, but the acceptability of whole-diet interventions in pregnancy is unclear. Whole-diet approaches reduce bias but require high levels of participant commitment and long intervention periods to generate meaningful clinical outcomes. We aimed to assess healthcare professionals’ views on the acceptability of the DiGest dietbox intervention for women with gestational diabetes and to identify any barriers to adherence which could be addressed to support good recruitment and retention in the DiGest trial. Female healthcare professionals (n 16) were randomly allocated to receive a DiGest dietbox containing 1200 or 2000 kcal/d and at least one week’s food. A semi-structured interview was conducted to explore participants’ experience of the intervention. Interviews were audio-recorded, transcribed verbatim and analysed thematically using NVivo software. Based on the findings of the qualitative interviews, modifications were made to the dietboxes. Participants found the dietboxes convenient and enjoyed the variety and taste of the meals. Factors which facilitated adherence included participants having a good understanding of the study aims and sufficient organisational skills to plan weekly meals in advance. Barriers to adherence included peer pressure during social occasions and feelings of deprivation or hunger (affecting both standard and reduced-calorie groups). Healthcare professionals considered random allocation to a whole-diet replacement intervention acceptable and feasible in a clinical environment, and felt that it offered benefits to participants, including convenience.
Individuals with psychosis have poor oral health compared with the general population. The interaction between oral health and psychosis is likely to be complex and have important ramifications for improving dental and mental health outcomes. However, this relationship is poorly understood and rarely studied using qualitative methods.
Aims
To explore patient perspectives on the relationship between oral health and psychosis.
Method
The authors recruited 19 people with experiences of psychosis from community mental health teams, early intervention in psychosis services, and rehabilitation units. Participants completed a qualitative interview. Transcripts were analysed with reflexive thematic analysis.
Results
The analysis resulted in three themes: theme 1, psychosis creates barriers to good oral health, including a detachment from reality, the threat of unusual experiences and increased use of substances; theme 2, the effects of poor oral health in psychosis, with ramifications for self-identity and social relationships; and theme 3, systems for psychosis influence oral health, with central roles for formal and informal support networks.
Conclusions
Psychosis was perceived to affect adherence to oral health self-care behaviours and overall oral health. Poor oral health negatively affected self-identity and social relationships. Clinical implications include a systemic approach to provide early intervention and prevention of the sequelae of dental disease, which lead to tooth loss and impaired oral function and aesthetics, which in turn affect mental health. Participants felt that mental health services play an important role in supporting people with oral health.
Internationally, an increasing proportion of emergency department visits are mental health related. Concurrently, psychiatric wards are often occupied above capacity. Healthcare providers have introduced short-stay, hospital-based crisis units offering a therapeutic space for stabilisation, assessment and appropriate referral. Research lags behind roll-out, and a review of the evidence is urgently needed to inform policy and further introduction of similar units.
Aims
This systematic review aims to evaluate the effectiveness of short-stay, hospital-based mental health crisis units.
Method
We searched EMBASE, Medline, CINAHL and PsycINFO up to March 2021. All designs incorporating a control or comparison group were eligible for inclusion, and all effect estimates with a comparison group were extracted and combined meta-analytically where appropriate. We assessed study risk of bias with Risk of Bias in Non-Randomized Studies – of Interventions and Risk of Bias in Randomized Trials.
Results
Data from twelve studies across six countries (Australia, Belgium, Canada, The Netherlands, UK and USA) and 67 505 participants were included. Data indicated that units delivered benefits on many outcomes. Units could reduce psychiatric holds (42% after intervention compared with 49.8% before intervention; difference = 7.8%; P < 0.0001) and increase out-patient follow-up care (χ2 = 37.42, d.f. = 1; P < 0.001). Meta-analysis indicated a significant reduction in length of emergency department stay (by 164.24 min; 95% CI −261.24 to −67.23 min; P < 0.001) and number of in-patient admissions (odds ratio 0.55, 95% CI 0.43–0.68; P < 0.001).
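The pooled odds ratio above is the kind of estimate produced by inverse-variance meta-analysis of study-level effects. The sketch below shows the generic fixed-effect calculation with made-up study values, not the review's actual data, as a reminder of how such pooled estimates and their confidence intervals arise.

```python
import numpy as np

# Hypothetical per-study odds ratios and 95% CIs for in-patient admission (illustrative only).
odds_ratios = np.array([0.48, 0.61, 0.55])
ci_lower = np.array([0.30, 0.45, 0.38])
ci_upper = np.array([0.77, 0.83, 0.80])

log_or = np.log(odds_ratios)
se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)   # standard errors recovered from the CIs
weights = 1.0 / se**2                                      # inverse-variance weights

pooled = np.sum(weights * log_or) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled OR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```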
Conclusions
Short-stay mental health crisis units are effective for reducing emergency department wait times and in-patient admissions. Further research should investigate the impact of units on patient experience, and clinical and social outcomes.
The Mesoproterozoic is an important era for the development of eukaryotic organisms in oceans. The earliest unambiguous eukaryotic microfossils are reported from late Paleoproterozoic shales in China and Australia. During the Mesoproterozoic, eukaryotes diversified in taxonomy, metabolism, and ecology, with the advent of eukaryotic photosynthesis, osmotrophy, multicellularity, and predation. Despite these biological innovations, their fossil record is scarce before the late Mesoproterozoic. Here, we document an assemblage of organic-walled microfossils from the 1590–1270 Ma Dismal Lakes Group in Canada. The assemblage comprises 25 taxa, including 11 morphospecies identified as eukaryotes, a relatively high diversity for this period. We also report one new species, Dictyosphaera smaugi new species, and one unnamed taxon. The diversity of eukaryotic forms in this succession is comparable to that of slightly older assemblages from China, is higher than that of contemporaneous assemblages worldwide, and supports the hypothesis of an earlier diversification of eukaryotes in the Mesoproterozoic.
Ecosystems across the globe are vulnerable to the effects of climate change, as are the communities that depend on them. However, ecosystems can also protect people from climate change impacts. As the evidence base strengthens, nature-based solutions (NbS) are increasingly prominent in climate change policy, especially in developing nations. Yet intentions rarely translate into measurable, evidence-based targets. As Paris Agreement signatories revise their Nationally Determined Contributions, we argue that NbS are key to meeting global goals for climate and biodiversity, and we urge researchers to work more closely with policy-makers to identify targets that benefit both people and ecosystems.
Data related to brain function may have the potential to improve the reliability and validity of assessments for the aetiologically and clinically heterogeneous syndrome of attention-deficit/hyperactivity disorder (ADHD). This study investigated associations between questionnaire assessments of behavioural features of adults with ADHD and an aspect of neurocognitive performance which has been reported to be impaired in adults with ADHD.
Methods
Fifty-nine adult patients with a DSM-IV diagnosis of ADHD, and their informants, completed questionnaires related to aspects of severity of ADHD. Associations were examined between questionnaire ratings and performance on a computer-administered task of spatial working memory (SWM).
Results
Correlations between ratings of ADHD and SWM indicated moderate but significant correlations for patients' ratings, but not for informants' ratings. Also, patients who reported a past history of ‘self-harm’ (N = 33) had a significantly worse mean performance on both measures of SWM (p = 0.004, 0.003).
Conclusions
The results indicate that aspects of impulsivity, i.e. self-ratings of ‘emotive’ behaviour (involving rapid response to stimuli and marked reactivity of mood) and of past ‘self-harm’, show relatively strong associations with SWM performance in adults selected on the basis of an ADHD diagnosis. A profile of neurocognitive performances may have a role in the assessment of ADHD.
We present an analysis of global sympagic primary production (PP) from 300 years of pre-industrial and historical simulations of the E3SMv1.1-BGC model. The model includes a novel, eight-element sea ice biogeochemical component, MPAS-Seaice zbgc, which is resolved in three spatial dimensions and uses a vertical transport scheme based on internal brine dynamics. Modeled ice algal chlorophyll-a concentrations and column-integrated values are broadly consistent with observations, though chl-a profile fractions indicate that upper-ice communities of the Southern Ocean are underestimated. Simulations of polar integrated sea ice PP support the lower bound of published estimates for both polar regions, with mean values of 7.5 TgC/a in the Arctic and 15.5 TgC/a in the Southern Ocean. However, comparisons of the polar climate state with observations, using a maximal bound for ice algal growth rates, suggest that the Arctic lower bound is a significant underestimation driven by biases in ocean surface nitrate, and that correction of these biases supports as much as 60.7 TgC/a of net Arctic PP. Simulated Southern Ocean sympagic PP is predominantly light-limited, and regional patterns, particularly in the coastal high-production band, are found to be negatively correlated with snow thickness.
Scientific observations of sea ice began more than a century ago, but detailed sea-ice models appeared only in the latter half of the last century. The high albedo of sea ice is critical for the Earth’s heat balance, and ice motion across the ocean’s surface transports fresh water and salt. The basic components in a complete sea-ice model must include vertical thermodynamics and horizontal dynamics, including a constitutive relation for the ice, advection and deformational processes. This overview surveys topics in sea-ice modeling from the global climate modeling perspective, emphasizing work that significantly advanced the state of the art and highlighting promising new developments.
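The "horizontal dynamics" component referred to above is usually expressed as a momentum balance of the form below, standard in the sea-ice modeling literature (e.g., Hibler-type models); the notation here is generic rather than tied to any one model, and the constitutive relation supplying the internal stress is left unspecified.

```latex
% Sea-ice momentum balance per unit area: m is ice mass per unit area, \mathbf{u} the
% ice velocity, \boldsymbol{\sigma} the internal ice stress from the constitutive relation,
% \boldsymbol{\tau}_a and \boldsymbol{\tau}_w the air and ocean stresses, f the Coriolis
% parameter, g gravity, and H_o the sea-surface height.
m \frac{\partial \mathbf{u}}{\partial t}
  = \nabla \cdot \boldsymbol{\sigma}
  + \boldsymbol{\tau}_a + \boldsymbol{\tau}_w
  - m f \, \hat{\mathbf{k}} \times \mathbf{u}
  - m g \, \nabla H_o
```

Solving this balance together with the vertical thermodynamics, advection and deformation processes mentioned above is what distinguishes a complete sea-ice model from a purely thermodynamic slab model.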
Objectives:
Huntington’s disease (HD) is a debilitating genetic disorder characterized by motor, cognitive and psychiatric abnormalities associated with neuropathological decline. HD pathology is the result of an extended chain of CAG (cytosine, adenine, guanine) trinucleotide repeats in the HTT gene. Clinical diagnosis of HD requires the presence of an otherwise unexplained extrapyramidal movement disorder in a participant at risk for HD. Over the past 15 years, evidence has shown that cognitive, psychiatric, and subtle motor dysfunction is evident decades before traditional motor diagnosis. This study examines the relationships among subcortical brain volumes and measures of the emerging disease phenotype in prodromal HD, before clinical diagnosis.
Methods:
The dataset includes 34 cognitive, motor, psychiatric, and functional variables and five subcortical brain volumes from 984 prodromal HD individuals enrolled in the PREDICT-HD study. Using cluster analyses, seven distinct clusters encompassing cognitive, motor, psychiatric, and functional domains were identified. Individual cluster scores were then regressed against the subcortical brain volumetric measurements.
Results:
Accounting for site and genetic burden (the interaction of age and CAG repeat length), smaller caudate and putamen volumes were related to clusters reflecting motor symptom severity, cognitive control, and verbal learning.
Conclusions:
Variable reduction of the HD phenotype using cluster analysis revealed biologically related domains of HD that are suitable for future research with this population. Our cognitive control cluster scores show sensitivity to changes in the basal ganglia both within and outside the striatum that may not be captured by examining only motor scores. (JINS, 2017, 23, 159–170)
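A minimal sketch of the "cluster, then regress cluster scores on volumes" workflow described above is shown below. It is not the study's analysis code: the clustering method, file name, variable prefixes and covariate names are assumptions chosen for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical wide file: clinical variables (prefixed "var_"), volumes and covariates.
df = pd.read_csv("prodromal_hd.csv")

# Group the clinical variables by how strongly they correlate with one another.
corr = df.filter(like="var_").corr().abs()
condensed = squareform((1 - corr).values, checks=False)          # condensed distance matrix
labels = fcluster(linkage(condensed, method="average"), t=7, criterion="maxclust")

# Hypothetical cluster score (e.g., mean of standardized variables in one cluster)
# regressed on caudate volume, adjusting for study site and age-by-CAG burden.
fit = smf.ols("motor_cluster_score ~ caudate_volume + cap_score + C(site)", data=df).fit()
print(fit.summary())
```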
Large earthquakes can cause population displacement, critical sanitation infrastructure damage, and increased threats to water resources, potentially predisposing populations to waterborne disease epidemics such as cholera.
Problem
The risk of cholera outbreaks after earthquake disasters remains uncertain. A cross-country analysis of World Health Organization (WHO) cholera data that would contribute to this discussion has yet to be published.
Methods
A cross-country longitudinal analysis was conducted among 63 low- and middle-income countries from 1995-2009. The association between earthquake disasters of various effect sizes and a relative spike in cholera rates for a given country was assessed utilizing fixed-effects logistic regression and adjusting for gross domestic product per capita, water and sanitation level, flooding events, percent urbanization, and under-five child mortality. Also, the association between large earthquakes and cholera rate increases of various degrees was assessed.
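For illustration, a fixed-effects logistic regression of the kind described above can be approximated with country dummy variables; a conditional logit is a common alternative. The data file, outcome and covariate names below are hypothetical placeholders, not the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel.
panel = pd.read_csv("cholera_country_years.csv")

# Outcome: indicator that the year's cholera rate exceeded the country's period average.
# Country fixed effects enter as dummy variables via C(country).
model = smf.logit(
    "above_avg_cholera ~ quake_10k_affected + log_gdp_per_capita + water_sanitation"
    " + flood_events + pct_urban + under5_mortality + C(country)",
    data=panel,
)
fit = model.fit(maxiter=500)
# The exponentiated coefficient on the earthquake indicator is the adjusted odds ratio.
print(fit.params["quake_10k_affected"], fit.conf_int().loc["quake_10k_affected"])
```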
Results
Forty-eight of the 63 countries had at least one year with reported cholera infections during the 15-year study period. Thirty-six of these 48 countries had at least one earthquake disaster. In adjusted analyses, country-years with ≥10,000 persons affected by an earthquake had 2.26 times increased odds (95% CI, 0.89-5.72; P = .08) of having a greater than average cholera rate that year compared to country-years having <10,000 individuals affected by an earthquake. The association between large earthquake disasters and cholera infections appeared to weaken as higher levels of cholera rate increases were tested.
Conclusion
A trend of increased risk of greater than average cholera rates when more people were affected by an earthquake in a country-year was noted. However, these findings did not reach statistical significance at traditional levels and may be due to chance. Frequent large-scale cholera outbreaks after earthquake disasters appeared to be relatively uncommon.
Sumner S, Turner E, Thielman N. Association Between Earthquake Events and Cholera Outbreaks: A Cross-country 15-year Longitudinal Analysis. Prehosp Disaster Med. 2013;28(6):1-6.
A well-established provision for mass-casualty decontamination that incorporates the use of mobile showering units has been developed in the UK. The effectiveness of such decontamination procedures will be critical in minimizing or preventing the contamination of emergency responders and hospital infrastructure. The purpose of this study was to evaluate three empirical strategies designed to optimize existing decontamination procedures: (1) instructions in the form of a pictorial aid prior to decontamination; (2) provision of a washcloth within the showering facility; and (3) an extended showering period. The study was a three-factor, between-participants (or “independent”) design with 90 volunteers. The three factors each had two levels: use of washcloths (washcloth/no washcloth), washing instructions (instructions/no instructions), and shower cycle duration (three minutes/six minutes). The effectiveness of these strategies was quantified by whole-body fluorescence imaging following application of a red fluorophore to multiple, discrete areas of the skin. All five showering procedures were relatively effective in removing the fluorophore “contaminant”, but the use of a cloth (in the absence of instructions) led to a significant (∼20%) improvement in the effectiveness of decontamination over the standard protocol (p <0.05). Current mass-casualty decontamination effectiveness, especially in children, can be optimized by the provision of a washcloth. This simple but effective approach indicates the value of performing controlled volunteer trials for optimizing existing decontamination procedures.
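A three-factor, two-level between-participants design like the one described above is typically analysed with a factorial ANOVA on the imaged residual contamination. The sketch below shows that generic analysis; the data file, factor and outcome names are hypothetical and not taken from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical file: one row per volunteer, with the three two-level factors and the outcome.
df = pd.read_csv("decontamination_trial.csv")

# Lower residual fluorescence = more of the simulated contaminant removed.
model = smf.ols(
    "residual_fluorescence ~ C(washcloth) * C(instructions) * C(duration)",
    data=df,
).fit()
print(anova_lm(model, typ=2))   # main effects and interactions for the 2x2x2 design
```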