Converting knowledge from basic research into innovations that improve clinical care requires a specialized workforce that converts a laboratory invention into a product that can be developed and tested for clinical use. As the mandate to demonstrate more real-world impact from the national investment in research continues to grow, the demand for staff who specialize in product development and clinical trials continues to outpace supply. In this study, two academic medical institutions in the greater Houston–Galveston region termed this population the “bridge and clinical research professional” (B + CRP) workforce and assessed its turnover before and after the onset of the COVID-19 pandemic. Both institutions realized growth (1.2- vs 2.3-fold) in B + CRP-specific jobs from 2017 to 2022. Turnover increased 1.5–2-fold after the onset of the pandemic but, unlike turnover in the larger clinical and translational research academic workforce, the instability did not resolve by 2022. These results are a baseline measurement of the instability of our regional B + CRP workforce and have informed the development of a regional alliance of universities, academic medical centers, and economic development organizations in the greater Houston–Galveston region to increase this highly specialized and skilled candidate pool.
Increasing resources are devoted to osteoarthritis surgical care in Australia annually, with significant expenditure attributed to hip and knee arthroplasties. Safe, efficient, and sustainable models of care are required. This study aimed to determine the impact on healthcare costs of implementing an enhanced short-stay model of care (ESS-MOC) for arthroplasty at a national level.
Methods
A budget impact analysis was conducted for hospitals providing arthroplasty surgery over the years 2023 to 2030. Population-based projections of individuals receiving hip or knee arthroplasty for osteoarthritis, obtained from clinical registry and administrative datasets, were applied. The ESS-MOC assigned 30 percent of eligible patients to a shortened acute-ward-stay pathway and outpatient rehabilitation. The remaining 70 percent received a current-practice pathway. The primary outcome was total healthcare cost savings post-implementation of the ESS-MOC, with return on investment (ROI) ratio and hospital bed-days utilized also estimated. Costs are presented in Australian dollars (AUD) and United States dollars (USD), at 2023 prices.
Results
Estimated hospital cost savings for the years 2023 to 2030 from implementing the ESS-MOC were AUD641 million (USD427 million) (95% CI: AUD99 million [USD66 million] to AUD1,250 million [USD834 million]). This corresponds to an ROI ratio of 8.88 (1.3 to 17.9) dollars returned for each dollar invested in implementing the care model. For the period 2023 to 2030, an estimated 337,000 (261,000 to 412,000) acute surgical ward bed-days and 721,000 (471,000 to 1,028,000) rehabilitation bed-days could be saved. Total implementation costs for the ESS-MOC were estimated at AUD72 million (USD46 million) over eight years.
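The headline ROI figure follows directly from the totals reported above; a minimal sketch using only numbers quoted in this abstract (the small gap from the reported 8.88 reflects rounding of the published totals):

```python
# Budget impact figures as reported in the abstract (AUD, millions)
gross_savings = 641        # estimated hospital cost savings, 2023-2030
implementation_cost = 72   # ESS-MOC implementation cost over eight years

# ROI ratio: dollars returned per dollar invested in the care model
roi = gross_savings / implementation_cost
print(round(roi, 2))  # ~8.9, consistent with the reported 8.88
```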
Conclusions
Implementation of an ESS-MOC for eligible arthroplasty patients in Australia would generate significant cost and healthcare resource savings. This budget impact analysis demonstrates a best practice approach to comprehensively assessing value, at a national level, of implementing sustainable models of care in high-burden healthcare contexts. Findings are relevant to other settings where hospital stay following joint arthroplasty remains excessively long.
The association between cannabis and psychosis is established, but the role of underlying genetics is unclear. We used data from the EU-GEI case-control study and UK Biobank to examine the independent and combined effect of heavy cannabis use and schizophrenia polygenic risk score (PRS) on risk for psychosis.
Methods
Genome-wide association study summary statistics from the Psychiatric Genomics Consortium and the Genomic Psychiatry Cohort were used to calculate schizophrenia and cannabis use disorder (CUD) PRS for 1098 participants from the EU-GEI study and 143600 from the UK Biobank. Both datasets had information on cannabis use.
Results
In both samples, schizophrenia PRS and cannabis use independently increased risk of psychosis. Schizophrenia PRS was not associated with patterns of cannabis use in the EU-GEI cases or controls or UK Biobank cases. It was associated with lifetime and daily cannabis use among UK Biobank participants without psychosis, but the effect was substantially reduced when CUD PRS was included in the model. In the EU-GEI sample, regular users of high-potency cannabis had the highest odds of being a case independently of schizophrenia PRS (OR for daily use of high-potency cannabis, adjusted for PRS, = 5.09, 95% CI 3.08–8.43, p = 3.21 × 10⁻¹⁰). We found no evidence of interaction between schizophrenia PRS and patterns of cannabis use.
Conclusions
Regular use of high-potency cannabis remains a strong predictor of psychotic disorder independently of schizophrenia PRS, which does not seem to be associated with heavy cannabis use. These are important findings at a time of increasing use and potency of cannabis worldwide.
To evaluate the impact of a system-wide antimicrobial stewardship program (ASP) update on intravenous (IV)-to-oral (PO) antimicrobial conversion in select community hospitals through pre- and postimplementation trend analysis.
Methods:
Retrospective study across seven hospitals: region one (four hospitals, 827 beds) with tele-ASP managed by infectious diseases (ID)-trained pharmacists and region two (three hospitals, 498 beds) without. Data on nine antimicrobials were collected pre-implementation (April 2022–September 2022) and postimplementation (April 2023–September 2023) for IV and PO days of therapy (DOTs). Antimicrobial administration route and DOTs/1,000 patient days were extracted from the electronic medical record (EMR). Primary outcome: reduction in IV DOTs/1,000 patient days. Secondary outcomes: decrease in IV usage via PO:total antimicrobial ratios and cost reduction.
Results:
In region one, IV usage decreased from 461 to 209/1,000 patient days (P < .001), while PO usage increased from 289 to 412/1,000 patient days (P < .001). Total antimicrobial use decreased from 750 to 621/1,000 patient days (P < .001). In region two, IV usage decreased from 300 to 243/1,000 patient days (P = .005), and PO usage rose from 154 to 198/1,000 patient days (P = .031). The PO:total antimicrobial ratios increased in both regions, from .42–.52 to .60–.70 in region one and from .36–.55 to .46–.55 in region two. IV cost savings amounted to $19,359.77 in region one and $4,038.51 in region two.
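The denominator used throughout these results — days of therapy per 1,000 patient days — is a standard stewardship normalisation; a minimal sketch of the calculation, with illustrative numbers rather than study data:

```python
def dots_per_1000_patient_days(days_of_therapy: float, patient_days: float) -> float:
    """Normalise antimicrobial days of therapy (DOTs) to a rate per
    1,000 patient days, the unit reported in this study."""
    return days_of_therapy / patient_days * 1000

# Illustrative only: 150 IV DOTs accrued over 500 patient days
print(dots_per_1000_patient_days(150, 500))  # 300.0
```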
Conclusion:
The ASP intervention improved IV-to-PO conversion rates in both regions, highlighting the contribution of ID-trained pharmacists in enhancing ASP initiatives in region one and suggesting tele-ASP expansion may be beneficial in resource-constrained settings.
Constipation is overrepresented in people with intellectual disabilities. Around 40% of people with intellectual disabilities who died prematurely were prescribed laxatives. A quarter of people with intellectual disabilities are said to be on laxatives. There are concerns that prescribing is not always effective and appropriate. There are currently no prescribing guidelines specific to this population.
Aims
To develop guidelines to support clinicians with their decision-making when prescribing laxatives to people with intellectual disabilities.
Method
A modified Delphi methodology, the RAND/UCLA Appropriateness Method, was used. Step 1 comprised development of a bespoke six-item, open-ended questionnaire from background literature and its external validation. Relevant stakeholders, including a range of clinical experts and experts by experience covering the full range of intellectual disability and constipation, were invited to participate in an expert panel. Panel members completed the questionnaire. Responses were divided into ‘negative consensus’ and ‘positive consensus’. Members were then invited to two panel meetings, 2 weeks apart, held virtually over Microsoft Teams, to build consensus. The expert-by-experience group were included in a separate face-to-face meeting.
Results
A total of 20 people (ten professional experts and ten experts by experience, of whom seven had intellectual disability) took part. There were five main areas of discussion to reach a consensus: the importance of diagnosis, the role of prescribing, practicalities of medication administration, the importance of reviewing and monitoring, and communication.
Conclusions
Laxative prescribing guidelines were developed by synthesising the knowledge of an expert panel including people with intellectual disabilities with the existing evidence base, to improve patient care.
Various water-based heater-cooler devices (HCDs) have been implicated in nontuberculous mycobacteria outbreaks. Ongoing rigorous surveillance for healthcare-associated M. abscessus (HA-Mab) put in place following a prior institutional outbreak of M. abscessus alerted investigators to a cluster of 3 extrapulmonary M. abscessus infections among patients who had undergone cardiothoracic surgery.
Methods:
Investigators convened a multidisciplinary team and launched a comprehensive investigation to identify potential sources of M. abscessus in the healthcare setting. Adherence to tap water avoidance protocols during patient care and HCD cleaning, disinfection, and maintenance practices were reviewed. Relevant environmental samples were obtained. Patient and environmental M. abscessus isolates were compared using multilocus-sequence typing and pulsed-field gel electrophoresis. Smoke testing was performed to evaluate the potential for aerosol generation and dispersion during HCD use. The entire HCD fleet was replaced to mitigate continued transmission.
Results:
Clinical presentations of case patients and epidemiologic data supported intraoperative acquisition. M. abscessus was isolated from HCDs used on patients, and molecular comparison with patient isolates demonstrated clonality. Smoke testing simulated aerosolization of M. abscessus from HCDs during device operation. Since the HCD fleet was replaced, no additional extrapulmonary HA-Mab infections due to the unique clone identified in this cluster have been detected.
Conclusions:
Despite adhering to HCD cleaning and disinfection strategies beyond manufacturer instructions for use, HCDs became colonized with and ultimately transmitted M. abscessus to 3 patients. Design modifications to better contain aerosols or filter exhaust during device operation are needed to prevent NTM transmission events from water-based HCDs.
Though diet quality is widely recognised as linked to risk of chronic disease, health systems have been challenged to find a user-friendly, efficient way to obtain information about diet. The Penn Healthy Diet (PHD) survey was designed to fill this void. The purposes of this pilot project were to assess the patient experience with the PHD, to validate the accuracy of the PHD against related items in a diet recall and to explore scoring algorithms with relationship to the Healthy Eating Index (HEI)-2015 computed from the recall data. A convenience sample of participants in the Penn Health BioBank was surveyed with the PHD, the Automated Self-Administered 24-hour recall (ASA24) and experience questions. Kappa scores and Spearman correlations were used to compare related questions in the PHD to the ASA24. Numerical scoring, regression tree and weighted regressions were computed for scoring. Participants assessed the PHD as easy to use and were willing to repeat the survey at least annually. The three scoring algorithms were strongly associated with HEI-2015 scores using National Health and Nutrition Examination Survey 2017–2018 data from which the PHD was developed and moderately associated with the pilot replication data. The PHD is acceptable to participants and at least moderately correlated with the HEI-2015. Further validation in a larger sample will enable the selection of the strongest scoring approach.
In sub-Saharan Africa, there are no validated screening tools for delirium in older adults, despite the known vulnerability of older people to delirium and the associated adverse outcomes. This study aimed to assess the effectiveness of a brief smartphone-based assessment of arousal and attention (DelApp) in the identification of delirium amongst older adults admitted to the medical department of a tertiary referral hospital in Northern Tanzania.
Method:
Consecutive admissions were screened using the DelApp during a larger study of delirium prevalence and risk factors. All participants subsequently underwent detailed clinical assessment for delirium by a research doctor. Delirium and dementia were identified against DSM-5 criteria by consensus.
Results:
Complete data were collected for 66 individuals, of whom 15 (22.7%) had delirium, 24.5% had dementia without delirium, and 10.6% had delirium superimposed on dementia. Sensitivity and specificity of the DelApp for delirium were 0.87 and 0.62, respectively (AUROC 0.77), and 0.88 and 0.73 (AUROC 0.85) for major cognitive impairment (dementia and delirium combined). Lower DelApp score was associated with age, significant visual impairment (<6/60 acuity), illness severity, reduced arousal and DSM-5 delirium on univariable analysis, but on multivariable logistic regression only arousal remained significant.
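Sensitivity and specificity follow from the screen's confusion-matrix counts; a minimal sketch in which the counts are illustrative only (chosen so that sensitivity among the 15 delirium cases reproduces the reported 0.87), not data from the study:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: 13 of the 15 delirium cases flagged by the screen,
# 32 of 51 non-delirium participants correctly passed
sens, spec = sensitivity_specificity(tp=13, fn=2, tn=32, fp=19)
print(round(sens, 2), round(spec, 2))  # 0.87 0.63
```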
Conclusion:
In this setting, the DelApp performed well in identifying delirium and major cognitive impairment but did not differentiate delirium and dementia. Performance is likely to have been affected by confounders including uncorrected visual impairment and reduced level of arousal without delirium. Negative predictive value was nevertheless high, indicating excellent ‘rule out’ value in this setting.
Prisons are susceptible to outbreaks. Control measures focusing on isolation and cohorting negatively affect wellbeing. We present an outbreak of coronavirus disease 2019 (COVID-19) in a large male prison in Wales, UK, October 2020 to April 2021, and discuss control measures.
We gathered case-information, including demographics, staff-residence postcode, resident cell number, work areas/dates, test results, staff interview dates/notes and resident prison-transfer dates. Epidemiological curves were mapped by prison location. Control measures included isolation (exclusion from work or cell-isolation), cohorting (new admissions and work-area groups), asymptomatic testing (case-finding), removal of communal dining and movement restrictions. Facemask use and enhanced hygiene were already in place. Whole-genome sequencing (WGS) and interviews determined the genetic relationship between cases and the plausibility of transmission.
Of 453 cases, 53% (n = 242) were staff, most aged 25–34 years (11.5% females, 27.15% males) and symptomatic (64%). Crude attack-rate was higher in staff (29%, 95% CI 26–64%) than in residents (12%, 95% CI 9–15%).
Whole-genome sequencing can help differentiate multiple introductions from person-to-person transmission in prisons. It should be introduced alongside asymptomatic testing as soon as possible to control prison outbreaks. Timely epidemiological investigation, including data visualisation, allowed dynamic risk assessment and proportionate control measures, minimising the reduction in resident welfare.
Wetland sediments are valuable archives of environmental change but can be challenging to date. Terrestrial macrofossils are often sparse, resulting in radiocarbon (14C) dating of less desirable organic fractions. An alternative approach for capturing changes in atmospheric 14C is the use of terrestrial microfossils. We 14C date pollen microfossils from two Australian wetland sediment sequences and compare these to ages from other sediment fractions (n = 56). For the Holocene Lake Werri Berri record, pollen 14C ages are consistent with 14C ages on bulk sediment and humic acids (n = 14), whilst Stable Polycyclic Aromatic Carbon (SPAC) 14C ages (n = 4) are significantly younger. For Welsby Lagoon, pollen concentrate 14C ages (n = 21) provide a stratigraphically coherent sequence back to 50 ka BP. 14C ages from humic acid and >100 µm fractions (n = 13) are inconsistent, and often substantially younger than pollen ages. Our comparison of Bayesian age-depth models, developed in OxCal, Bacon and Undatable, highlights the strengths and weaknesses of the different programs for straightforward and more complex chrono-stratigraphic records. All models display broad similarities but differ in modeled age uncertainty, particularly when age constraints are sparse. Intensive dating of wetland sequences improves the identification of outliers and the generation of robust age models, regardless of the program used.
There is evidence that the COVID-19 pandemic has negatively affected mental health, but most studies have been conducted in the general population.
Aims
To identify factors associated with mental health during the COVID-19 pandemic in individuals with pre-existing mental illness.
Method
Participants (N = 2869, 78% women, ages 18–94 years) from a UK cohort (the National Centre for Mental Health) with a history of mental illness completed a cross-sectional online survey in June to August 2020. Mental health assessments were the GAD-7 (anxiety), PHQ-9 (depression) and WHO-5 (well-being) questionnaires, and a self-report question on whether their mental health had changed during the pandemic. Regressions examined associations between mental health outcomes and hypothesised risk factors. Secondary analyses examined associations between specific mental health diagnoses and mental health.
Results
A total of 60% of participants reported that mental health had worsened during the pandemic. Younger age, difficulty accessing mental health services, low income, income affected by COVID-19, worry about COVID-19, reduced sleep and increased alcohol/drug use were associated with increased depression and anxiety symptoms and reduced well-being. Feeling socially supported by friends/family/services was associated with better mental health and well-being. Participants with a history of anxiety, depression, post-traumatic stress disorder or eating disorder were more likely to report that mental health had worsened during the pandemic than individuals without a history of these diagnoses.
Conclusions
We identified factors associated with worse mental health during the COVID-19 pandemic in individuals with pre-existing mental illness, in addition to specific groups potentially at elevated risk of poor mental health during the pandemic.
The mental health impact of the initial years of military service is an under-researched area. This study is the first to explore mental health trajectories and associated predictors in military members across the first 3–4 years of their career to provide evidence to inform early interventions.
Methods
This prospective cohort study surveyed Australian Defence personnel (n = 5329) at four time-points across their early military career. Core outcomes were psychological distress (K10+) and posttraumatic stress symptoms [four-item PTSD Checklist (PCL-4)] with intra-individual, organizational and event-related trajectory predictors. Latent class growth analyses (LCGAs) identified subgroups within the sample that followed similar longitudinal trajectories for these outcomes, while conditional LCGAs examined the variables that influenced patterns of mental health.
Results
Three clear trajectories emerged for psychological distress: resilient (84.0%), worsening (9.6%) and recovery (6.5%). Four trajectories emerged for post-traumatic stress, including resilient (82.5%), recovery (9.6%), worsening (5.8%) and chronic subthreshold (2.3%) trajectories. Across both outcomes, prior trauma exposure alongside modifiable factors, such as maladaptive coping styles, and increased anger and sleep difficulties were associated with the worsening and chronic subthreshold trajectories, whilst members in the resilient trajectories were more likely to be male, report increased social support from family/friends and Australian Defence Force (ADF) sources, and use adaptive coping styles.
Conclusions
The emergence of symptoms of mental health problems occurs early in the military lifecycle for a significant proportion of individuals. Modifiable factors associated with wellbeing identified in this study are ideal targets for intervention, and should be embedded and consolidated throughout the military career.
Relapse and recurrence of depression are common, contributing to the overall burden of depression globally. Accurate prediction of relapse or recurrence while patients are well would allow the identification of high-risk individuals and may effectively guide the allocation of interventions to prevent relapse and recurrence.
Aims
To review prognostic models developed to predict the risk of relapse, recurrence, sustained remission, or recovery in adults with remitted major depressive disorder.
Method
We searched the Cochrane Library (current issue); Ovid MEDLINE (1946 onwards); Ovid Embase (1980 onwards); Ovid PsycINFO (1806 onwards); and Web of Science (1900 onwards) up to May 2021. We included development and external validation studies of multivariable prognostic models. We assessed risk of bias of included studies using the Prediction model risk of bias assessment tool (PROBAST).
Results
We identified 12 eligible prognostic model studies (11 unique prognostic models): 8 model development-only studies, 3 model development and external validation studies and 1 external validation-only study. Multiple estimates of performance measures were not available and meta-analysis was therefore not necessary. Eleven out of the 12 included studies were assessed as being at high overall risk of bias and none examined clinical utility.
Conclusions
Due to high risk of bias of the included studies, poor predictive performance and limited external validation of the models identified, presently available clinical prediction models for relapse and recurrence of depression are not yet sufficiently developed for deploying in clinical settings. There is a need for improved prognosis research in this clinical area and future studies should conform to best practice methodological and reporting guidelines.
Psychosis is a major mental illness with first onset in young adults. The prognosis is poor in around half of the people affected, and difficult to predict. The few tools available to predict prognosis have major weaknesses which limit their use in clinical practice. We aimed to develop and validate a risk prediction model of symptom non-remission in first-episode psychosis.
Method
Our development cohort consisted of 1027 patients with first-episode psychosis recruited between 2005 and 2010 from 14 early intervention services across the National Health Service in England. Our validation cohort consisted of 399 patients with first-episode psychosis recruited between 2006 and 2009 from a further 11 English early intervention services. The one-year non-remission rate was 52% and 54% in the development and validation cohorts, respectively. Multivariable logistic regression was used to develop a risk prediction model for non-remission, which was externally validated.
Result
The prediction model showed good discrimination (C-statistic 0.74, 95% CI 0.72–0.76) and adequate calibration, with intercept alpha of 0.13 (0.03, 0.23) and slope beta of 0.99 (0.87, 1.12). Our model improved the net benefit by 16% at a risk threshold of 50%, equivalent to 16 more detected non-remitted first-episode psychosis individuals per 100 without incorrectly classifying remitted cases.
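Net benefit here refers to decision-curve analysis, where NB = TP/n − FP/n × pt/(1 − pt) at risk threshold pt; a minimal sketch with illustrative counts (at a 50% threshold the false-positive penalty weight is exactly 1, so a net benefit of 0.16 is the equivalent of 16 extra true positives per 100 with no extra false positives):

```python
def net_benefit(tp: int, fp: int, n: int, threshold: float) -> float:
    """Decision-curve net benefit at risk threshold pt:
    NB = TP/n - FP/n * (pt / (1 - pt))."""
    return tp / n - fp / n * (threshold / (1 - threshold))

# Illustrative counts only, not data from the study
print(round(net_benefit(tp=30, fp=14, n=100, threshold=0.5), 2))  # 0.16
```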
Conclusion
Once prospectively validated, our first episode psychosis prediction model could help identify patients at increased risk of non-remission at initial clinical contact.
To determine whether age, gender and marital status are associated with prognosis for adults with depression who sought treatment in primary care.
Methods
Medline, Embase, PsycINFO and Cochrane Central were searched from inception to 1st December 2020 for randomised controlled trials (RCTs) of adults seeking treatment for depression from their general practitioners, that used the Revised Clinical Interview Schedule so that there was uniformity in the measurement of clinical prognostic factors, and that reported on age, gender and marital status. Individual participant data were gathered from all nine eligible RCTs (N = 4864). Two-stage random-effects meta-analyses were conducted to ascertain the independent association between: (i) age, (ii) gender and (iii) marital status, and depressive symptoms at 3–4, 6–8, and 9–12 months post-baseline and remission at 3–4 months. Risk of bias was evaluated using QUIPS and quality was assessed using GRADE. PROSPERO registration: CRD42019129512. Pre-registered protocol https://osf.io/e5zup/.
Results
There was no evidence of an association between age and prognosis before or after adjusting for depressive ‘disorder characteristics’ that are associated with prognosis (symptom severity, durations of depression and anxiety, comorbid panic disorder, and a history of antidepressant treatment). Difference in mean depressive symptom score at 3–4 months post-baseline per 5-year increase in age = 0 (95% CI: −0.02 to 0.02). There was no evidence for a difference in prognoses for men and women at 3–4 months or 9–12 months post-baseline, but men had worse prognoses at 6–8 months (percentage difference in depressive symptoms for men compared to women: 15.08% (95% CI: 4.82 to 26.35)). However, this was largely driven by a single study that contributed data at 6–8 months and not the other time points. Further, there was little evidence for an association after adjusting for depressive ‘disorder characteristics’ and employment status (12.23% (−1.69 to 28.12)). Participants who were either single (percentage difference in depressive symptoms for single participants: 9.25% (95% CI: 2.78 to 16.13)) or no longer married (8.02% (95% CI: 1.31 to 15.18)) had worse prognoses than those who were married, even after adjusting for depressive ‘disorder characteristics’ and all available confounders.
Conclusion
Clinicians and researchers will continue to routinely record age and gender, but despite their importance for incidence and prevalence of depression, they appear to offer little information regarding prognosis. Patients who are single or no longer married may be expected to have slightly worse prognoses than those who are married. Ensuring this is recorded routinely alongside depressive ‘disorder characteristics’ in clinic may be important.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
Frailty prevalence is higher in low- and middle-income countries (LMICs) compared with high-income countries when measured by biomedical frailty models, the most widely used being the frailty phenotype. Frailty in older people is becoming of global public health interest as a means of promoting health in old age in LMICs. As yet, little work has been done to establish to what extent the concept of frailty, as conceived according to ‘western’ biomedicine, has cross-cultural resonance for a low-income rural African setting. This study aimed to investigate the meaning of frailty contextually, using the biomedical concept of the frailty phenotype as a framework. Qualitative interviews were conducted with a purposive sample of older adults, their care-givers and community representatives in rural northern Tanzania. Thirty interviews were transcribed, translated from Kiswahili to English and thematically analysed. Results reveal that despite superficial similarities in the understanding of frailty, to a great extent the physical changes highlighted by the frailty phenotype were naturalised, except when these were felt to be due to a scarcity of resources. Frailty was conceptualised as less of a physical problem of the individual, but rather, as a social problem of the community, suggesting that the frailty construct may be usefully applied cross-culturally when taking a social equity focus to the health of older people in LMICs.