A subset of Australia’s workforce comprises shift workers undertaking critical work for the community, who are at greater risk of obesity and related conditions such as type 2 diabetes and cardiovascular disease(1,2,3). The lifestyle and circadian disruption experienced by night shift workers is not addressed in existing dietary guidance for obesity management. The Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) study is a world-first randomised controlled trial that compares three 24-week weight-loss interventions for night shift workers: continuous energy restriction (CER) and two twice-per-week intermittent fasting (IF) interventions (fasting during a night shift or during the day). This qualitative study aimed to explore participants’ experiences of following the dietary interventions, to understand how intervention features and associated behaviour change mechanisms influence engagement. Semi-structured interviews were conducted at baseline and 24 weeks, and audio diaries were collected every two weeks, from participants selected using a maximum variation sampling approach; data were analysed using the five steps of framework analysis(4). Coded text for each intervention enabler was mapped to the following behaviour change frameworks: the COM-B model, the Theoretical Domains Framework (TDF), and the Behaviour Change Taxonomy (BCT). Of the 250 participants randomised to the SWIFt study, 47 interviews were conducted with n = 33 participants, and n = 18 participants completed audio diaries. Three major themes were identified relating to intervention factors influencing engagement: 1) simplicity and ease are important for night shift workers; 2) support and accountability are needed to change behaviour and to tackle fluctuating motivation; and 3) an individualised approach is sometimes needed.
Ten enabler sub-themes were identified: ease and acceptability of provided foods; a structured and straightforward approach; a flexible approach; easier with time; simplicity and small changes; dietetic support; accountability; self-monitoring; increased nutrition knowledge; and a focus on regular eating. The enabler sub-themes related predominantly to the ‘motivation’ and ‘capability’ domains of the COM-B model, with one sub-theme relating to the ‘opportunity’ domain. Mapping to the ‘capability’ COM-B domain was more frequent for the CER intervention than for the IF interventions. For the Theoretical Domains Framework (TDF), the most frequently reported domains were ‘behavioural regulation’, ‘knowledge’, ‘goals’ and ‘environmental context and resources’. For the Behaviour Change Taxonomy (BCT), the most frequently reported techniques were ‘instruction on how to perform a behaviour’, ‘goal setting (behaviour)’, ‘self-monitoring of outcome(s) of behaviour’, and ‘adding objects to the environment’. These findings detail the behaviour change mechanisms perceived to positively influence night shift workers’ engagement with the SWIFt weight-loss interventions, and will help inform the translation of the interventions into non-research settings.
Approximately 15% of Australia’s workforce are shift workers, who are at greater risk of obesity and related conditions such as type 2 diabetes and cardiovascular disease.(1,2,3) While current guidelines for obesity management prioritise diet-induced weight loss as a treatment option, weight-loss studies involving night shift workers are limited, and the factors associated with engagement in weight-loss interventions have not been explored. The Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) study was a randomised controlled trial that compared three 24-week weight-loss interventions: continuous energy restriction (CER), and 500-calorie intermittent fasting (IF) two days per week, undertaken either during the day (IF:2D) or during the night shift (IF:2N). The current study used a convergent mixed-methods experimental design to: 1) explore the relationship between participant characteristics, dietary intervention group and time to dropout from the SWIFt study (quantitative); and 2) understand why some participants were more likely to drop out of the intervention (qualitative). Participant characteristics included age, gender, ethnicity, occupation, shift schedule, number of night shifts per four weeks, number of years in shift work, weight at baseline, weight change at four weeks, and quality of life at baseline. A Cox regression model specified time to dropout from the intervention as the dependent variable, with purposive selection used to determine predictors for the model. Semi-structured interviews were conducted at baseline and 24 weeks, and audio diaries were collected every two weeks, from participants selected using a maximum variation sampling approach; data were analysed using the five steps of framework analysis.(4) A total of 250 participants were randomised to the study between October 2019 and February 2022. Two participants were excluded from analysis due to retrospective ineligibility.
Twenty-nine percent (n = 71) of participants dropped out of the study over the 24-week intervention. Greater weight at baseline, fewer years working in shift work, lower weight change at four weeks, and being a woman (compared with being a man) were associated with a significantly increased rate of dropout from the study (p < 0.05). Forty-seven interviews were conducted with 33 participants, and 18 participants completed audio diaries. Lack of time, fatigue and emotional eating were barriers more frequently reported by women. Participants with a higher weight at baseline more frequently reported fatigue and emotional eating as barriers, and limited guidance on non-fasting days as a barrier for the IF interventions. This study provides important considerations for refining shift-worker weight-loss interventions for future implementation, in order to increase engagement and mitigate the adverse health risks experienced by this essential workforce.
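The dropout analysis above used a Cox regression on time to dropout. As a simpler, self-contained illustration of the same time-to-event framing, the sketch below estimates a retention (survival) curve with a pure-Python Kaplan–Meier estimator; the weekly dropout times are hypothetical, not data from the study.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the probability of remaining in a study.

    times:  observed week of dropout, or of study completion (censoring)
    events: 1 if the participant dropped out at that week, 0 if censored
    Returns a list of (week, retention probability) at each dropout week.
    """
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    retention = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        week = pairs[i][0]
        dropouts = censored = 0
        # Gather all participants leaving the risk set at this week.
        while i < len(pairs) and pairs[i][0] == week:
            if pairs[i][1] == 1:
                dropouts += 1
            else:
                censored += 1
            i += 1
        if dropouts:
            # Multiply by the fraction of the risk set that stayed.
            retention *= 1 - dropouts / at_risk
            curve.append((week, retention))
        at_risk -= dropouts + censored
    return curve

# Hypothetical example: four participants, one censored at week 8.
curve = kaplan_meier([4, 8, 8, 12], [1, 1, 0, 1])
# Retention drops to 0.75 after the week-4 dropout, then to ~0.5 after week 8.
```

A Cox model additionally relates the dropout hazard to covariates (e.g. baseline weight, years in shift work), which this unadjusted sketch does not attempt.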
It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise psychological intervention benefits in adults with subthreshold depression up to 2 years, and explore participant-level effect modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
Results
IPD from 10 671 individuals (50 studies) were included. We found significant effects on depressive symptom severity up to 12 months (standardised mean difference [s.m.d.] = −0.48 to −0.27). Effects could not be ascertained up to 24 months (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
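The standardised mean differences reported above follow the usual pooled-SD construction. A minimal sketch, using hypothetical arm summaries rather than values from the meta-analysis:

```python
from math import sqrt

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d) using the pooled SD.

    Negative values favour the intervention when lower scores
    (e.g. on the PHQ-9) mean fewer depressive symptoms.
    """
    pooled_sd = sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                     / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical summaries: intervention mean PHQ-9 of 8 vs control 10, SD 5.
print(smd(8, 5, 100, 10, 5, 100))  # -0.4
```

The meta-analysis itself pooled such study-level contrasts in a Bayesian one-stage model with treatment–covariate interactions, which is beyond this arithmetic sketch.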
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
In vitro evidence of antidepressant-driven antibiotic resistance has recently been described. In this retrospective cohort study, significant associations are identified between antidepressant use and antibiotic resistance in urine cultures taken in the Emergency Department. These epidemiologic data support previous in vitro work and raise additional questions for further study.
Developing integrated mental health services focused on the needs of children and young people is a key policy goal in England. The THRIVE Framework and its implementation programme, i-THRIVE, are widely used in England. This study examines experiences of staff using i-THRIVE, estimates its effectiveness, and assesses how local system working relationships influence programme success.
Methods
This evaluation uses a quasi-experimental design (10 implementation and 10 comparison sites). Measurements included staff surveys and assessment of ‘THRIVE-like’ features of each site. Additional site-level characteristics were collected from health system reports. The effect of i-THRIVE was evaluated using a four-group propensity-score-weighted difference-in-differences model; the moderating effect of system working relationships was evaluated with a difference-in-difference-in-differences model.
Results
Implementation site staff were more likely to report using THRIVE and were more knowledgeable of THRIVE principles than comparison site staff. The mean improvement in fidelity scores was 16.7 among i-THRIVE sites and 8.8 among comparison sites; the weighted model did not find a statistically significant difference. However, results show that strong working relationships in the local system significantly enhance the effectiveness of i-THRIVE. Sites with highly effective working relationships showed a notable improvement in ‘THRIVE-like’ features, with an average increase of 16.41 points (95% confidence interval: 1.69 to 31.13, P-value: 0.031) over comparison sites. Sites with ineffective working relationships did not benefit from i-THRIVE (−2.76, 95% confidence interval: −18.25 to 12.73, P-value: 0.708).
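The headline contrast above is standard difference-in-differences arithmetic. A minimal sketch using the reported mean fidelity improvements — noting that the study's actual estimate came from a propensity-score-weighted model, so this unadjusted figure differs from the weighted, non-significant result:

```python
def did_estimate(treated_change, comparison_change):
    """Unadjusted difference-in-differences: the implementation sites'
    pre-post change minus the comparison sites' pre-post change."""
    return treated_change - comparison_change

# Mean fidelity improvements reported above: 16.7 (i-THRIVE) vs 8.8 (comparison).
print(round(did_estimate(16.7, 8.8), 1))  # 7.9
```

The point of the subtraction is to net out secular trends that affect both groups; the weighting step then corrects for baseline differences between the two sets of sites.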
Conclusions
The findings underscore the importance of working relationship effectiveness in the successful adoption and implementation of multi-agency health policies like i-THRIVE.
Precision or “personalized medicine” and “big data” are growing trends in the biomedical research community, reflecting an increased focus on access to larger datasets to explore disease processes at the molecular level, versus the previously common one-size-fits-all approach. This focus necessitated a local transition from independent labs and siloed projects to a single software application using a common ontology to provide access to data from multiple repositories. Use of a common system has made collaboration easier and provided access to quality biospecimens extensively annotated with clinical, molecular, and patient-associated data. The software needed to function at an enterprise level while continuing to allow investigators the autonomy and security access they desire. To identify a solution, a working group comprising representatives from independent repositories and research focus areas across departments was established and made responsible for the review and implementation of an enterprise-wide biospecimen management system. Central to this process was the creation of a unified vocabulary across all repositories, including consensus around the source of truth, standardized field definitions, and shared terminology.
The recent expansion of cross-cultural research in the social sciences has led to increased discourse on methodological issues involved when studying culturally diverse populations. However, discussions have largely overlooked the challenges of construct validity – ensuring instruments are measuring what they are intended to – in diverse cultural contexts, particularly in developmental research. We contend that cross-cultural developmental research poses distinct problems for ensuring high construct validity owing to the nuances of working with children, and that the standard approach of transporting protocols designed and validated in one population to another risks low construct validity. Drawing upon our own and others’ work, we highlight several challenges to construct validity in the field of cross-cultural developmental research, including (1) lack of cultural and contextual knowledge, (2) dissociating developmental and cultural theory and methods, (3) lack of causal frameworks, (4) superficial and short-term partnerships and collaborations, and (5) culturally inappropriate tools and tests. We provide guidelines for addressing these challenges, including (1) using ethnographic and observational approaches, (2) developing evidence-based causal frameworks, (3) conducting community-engaged and collaborative research, and (4) the application of culture-specific refinements and training. We discuss the need to balance methodological consistency with culture-specific refinements to improve construct validity in cross-cultural developmental research.
Accountable care models for Medicaid reimbursement aim to improve care quality and reduce costs by linking payments to performance. Oregon’s coordinated care organizations (CCOs) assume financial responsibility for their members and are incentivized to help clinics improve performance on specific quality metrics. This study explores how Oregon’s CCO model influences partnerships between payers and primary care clinics, focusing on strategies used to enhance screening and treatment for unhealthy alcohol use (UAU).
Methods:
In this qualitative study, we conducted semi-structured interviews with informants from 12 of 13 Oregon CCOs active in 2019 and 2020. The interviews focused on payer–provider partnerships, specifically around UAU screening and treatment, which is a long-standing CCO metric. We used thematic analysis to identify key themes and causal-loop diagramming to uncover feedback dynamics and communicate key findings. Meadows’ leverage point framework was applied to categorize findings based on their potential to drive change.
Results:
CCO strategies to support clinics included building relationships, reporting on metric progress, providing electronic health record (EHR) technical assistance, offering training, and implementing alternative payment methods. CCOs prioritized clinics with more members and those that were highly motivated. Our analysis showed that while the CCO model aligned goals between payers and clinics, it may perpetuate rural disparities by prioritizing larger, better-resourced clinics.
Conclusions:
Oregon’s CCO model fosters partnerships centered on quality metrics but may unintentionally reinforce rural disparities by incentivizing support for larger clinics. Applying the Meadows framework highlighted leverage points within these partnerships.
Preconception, pregnancy, and infancy constitute a period of the life course in which optimal nutrition and food security are crucial for the life-long health and wellbeing of women/birthing parents and infants.(1) It is estimated that nearly one in four UK households with pre-school children (0-4 years) experiences food insecurity (FI).(2) Yet we lack an evidence base exploring experiences of FI at this life course stage.(3,4) This study aimed to explore women’s experiences of food insecurity during and after pregnancy, including its influence on infant feeding decisions.
This study was ethically approved (Ref No: LRS/DP-23/24-39437) and pre-registered on OSF Registries (https://osf.io/9hn6r). Semi-structured, mixed-format individual interviews were conducted between November 2023 and February 2024. Pregnant individuals and those who had given birth ≤12 months ago, who were ≥18 years old, food insecure, residing in South London and with recourse to public funds, were recruited through purposive sampling. The topic guide was informed by the FI, pregnancy and postpartum literature and piloted (n = 2). Interviews were audio-recorded and professionally transcribed. Demographic data were summarised using SPSS. Data were analysed using inductive thematic analysis in NVivo.
Eleven food-insecure participants (2 pregnant, 9 new mothers; 2 White European, 9 Black African/Caribbean/British women) took part in the study. Six women were 0-6 months postpartum and 3 were 6-12 months postpartum. The preliminary findings are represented by three themes: 1) a dichotomy: knowing vs affording; 2) adaptive food coping strategies; and 3) infant feeding practices. Participants shared detailed accounts of valuing a healthy diet and adapting food practices, yet they were still unable to meet their dietary needs and desires during and after pregnancy. Participants described worry around breastmilk supply, both quality and quantity. Complementary feeding was also identified as a source of worry. “She is still breastfeeding fully. I don’t want to change to milk, which maybe, sometimes, I might not be able to afford it…I won’t stop until she is 1.” Meanwhile, the cost of formula feeding drove a more severe experience of FI.
Policy and practice recommendations include enhancing local breastfeeding support to address FI-specific concerns around breastmilk supply and, at the national level, advocating for greater support for adequate healthy food provision and for a price cap on infant formula. Future interventions must support maternal mental health, given the high cognitive stress associated with living with FI during and after pregnancy. Further high-quality research is needed 1) amongst asylum seekers, refugees and non-English speakers who may also experience FI, and 2) exploring cultural influences on breastfeeding and their relationship with FI.
To perform a scoping review identifying the criteria for deployment of the United States National Guard (USNG) to domestic sudden-onset natural disasters, and to establish the body of literature on which further research and policy decisions may be based.
Methods
On January 23, 2023, the authors performed a search to identify texts relevant to USNG involvement in the response to sudden-onset domestic natural disasters. English-language texts from any year were considered. Independent reviewers screened titles and abstracts, then full texts, and then extracted data from included texts.
Results
From 886 search results, 34 texts were included. Fifteen criteria for USNG deployment were identified. Lack of security, power failure, and logistical coordination were the most common criteria. Hurricanes were the most common disaster type in the included results.
Conclusions
Disaster response coordinators may use these results to develop policies optimizing the use of the USNG in disaster response.
The adipofascial anterolateral thigh (AF-ALT) free flap is a versatile technique in head and neck reconstruction, with increasingly broad applications. The objective was to detail the novel use of the AF-ALT flap in orbital and skull base reconstruction, and as a salvage laryngectomy onlay, in our case series.
Method
We conducted a retrospective analysis at Roswell Park Comprehensive Cancer Center spanning July 2019 to June 2023, focusing on patient demographics and reconstructive parameters.
Results
The AF-ALT flap was successfully employed in eight patients (average age 59, body mass index [BMI] 32.0) to repair various defects. Noteworthy outcomes were observed in skull base reconstructions, with no flap failures or major complications over an average 12-month follow-up. Donor sites typically healed well with minimal interventions.
Conclusion
Our series is the first to report the AF-ALT flap's efficacy in anterior skull base and orbital reconstructions, demonstrating an additional innovation in complex head and neck surgeries.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither the preregistered analyses with North American and UK English samples nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
This study assesses the feasibility of biomedical informatics resources for efficient recruitment of rural residents with cancer to a clinical trial of a quality-of-life (QOL) mobile app. These resources have the potential to reduce costly, time-consuming, in-person recruitment methods.
Methods:
A cohort was identified from the electronic health record data repository and cross-referenced with patients who had consented to additional research contact. Rural–urban commuting area codes were computed to identify rurality. Potential participants were emailed study details, screening questions, and an e-consent link via REDCap. Consented individuals automatically received baseline questionnaires. A sample minimum of n = 80 [n = 40 care as usual (CAU); n = 40 mobile app intervention] was needed.
Results:
N = 1298 potential participants (n = 365 CAU; n = 833 intervention) were screened for eligibility. For CAU, 68 consented, 67 completed baseline questionnaires, and 54 completed follow-up questionnaires. For the intervention, 100 consented, 97 completed baseline questionnaires, and 58 completed follow-up questionnaires. The CAU and intervention arms reached 82.5% and 122.5% of their enrollment targets, respectively, within 2 days. Recruitment and retention rates were 15.3% and 57.5%, respectively. The mean age was 59.5 ± 13.5 years. The sample was 65% women, 20% racial/ethnic minority, and 35% rural residents.
Conclusion:
These results demonstrate that biomedical informatics resources can be highly effective in recruiting for cancer QOL research. Precisely identifying individuals likely to meet inclusion criteria who previously indicated interest in research participation expedited recruitment. Participants completed the consent and baseline questionnaires with zero follow-up contacts from the research team. This low-touch, repeatable process may be highly effective for multisite clinical trials research seeking to include rural residents.
Understanding characteristics of healthcare personnel (HCP) with SARS-CoV-2 infection supports the development and prioritization of interventions to protect this important workforce. We report detailed characteristics of HCP who tested positive for SARS-CoV-2 from April 20, 2020 through December 31, 2021.
Methods:
CDC collaborated with Emerging Infections Program sites in 10 states to interview HCP with SARS-CoV-2 infection (case-HCP) about their demographics, underlying medical conditions, healthcare roles, exposures, personal protective equipment (PPE) use, and COVID-19 vaccination status. We grouped case-HCP by healthcare role. To describe residential social vulnerability, we merged geocoded HCP residential addresses with CDC/ATSDR Social Vulnerability Index (SVI) values at the census tract level. We defined highest and lowest SVI quartiles as high and low social vulnerability, respectively.
Results:
Our analysis included 7,531 case-HCP. Most case-HCP with roles as certified nursing assistant (CNA) (444, 61.3%), medical assistant (252, 65.3%), or home healthcare worker (HHW) (225, 59.5%) reported their race and ethnicity as either non-Hispanic Black or Hispanic. More than one third of HHWs (166, 45.2%), CNAs (283, 41.7%), and medical assistants (138, 37.9%) reported a residential address in the high social vulnerability category. The proportion of case-HCP who reported using recommended PPE at all times when caring for patients with COVID-19 was lowest among HHWs compared with other roles.
Conclusions:
To mitigate SARS-CoV-2 infection risk in healthcare settings, infection prevention and control interventions should be specific to HCP roles and educational backgrounds. Additional interventions are needed to address high social vulnerability among HHWs, CNAs, and medical assistants.
This article examines the development, early operation and subsequent failure of the Tot-Kolowa Red Cross irrigation scheme in Kenya’s Kerio Valley. Initially conceived as a technical solution to address regional food insecurity, the scheme aimed to scale up food production through the implementation of a fixed pipe irrigation system and the provision of agricultural inputs for cash cropping. A series of unfolding circumstances, however, necessitated numerous modifications to the original design as the project became increasingly entangled with deep and complex histories of land use patterns, resource allocation and conflict. Failure to understand the complexity of these dynamics ultimately led to the project’s collapse as the region spiralled into a period of significant unrest. In tracing these events, we aim to foreground the lived realities of imposed development, including both positive and negative responses to the scheme’s participatory obligations and its wider impact on community resilience.
The personalised oncology paradigm remains challenging to deliver despite technological advances in genomics-based identification of actionable variants, combined with the increasing focus of drug development on these specific targets. To ensure we continue to build concerted momentum to improve outcomes across all cancer types, financial, technological and operational barriers need to be addressed. For example, complete integration and certification of the ‘molecular tumour board’ into the ‘standard of care’ ensures a unified clinical decision pathway that both counteracts fragmentation and is the cornerstone of evidence-based delivery inside and outside of a research setting. Generally, integrated delivery has been restricted to specific (common) cancer types, either within major cancer centres or small regional networks. Here, we focus on solutions for real-world integration of genomics, pathology, surgery, oncological treatments, data from clinical source systems and analysis of whole-body imaging as digital data that can facilitate cost-effectiveness analysis, clinical trial recruitment, and outcome assessment. This urgent imperative for cancer also extends across early diagnosis and adjuvant treatment interventions, individualised cancer vaccines, immune cell therapies, personalised synthetic lethal therapeutics, and cancer screening and prevention. Oncology care systems worldwide require proactive step-changes in solutions, including interoperable digital working, that can solve patient-centred challenges and ensure inclusive, quality, sustainable, fair and cost-effective adoption and efficient delivery. Here we highlight the workforce, technical, clinical, regulatory and economic challenges that prevent the implementation of precision oncology at scale, and offer a systematic roadmap of integrated solutions for standard of care based on minimal essential digital tools.
These include unified decision support tools, quality control, data flows within an ethical and legal data framework, training and certification, and monitoring and feedback. Bridging the technical, operational, regulatory and economic gaps demands joint action from public and industry stakeholders across national and global boundaries.