As the use of guided digitally-delivered cognitive-behavioral therapy (GdCBT) grows, pragmatic analytic tools are needed to evaluate coaches’ implementation fidelity.
Aims
We evaluated how natural language processing (NLP) and machine learning (ML) methods might automate the monitoring of coaches’ implementation fidelity to GdCBT delivered as part of a randomized controlled trial.
Method
Coaches served as guides to 6-month GdCBT with 3,381 assigned users with or at risk for anxiety, depression, or eating disorders. CBT-trained and supervised human coders used a rubric to rate the implementation fidelity of 13,529 coach-to-user messages. NLP methods abstracted data from text-based coach-to-user messages, and 11 ML models predicting coach implementation fidelity were evaluated.
Results
Inter-rater agreement by human coders was excellent (intra-class correlation coefficient = .980–.992). Coaches achieved behavioral targets at the start of GdCBT and maintained strong fidelity throughout most subsequent messages. Coaches also avoided prohibited actions (e.g. reinforcing users’ avoidance). Sentiment analyses generally indicated a higher frequency of coach-delivered positive than negative sentiment words and predicted coach implementation fidelity with acceptable performance metrics (e.g. area under the receiver operating characteristic curve [AUC] = 74.48%). The final best-performing ML algorithms that included a more comprehensive set of NLP features performed well (e.g. AUC = 76.06%).
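To make the reported evaluation concrete, the sketch below (our illustration, not the authors’ pipeline) shows how text features extracted from coach messages might be used to predict a binary fidelity rating and scored with cross-validated AUC; the column names and toy messages are hypothetical.

# Minimal sketch (not the authors' pipeline): predicting a binary
# fidelity label from coach-message text and reporting cross-validated AUC.
# Column names ("message_text", "fidelity_met") and messages are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy, trivially separable data purely to show the plumbing.
messages = pd.DataFrame({
    "message_text": [
        "Great job completing your exposure exercise this week!",
        "It's fine to skip the practice if it feels too hard.",
    ] * 50,
    "fidelity_met": [1, 0] * 50,
})

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # simple NLP features
    LogisticRegression(max_iter=1000),
)

auc = cross_val_score(
    model, messages["message_text"], messages["fidelity_met"],
    cv=5, scoring="roc_auc",
)
print(f"Mean cross-validated AUC: {auc.mean():.2%}")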
Conclusions
NLP and ML tools could help clinical supervisors automate monitoring of coaches’ implementation fidelity to GdCBT. These tools could maximize allocation of scarce resources by reducing the personnel time needed to measure fidelity, potentially freeing up more time for high-quality clinical care.
The recommended first-line treatment for insomnia is cognitive behavioral therapy for insomnia (CBTi), but access is limited. Telehealth- or internet-delivered CBTi are alternative ways to increase access. To date, these intervention modalities have never been compared within a single study. Further, few studies have examined a) predictors of response to the different modalities, b) whether successfully treating insomnia can result in improvement of health-related biomarkers, and c) mechanisms of change in CBTi. This protocol was designed to compare the three CBTi modalities to each other and a waitlist control for adults aged 50-65 years (N = 100). Participants are randomly assigned to one of four study arms: in-person-delivered (n = 30), telehealth-delivered (n = 30) or internet-delivered (n = 30) CBTi, or a 12-week waitlist control (n = 10). Outcomes include self-reported insomnia symptom severity, polysomnography, circadian rhythms of activity and core body temperature, blood- and sweat-based biomarkers, cognitive functioning, and magnetic resonance imaging.
Co-occurring self-harm and aggression (dual harm) is particularly prevalent among forensic mental health service (FMHS) patients. There is limited understanding of why this population engages in dual harm.
Aims
This work aims to explore FMHS patients’ experiences of dual harm and how they make sense of this behaviour, with a focus on the role of emotions.
Method
Participants were identified from their participation in a previous study. Sixteen FMHS patients with a lifetime history of dual harm were recruited from two hospitals. Individuals participated in one-to-one, semi-structured interviews where they reflected on past and/or current self-harm and aggression. Interview transcripts were analysed using reflexive thematic analysis.
Results
Six themes were generated: self-harm and aggression as emotional regulation strategies, the consequences of witnessing harmful behaviours, relationships with others and the self, trapped within the criminal justice system, the convergence and divergence of self-harm and aggression, and moving forward as an FMHS patient. Themes highlighted shared risk factors of dual harm across participants, including emotional dysregulation, perceived lack of social support and witnessing harmful behaviours. Participants underlined the duality of their self-harm and aggression, primarily utilising both to regulate negative emotions. These behaviours also fulfilled distinct purposes at times (e.g. self-harm as punishment, aggression as defence). The impact of contextual factors within FMHSs, including restrictive practices and institutionalisation, was emphasised.
Conclusions
Findings provide recommendations that can help address dual harm within forensic settings, including (a) transdiagnostic, individualised approaches that consider the duality of self-harm and aggression; and (b) cultural and organisational focus on recovery-centred practice.
Müller Ice Cap sits on Umingmat Nunaat (Axel Heiberg Island), Nunavut, Canada, at ~80°N. Its high latitude and elevation suggest it experiences relatively little melt and preserves an undisturbed paleoclimate record. Here, we present a suite of field measurements, complemented by remote sensing, that constrain the ice thickness, accumulation rate, temperature, ice-flow velocity, and surface-elevation change of Müller Ice Cap. These measurements show that some areas near the top of the ice cap are more than 600 m thick, have nearly stable surface elevation, and flow slowly, making them good candidates for an ice core. The current mean annual surface temperature is −19.6 °C, which, combined with modeling of the temperature profile, indicates that the ice is frozen to the bed. Modeling of the depth-age scale indicates that Pleistocene ice is likely to exist with measurable resolution (300–1000 yr m⁻¹) 20–90 m from the bed, assuming that Müller Ice Cap survived the Holocene Climatic Optimum with substantial ice thickness (~400 m or more). These conditions suggest that an undisturbed Holocene climate record could likely be recovered from Müller Ice Cap. We suggest 91.795°W, 79.874°N as the most promising drill site.
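For readers unfamiliar with resolution figures quoted in years per metre, the sketch below illustrates, under a deliberately simple one-dimensional Nye approximation that we assume here for illustration only, how ice thickness, accumulation rate and height above the bed translate into age and annual-layer resolution. The accumulation rate is a placeholder, and the output is not intended to reproduce the authors' depth-age modeling.

# Illustrative sketch only (not the authors' model): a simple Nye
# depth-age approximation in which annual-layer thickness thins linearly
# toward the bed. Parameter values are placeholder assumptions.
import numpy as np

H = 600.0   # ice thickness (m), order of magnitude from the abstract
a = 0.05    # accumulation rate (m ice eq. per yr) -- placeholder value

def nye_age(z):
    """Age (yr) at height z above the bed under the Nye approximation."""
    return (H / a) * np.log(H / z)

def resolution(z):
    """Years of record per metre of ice at height z above the bed."""
    return H / (a * z)   # inverse of annual-layer thickness a*z/H

for z in (90.0, 20.0):
    print(f"{z:>4.0f} m above bed: age ~ {nye_age(z):,.0f} yr, "
          f"resolution ~ {resolution(z):,.0f} yr per m")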
Globally, human rights violations experienced by persons with psychosocial, intellectual or cognitive disabilities continue to be a concern. The World Health Organization's (WHO) QualityRights initiative presents practical remedies to address these abuses. This paper presents an overview of the implementation of the initiative in Ghana.
Aims
The main objective of the QualityRights initiative in Ghana was to train a wide range of stakeholders and change their attitudes, in order to promote recovery and respect for the human rights of people with psychosocial, intellectual and cognitive disabilities.
Method
Reports of in-person and online training, minutes of meetings and correspondence among stakeholders of the QualityRights initiative in Ghana, including activities of international collaborators, were analysed to shed light on the implementation of the project in Ghana.
Results
In-person and online training on mental health was conducted. At the time of writing, 40 443 people had registered for the training, 25 416 had started the training and 20 865 people had completed the training and obtained a certificate. The team conducted 27 in-person training sessions with 910 people. The successful implementation of the project was underpinned by a committed partnership among stakeholders, strong leadership from the coordinating agency, and acceptance of the initiative and its outcomes. A few challenges, both in implementation and acceptance, are discussed.
Conclusions
The exposure of the WHO QualityRights initiative to a substantial number of key stakeholders involved in mental healthcare in Ghana is critical to reducing human rights abuses for people with psychosocial, intellectual and cognitive disabilities.
The prevalence of food allergies in New Zealand infants is unknown; however, it is thought to be similar to Australia, where the prevalence is over 10% of 1-year-olds(1). Current New Zealand recommendations for reducing the risk of food allergies are to: offer all infants major food allergens (age-appropriate texture) at the start of complementary feeding (around 6 months); ensure major allergens are given to all infants before 1 year; once a major allergen is tolerated, maintain tolerance by regularly (approximately twice a week) offering the allergen food; and continue breastfeeding while introducing complementary foods(2). To our knowledge, there is no research investigating whether parents follow these recommendations. Therefore, this study aimed to explore parental offering of major food allergens to infants during complementary feeding and parental-reported food allergies. The cross-sectional study included 625 parent-infant dyads from the multi-centred (Auckland and Dunedin) First Foods New Zealand study. Infants were 7-10 months of age and participants were recruited in 2020-2022. This secondary analysis included the use of a study questionnaire and 24-hour diet recall data. The questionnaire assessed whether the infant was currently breastfed, whether major food allergens had been offered to the infant, whether parents intended to avoid any foods during the first year of life, whether the infant had any known food allergies and, if so, how they were diagnosed. For assessing consumers of major food allergens, 24-hour diet recall data were used (2 days per infant). Questionnaire responses indicated that all major food allergens had been offered to 17% of infants aged 9-10 months. On the diet recall days, dairy (94.4%) and wheat (91.2%) were the most common major food allergens consumed. Breastfed infants (n = 414) were more likely to consume sesame than non-breastfed infants (n = 211) (48.8% vs 33.7%, p≤0.001). Overall, 12.6% of infants had a parental-reported food allergy, with egg allergy being the most common (45.6% of the parents who reported a food allergy). A symptomatic response after exposure was the most common diagnostic tool. In conclusion, only 17% of infants were offered all major food allergens by 9-10 months of age. More guidance may be required to ensure current recommendations are followed and that all major food allergens are introduced by 1 year of age. These results provide critical insight into parents’ current practices, which is essential in determining whether more targeted advice regarding allergy prevention and diagnosis is required.
Multimorbidity, the presence of two or more health conditions, has been identified as a possible risk factor for clinical dementia. It is unclear whether this is due to worsening brain health and underlying neuropathology, or other factors. In some cases, conditions may reflect the same disease process as dementia (e.g. Parkinson's disease, vascular disease); in others, conditions may reflect a prodromal stage of dementia (e.g. depression, anxiety and psychosis).
Aims
To assess whether multimorbidity in later life was associated with more severe dementia-related neuropathology at autopsy.
Method
We examined ante-mortem and autopsy data from 767 brain tissue donors from the UK, identifying physical multimorbidity in later life and specific brain-related conditions. We assessed associations between these purported risk factors and dementia-related neuropathological changes at autopsy (Alzheimer's-disease related neuropathology, Lewy body pathology, cerebrovascular disease and limbic-predominant age-related TDP-43 encephalopathy) with logistic models.
Results
Physical multimorbidity was not associated with greater dementia-related neuropathological changes. In the presence of physical multimorbidity, clinical dementia was less likely to be associated with Alzheimer's disease pathology. Conversely, conditions which may be clinical or prodromal manifestations of dementia-related neuropathology (Parkinson's disease, cerebrovascular disease, depression and other psychiatric conditions) were associated with dementia and neuropathological changes.
Conclusions
Physical multimorbidity alone is not associated with greater dementia-related neuropathological change; inappropriate inclusion of brain-related conditions in multimorbidity measures and misdiagnosis of neurodegenerative dementia may better explain increased rates of clinical dementia in multimorbidity.
Therapeutics targeting frontotemporal dementia (FTD) are entering clinical trials. There are challenges to conducting these studies, including the relative rarity of the disease. Remote assessment tools could increase access to clinical research and pave the way for decentralized clinical trials. We developed the ALLFTD Mobile App, a smartphone application that includes assessments of cognition, speech/language, and motor functioning. The objectives were to determine the feasibility and acceptability of collecting remote smartphone data in a multicenter FTD research study and evaluate the reliability and validity of the smartphone cognitive and motor measures.
Participants and Methods:
A diagnostically mixed sample of 207 participants with FTD or from familial FTD kindreds (CDR®+NACC-FTLD=0 [n=91]; CDR®+NACC-FTLD=0.5 [n=39]; CDR®+NACC-FTLD>1 [n=39]; unknown [n=38]) were asked to remotely complete a battery of tests on their smartphones three times over two weeks. Measures included five executive functioning (EF) tests, an adaptive memory test, and participant experience surveys. A subset completed smartphone tests of balance at home (n=31) and a finger tapping test (FTT) in the clinic (n=11). We analyzed adherence (percentage of available measures that were completed) and user experience. We evaluated Spearman-Brown split-half reliability (100 iterations) using the first available assessment for each participant. We assessed test-retest reliability across all available assessments by estimating intraclass correlation coefficients (ICC). To investigate construct validity, we fit regression models testing the association of the smartphone measures with gold-standard neuropsychological outcomes (UDS3-EF composite [Staffaroni et al., 2021], CVLT3-Brief Form [CVLT3-BF] Immediate Recall, mechanical FTT), measures of disease severity (CDR®+NACC-FTLD Box Score & Progressive Supranuclear Palsy Rating Scale [PSPRS]), and regional gray matter volumes (cognitive tests only).
Results:
Participants completed 70% of tasks. Most reported that the instructions were understandable (93%), considered the time commitment acceptable (97%), and were willing to complete additional assessments (98%). Split-half reliability was excellent for the executive functioning tests (r’s=0.93-0.99) and good for the memory test (r=0.78). Test-retest reliabilities ranged from acceptable to excellent for cognitive tasks (ICC: 0.70-0.96) and were excellent for the balance test (ICC=0.97) and good for FTT (ICC=0.89). Smartphone EF measures were strongly associated with the UDS3-EF composite (β’s=0.6-0.8, all p<.001), and the memory test was strongly correlated with total immediate recall on the CVLT3-BF (β=0.7, p<.001). Smartphone FTT was associated with mechanical FTT (β=0.9, p=.02), and greater acceleration on the balance test was associated with more motor features (β=0.6, p=.02). Worse performance on all cognitive tests was associated with greater disease severity (β’s=0.5-0.7, all p<.001). Poorer performance on the smartphone EF tasks was associated with smaller frontoparietal/subcortical volume (β’s=0.4-0.6, all p<.015) and worse memory scores with smaller hippocampal volume (β=0.5, p<.001).
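As an illustration of the reliability approach described in the Method (not the study's code), the sketch below computes a Spearman-Brown corrected split-half reliability averaged over random splits of a simulated item-score matrix.

# Minimal sketch (simulated data, not the study's code): Spearman-Brown
# corrected split-half reliability averaged over random item splits,
# analogous to the "100 iterations" approach described above.
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_items = 200, 40
ability = rng.normal(size=(n_participants, 1))
items = ability + rng.normal(scale=1.0, size=(n_participants, n_items))

def split_half_sb(scores, n_iter=100, rng=rng):
    n_items = scores.shape[1]
    estimates = []
    for _ in range(n_iter):
        perm = rng.permutation(n_items)
        half_a = scores[:, perm[: n_items // 2]].sum(axis=1)
        half_b = scores[:, perm[n_items // 2:]].sum(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        estimates.append(2 * r / (1 + r))   # Spearman-Brown correction
    return float(np.mean(estimates))

print(f"Split-half reliability (Spearman-Brown): {split_half_sb(items):.2f}")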
Conclusions:
These results suggest remote digital data collection of cognitive and motor functioning in FTD research is feasible and acceptable. These findings also support the reliability and validity of unsupervised ALLFTD Mobile App cognitive tests and provide preliminary support for the motor measures, although further study in larger samples is required.
Insomnia affects 30-45% of the world population, is related to mortality (e.g., auto accidents and job-related accidents), and is related to mood and affect disorders such as anxiety and depression. A better understanding of insomnia through increased research will decrease the burden of insomnia. The neurocognitive model of sleep proposes that conditioned somatic and cognitive hyperarousal develop in response to repeated pairings of sleep-related stimuli with insomnia-related wakefulness. The purpose of this study was to examine the neurocognitive model of sleep using a novel laboratory paradigm, the Sleep Approach Avoidance Task (SAAT). It was hypothesized that individuals who report symptoms of insomnia would display a bias for negative sleep-related images on the SAAT, presumably a reflection of cognitive, behavioral and physiological processes associated with hyperarousal. It was also hypothesized that participants who reported poor sleep would provide different subjective ratings for negative images (i.e., stronger valence and arousal) than individuals who reported better sleep.
Participants and Methods:
An initial sample of 66 healthy college-aged participants completed the Insomnia Severity Index (ISI), the Pittsburgh Sleep Quality Index (PSQI), the Dysfunctional Beliefs and Attitudes about Sleep (DBAS) scale and the Epworth Sleepiness Scale (ESS). Participants also completed the SAAT. The SAAT was developed to assess sleep-related bias in adults. The SAAT is a visual, joystick-controlled reaction-time task that measures implicit bias for positive and negative sleep-related images. At the end of the task, participants are also asked to rate each image along three dimensions: valence, arousal and dominance.
Results:
There was a positive correlation between the SAAT and the ISI [r(61) = .30, p = .01], indicating that symptoms of insomnia are related to negative approach-related bias for sleep-related images. No other correlations were observed between the SAAT and self-report sleep measures. With regard to rating of images, higher dominance ratings for negative images were correlated with the SAAT [r(62) = .24, p = .03], which indicates that the approach bias for negative images is associated with “being in control.” Multiple linear regression was used to test if ISI scores and dominance ratings for negative images significantly predicted SAAT bias scores. The overall regression was statistically significant [R2 = .13, F(2, 58) = 4.15, p = .02]. ISI scores significantly predicted SAAT scores (β = .27, p = .04), whereas dominance ratings for negative images did not significantly predict SAAT scores (β = .20, p = .11). Exploratory correlational analyses were also completed for ratings of images and other sleep self-report measures. Valence ratings for positive sleep-related images were positively correlated with the ESS [r(64) = .36, p = .01], whereas valence ratings for negative sleep-related images were negatively correlated with the ESS [r(64) = -.24, p = .03].
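The sketch below (simulated data, not the study's) mirrors the analyses reported above: a Pearson correlation between SAAT bias and ISI scores, followed by a two-predictor linear regression of SAAT bias on ISI scores and dominance ratings for negative images.

# Minimal sketch (simulated data, not the study's): Pearson correlation
# plus a two-predictor linear regression of SAAT bias on ISI scores and
# dominance ratings for negative images.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 63
isi = rng.normal(10, 4, n)             # Insomnia Severity Index scores
dominance_neg = rng.normal(5, 1.5, n)  # dominance ratings, negative images
saat_bias = 0.3 * stats.zscore(isi) + rng.normal(size=n)

r, p = stats.pearsonr(saat_bias, isi)
print(f"r({n - 2}) = {r:.2f}, p = {p:.3f}")

X = sm.add_constant(np.column_stack([isi, dominance_neg]))
fit = sm.OLS(saat_bias, X).fit()
print(fit.summary())  # overall R-squared, F-test and per-predictor betas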
Conclusions:
Hypotheses were partially supported, with the ISI being the only self-report measure associated with negative bias for sleep-related images. While ratings of dominance are associated with bias for negative sleep-related images, these ratings do not provide unique variance. These findings indicate a cognitive processing bias for sleep-related stimuli among young adult poor sleepers. Limitations and implications for assessment and intervention are discussed.
There are many common beliefs within the general public about Chronic Traumatic Encephalopathy (CTE) that contradict research findings and scientific evidence. Therefore, the goal of this study was to examine the accuracy of CTE knowledge across three diverse samples.
Participants and Methods:
The three groups included in the sample were 333 college students (54%), 196 individuals from the public (32%), and 90 psychology trainees/clinicians (15%), for a total of 619 participants. Online surveys were used to assess the sample's CTE knowledge accuracy (i.e., the number of correct responses divided by the total number of questions). The questions about CTE were adapted from Merz et al. (2017) and from the Sports Neuropsychology Society’s “CTE: A Q and A Fact Sheet.”
Results:
Overall, CTE knowledge accuracy was 52% (M = 51%, SD = .24). Regarding inaccurate beliefs, two-thirds of the sample believed that CTE was related to sports participation alone even if a head injury did not occur, and most participants believed that CTE could be caused by a single injury. Additionally, confidence in CTE knowledge was positively correlated with willingness to allow their child to play a high contact sport despite overall low CTE knowledge accuracy. Last, many participants reported education (67%) and health care providers (61%) as their main sources of CTE information while only 18% of participants cited television/movies. However, when asked to provide additional details about their CTE information source, many participants cited ESPN specials and the movie “Concussion” as the main reason they learned of the condition and sought out additional information.
Conclusions:
The results of this study are consistent with previous research on CTE knowledge accuracy. This further supports the need for clinicians and researchers to address misconceptions by providing information and scientific facts.
A range of health effects are associated with debt burdens from ubiquitous access to expensive credit. These health effects are concerning, especially for women who owe multiple types of higher-cost debt simultaneously and experience significantly higher stress associated with their debt burdens when compared to men. While debt burdens have been shown to contribute to poor mental and physical health, the potential gendered and racialized effects are poorly understood. We conducted interviews between January and April 2021 with twenty-nine racially marginalized women who reported owing debt, and used theoretical concepts of predatory inclusion and intersectionality to understand their experiences. Women held many types of debt, most commonly from student loans, medical bills, and credit cards. Women described debt as a violent, abusive, and inescapable relationship that exacted consequential tolls on their health. Despite these tolls, women found ways to resist the violence of debt, to care for themselves and others, and to experience joy in their daily lives.
We show large flows of workers into the real estate agent (REA) occupation during the early 2000s from virtually all parts of the skill, wage, and education spectrums. We find those entering REA in Metropolitan Statistical Areas (MSAs) with house price bubbles end up in occupations paying significantly less in the long run as compared to similar REA entrants in non-bubble areas. Even in 2017, when house prices and employment return to their pre-crisis levels, REA entrants in Bubble MSAs are in occupations earning about 6% less. These results point to lasting effects of labor allocation decisions in response to distorted price signals.
Adverse effects are a common concern when prescribing and reviewing medication, particularly in vulnerable adults such as older people and those with intellectual disability. This paper describes Medichec, an app giving information on side-effects, the processes involved in its development and how drugs were rated for each side-effect. Medications associated with central anticholinergic action, dizziness, drowsiness, hyponatraemia, QTc prolongation, bleeding and constipation were identified using the British National Formulary (BNF), and the frequency of occurrence of these effects was determined using the BNF, product information and electronic searches, including PubMed.
Results
Medications were rated using a traffic light system according to how commonly the adverse effect was known to occur or the severity of the effect.
Clinical implications
Medichec can facilitate access to side-effects information for multiple medications, aid clinical decision-making, optimise treatment and improve patient safety in vulnerable adults.
Fossils from the deep-sea Ediacaran biotas of Newfoundland are among the oldest architecturally complex soft-bodied macroorganisms on Earth. Most organisms in the Mistaken Point–type biotas of Avalonia—particularly the fractal-branching frondose Rangeomorpha—have traditionally been interpreted as living erect within the water column. However, due to the scarcity of documented physical sedimentological proxies associated with fossiliferous beds, Ediacaran paleocurrents have been inferred in some instances from the preferential orientation of fronds. This calls into question the relationship between frond orientation and paleocurrents. In this study, we present an integrated approach from a newly described fossiliferous surface (the “Melrose Surface” in the Fermeuse Formation at Melrose, on the southern portion of the Catalina Dome in the Discovery UNESCO Global Geopark) combining: (1) physical sedimentological evidence for paleocurrent direction in the form of climbing ripple cross-lamination and (2) a series of statistical analyses based on modified polythetic and monothetic clustering techniques reflecting the circular nature of the recorded orientation of Fractofusus misrai specimens. This study demonstrates the reclining rheotropic mode of life of the Ediacaran rangeomorph taxon Fractofusus misrai and presents preliminary inferences suggesting a similar mode of life for Bradgatia sp. and Pectinifrons abyssalis based on qualitative evidence. These results advocate for the consideration of an alternative conceptual hypothesis for the life position of Ediacaran organisms, in which they are interpreted as having lived reclined on the seafloor, in the position in which they are preserved.
Antibiotics are essential medications for treating life-threatening infections. However, incorrect prescribing can lead to adverse events and contribute to antibiotic resistance. We sought to develop a utilization quality measure that could be used by health insurance plans to track overall prescribing for respiratory conditions.
Design:
A consensus-based process that included evidence review, testing, and stakeholder input was used to develop a measure and assess its usefulness for the Healthcare Effectiveness Data and Information Set (HEDIS), a national quality measurement tool.
Methods:
Guidelines and literature were reviewed to establish the rationale for the measure. The measure was tested in claims data for commercial, Medicaid and Medicare Advantage enrollees to assess feasibility of collecting and reporting needed information. The measure was vetted with multistakeholder advisory panels and posted for public comment to solicit wide input on relevance and usability.
Results:
Respiratory conditions are frequent reasons for outpatient care in the data assessed. On average, across all lines of business, the measure revealed that approximately one-third of outpatient visits for respiratory conditions are followed by antibiotics. Stakeholders supported the measure as a tool for monitoring antibiotic prescribing across health plans alongside existing measures that assess inappropriate prescribing for specific conditions. The final measure assesses the number of antibiotic prescriptions dispensed across all outpatient respiratory-related encounters at a health-plan level.
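As a rough illustration of how such a utilization measure can be tabulated from claims-like data (hypothetical column names; this is not the HEDIS specification), the sketch below computes the share of outpatient respiratory encounters followed by a dispensed antibiotic at the health-plan level.

# Minimal sketch (hypothetical column names, not the HEDIS specification):
# share of outpatient respiratory encounters followed by a dispensed
# antibiotic, reported at the health-plan level.
import pandas as pd

encounters = pd.DataFrame({
    "plan_id":              ["A", "A", "A", "B", "B", "B"],
    "respiratory_visit":    [True, True, True, True, True, True],
    "antibiotic_dispensed": [True, False, False, True, True, False],
})

rate_by_plan = (
    encounters.loc[encounters["respiratory_visit"]]
    .groupby("plan_id")["antibiotic_dispensed"]
    .mean()
    .rename("antibiotic_rate")
)
print(rate_by_plan)  # e.g. plan A ~ 0.33, plan B ~ 0.67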
Conclusions:
The measure on antibiotic prescribing for respiratory conditions was relevant, feasible, and useful. Stakeholders strongly supported the newly developed measure and recommended its integration into HEDIS.
Post-traumatic growth (PTG) refers to beneficial psychological change following trauma.
Aims
This study explores the sociodemographic, health and deployment-related factors associated with PTG in serving/ex-serving UK armed forces personnel deployed to military operations in Iraq or Afghanistan.
Method
Multinomial logistic regression analyses were applied to retrospective questionnaire data collected 2014–2016, stratified by gender. PTG scores were split into tertiles of no/very low PTG, low PTG and moderate/large PTG.
Results
A total of 1447/4610 male personnel (30.8%) and 198/570 female personnel (34.8%) reported moderate/large PTG. Male personnel were more likely to report moderate/large PTG compared with no/very low PTG if they reported a greater belief of being in serious danger (relative risk ratio (RRR) 2.47, 95% CI 1.68–3.64), were a reservist (RRR 2.37, 95% CI 1.80–3.11), reported good/excellent general health (fair/poor general health: RRR 0.33, 95% CI 0.24–0.46), a greater number of combat experiences, less alcohol use, better mental health, were of lower rank or were younger. Female personnel were more likely to report moderate/large PTG if they were single (in a relationship: RRR 0.40, 95% CI 0.22–0.74), had left military service (RRR 2.34, 95% CI 1.31–4.17), reported better mental health (common mental disorder: RRR 0.37, 95% CI 0.17–0.84), were a reservist, reported a greater number of combat experiences or were younger. Post-traumatic stress disorder had a curvilinear relationship with PTG.
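The sketch below (simulated data, not the study's analysis) shows the form of such a multinomial logistic regression, in which exponentiated coefficients are relative risk ratios (RRRs) for low and moderate/large PTG relative to the no/very low PTG reference category.

# Minimal sketch (simulated data, not the study's analysis): a multinomial
# logistic regression whose exponentiated coefficients are relative risk
# ratios (RRRs) versus the no/very low PTG reference category.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "reservist": rng.integers(0, 2, n),
    "combat_exposures": rng.poisson(3, n),
})
# Simulated 3-level outcome: 0 = no/very low, 1 = low, 2 = moderate/large PTG
linpred = 0.8 * df["reservist"] + 0.2 * df["combat_exposures"]
probs = np.column_stack([np.ones(n), np.exp(0.3 * linpred), np.exp(0.6 * linpred)])
probs /= probs.sum(axis=1, keepdims=True)
df["ptg_tertile"] = [rng.choice(3, p=p) for p in probs]

X = sm.add_constant(df[["reservist", "combat_exposures"]])
fit = sm.MNLogit(df["ptg_tertile"], X).fit(disp=False)
rrr = np.exp(fit.params)          # RRRs vs. reference category 0
rrr_ci = np.exp(fit.conf_int())   # 95% confidence intervals on the RRR scale
print(rrr)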
Conclusions
A moderate/large degree of PTG among the UK armed forces is associated with mostly positive health experiences, except for post-traumatic stress disorder.
Healthcare workers (HCWs) have faced considerable pressures during the COVID-19 pandemic. For some, this has resulted in mental health distress and disorder. Although interventions have sought to support HCWs, few have been evaluated.
Aims
We aimed to determine the effectiveness of the ‘Foundations’ application (app) on general (non-psychotic) psychiatric morbidity.
Method
We conducted a multicentre randomised controlled trial of HCWs at 16 NHS trusts (trial registration number: EudraCT: 2021-001279-18). Participants were randomly assigned to the app or wait-list control group. Measures were assessed at baseline, after 4 and 8 weeks. The primary outcome was general psychiatric morbidity (using the General Health Questionnaire). Secondary outcomes included: well-being; presenteeism; anxiety; depression and insomnia. The primary analysis used mixed-effects multivariable regression, presented as adjusted mean differences (aMD).
Results
Between 22 March and 3 June 2021, 1002 participants were randomised (500:502), and 894 (89.2%) were followed up. The sample was predominantly women (754/894, 84.3%), with a mean age of 44.3 years (interquartile range (IQR) 34–53). Participants randomised to the app had a reduction in psychiatric morbidity symptoms (aMD = −1.39, 95% CI −2.05 to −0.74), improvement in well-being (aMD = 0.54, 95% CI 0.20 to 0.89) and reduction in insomnia (adjusted odds ratio (aOR) = 0.36, 95% CI 0.21 to 0.60). No other significant effects were found, and no adverse events were reported.
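The sketch below (simulated data, not the trial's pre-specified analysis) illustrates a mixed-effects model of this general form, in which the coefficient for the intervention arm is read as an adjusted mean difference across follow-up assessments; variable names are hypothetical.

# Minimal sketch (simulated data, not the trial's pre-specified analysis):
# a mixed-effects model with random intercepts per participant, where the
# coefficient for the app arm is interpreted as an adjusted mean difference
# (aMD) in GHQ scores across follow-up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
participants = pd.DataFrame({
    "id": np.arange(n),
    "app": rng.integers(0, 2, n),
    "baseline_ghq": rng.normal(14, 5, n),
})
# Two follow-up assessments (weeks 4 and 8) per participant, in long format.
long = participants.loc[participants.index.repeat(2)].reset_index(drop=True)
long["week"] = np.tile([4, 8], n)
long["ghq"] = (
    0.6 * long["baseline_ghq"] - 1.4 * long["app"] + rng.normal(0, 4, len(long))
)

model = smf.mixedlm("ghq ~ app + baseline_ghq + C(week)", long, groups=long["id"])
fit = model.fit()
print(fit.summary())  # the "app" coefficient is the adjusted mean difference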
Conclusions
The app reduced psychiatric morbidity symptoms in a sample of HCWs. Given that it is scalable and had no adverse effects, the app may be used as part of an organisation's tiered staff support package. Further evidence is needed on long-term effectiveness and cost-effectiveness.
Prediction of treatment outcomes is a key step in improving the treatment of major depressive disorder (MDD). The Canadian Biomarker Integration Network in Depression (CAN-BIND) aims to predict antidepressant treatment outcomes through analyses of clinical assessment, neuroimaging, and blood biomarkers.
Methods
In the CAN-BIND-1 dataset of 192 adults with MDD and outcomes of treatment with escitalopram, we applied machine learning models in a nested cross-validation framework. Across 210 analyses, we examined combinations of predictive variables from three modalities, measured at baseline and after 2 weeks of treatment, and five machine learning methods with and without feature selection. To optimize the predictors-to-observations ratio, we followed a tiered approach with 134 and 1152 variables in tier 1 and tier 2 respectively.
Results
A combination of baseline tier 1 clinical, neuroimaging, and molecular variables predicted response with a mean balanced accuracy of 0.57 (best model mean 0.62) compared to 0.54 (best model mean 0.61) in single modality models. Adding week 2 predictors improved the prediction of response to a mean balanced accuracy of 0.59 (best model mean 0.66). Adding tier 2 features did not improve prediction.
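The sketch below (simulated features, not the CAN-BIND pipeline) illustrates nested cross-validation of the kind described in the Method, with feature selection and hyperparameter tuning inside the inner loop and balanced accuracy as the score.

# Minimal sketch (simulated features, not the CAN-BIND pipeline): nested
# cross-validation with feature selection and hyperparameter tuning inside
# the inner loop, scored with balanced accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

# Simulated "tier 1"-sized problem: 192 participants, 134 candidate features.
X, y = make_classification(n_samples=192, n_features=134, n_informative=10,
                           random_state=0)

pipeline = Pipeline([
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])
param_grid = {"select__k": [10, 30, 60], "clf__C": [0.1, 1.0, 10.0]}

inner = GridSearchCV(pipeline, param_grid, cv=5, scoring="balanced_accuracy")
scores = cross_val_score(inner, X, y, cv=5, scoring="balanced_accuracy")
print(f"Mean balanced accuracy: {scores.mean():.2f}")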
Conclusions
A combination of clinical, neuroimaging, and molecular data improves the prediction of treatment outcomes over single modality measurement. The addition of measurements from the early stages of treatment adds precision. Present results are limited by lack of external validation. To achieve clinically meaningful prediction, the multimodal measurement should be scaled up to larger samples and the robustness of prediction tested in an external validation dataset.
Edited by
James Law, University of Newcastle upon Tyne; Sheena Reilly, Griffith University, Queensland; Cristina McKean, University of Newcastle upon Tyne
Language is one of the most remarkable developmental accomplishments of childhood and a tool for life. Over the course of childhood and adolescence, language and literacy develop in dynamic complementarity, shaped by children’s developmental circumstances. Children’s developmental circumstances include characteristics of the child, their parents, family, communities and schools, and the social and cultural contexts in which they grow up. This chapter uses data collected in Growing Up in Australia: The Longitudinal Study of Australian Children (LSAC) that was linked to Australia’s National Assessment Program - Literacy and Numeracy (NAPLAN) to quantify the effects of multiple risk factors on children’s language and literacy development. Latent class analysis and growth curve modelling are used to identify children’s developmental circumstances (i.e. risk profiles) and quantify the effects of different clusters of risk factors on children’s receptive vocabulary growth and reading achievement from age 4 to 15. The developmental circumstances that gave rise to stark inequalities in language and literacy comprise distinct clustering of sociodemographic, cognitive and non-cognitive risk factors. The results point to the need for cross-cutting social, health and education policies and coordinated multi-agency intervention efforts to address social determinants and break the cycle of developmental disadvantage.
Traditionally, primate cognition research has been conducted by independent teams on small populations of a few species. Such limited variation and small sample sizes pose problems that prevent us from reconstructing the evolutionary history of primate cognition. In this chapter, we discuss how large-scale collaboration, a research model successfully implemented in other fields, makes it possible to obtain the large and diverse datasets needed to conduct robust comparative analysis of primate cognitive abilities. We discuss the advantages and challenges of large-scale collaborations and argue for the need for more open science practices in the field. We describe existing collaborative projects in psychology and primatology and introduce ManyPrimates as the first successful collaboration that has established an infrastructure for large-scale, inclusive research in primate cognition. Considering examples of large-scale collaborations in both primatology and psychology, we conclude that this type of research model is feasible and has the potential to address otherwise unattainable questions in primate cognition.