Dog-assisted interventions (DAIs) to improve health-related outcomes for people with mental health or neurodevelopmental conditions are becoming increasingly popular. However, DAIs are not yet supported by a robust scientific evidence base.
Aims
To determine the effectiveness of DAIs for children and adults with mental health or neurodevelopmental conditions, assess how well randomised controlled trials (RCTs) are reported, and examine the use of terminology to classify DAIs.
Methods
A systematic search was conducted in Embase, PsycINFO, PubMed, CINAHL, Web of Science and the Cochrane Library. RCTs were grouped by commonly reported outcomes and described narratively with forest plots reporting standardised mean differences and 95% confidence intervals without a pooled estimate. The quality of reporting of RCTs and DAIs was evaluated by assessing adherence to CONSORT and the Template for Intervention Description and Replication (TIDieR) guidelines. Suitability of use of terminology was assessed by mapping terms to the intervention content described.
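As context for the forest plots described above, the standardised mean difference (SMD) is conventionally computed as the between-group difference in means divided by the pooled standard deviation; a standard formulation (Cohen's d, with Hedges' g as its small-sample correction) is:

```latex
% Standardised mean difference between treatment (T) and control (C) arms
\mathrm{SMD} = \frac{\bar{x}_T - \bar{x}_C}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}
```

The 95% confidence intervals shown in such plots are then approximately $\mathrm{SMD} \pm 1.96 \times \mathrm{SE}(\mathrm{SMD})$; this is the generic definition, not a formula stated by the review itself.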
Results
Thirty-three papers were included, reporting 29 RCTs (five assessed as high quality overall). A positive impact of DAIs was found in 57% of trials (8/14) for social skills and/or behaviour, 50% (5/10) for symptom frequency and/or severity, 43% (6/14) for depression and 33% (2/6) for agitation. Mean adherence to the CONSORT statement was 48.6%. The TIDieR checklist likewise indicated considerable variability in intervention reporting. Most DAIs were assessed as having clear alignment between terminology and intervention content, but reporting of intervention information still requires improvement.
Conclusions
DAIs may show promise for improving mental health and behavioural outcomes for those with mental health or neurodevelopmental conditions, particularly for conditions requiring social skill support. However, the quality of reporting requires improvement.
Creating a sustainable residency research program is necessary to develop a durable research pipeline, as highlighted by the 2024 Society for Academic Emergency Medicine Consensus Conference. We sought to describe the implementation of a novel, immersive research program for first-year emergency medicine residents. We describe the curriculum development, rationale, implementation process, and lessons learned from the implementation of a year-long research curriculum for first-year residents. We further evaluated resident perception of confidence in research methodology, interest in research, and the importance of their research experience through a 32-item survey. Across two cohorts, 25 first-year residents completed the program. All residents met their scholarly project requirements by the end of their first year. Two conference abstracts were accepted, one manuscript was accepted for peer-reviewed publication, and one is currently under review. Survey responses indicated an increase in residents’ perceived confidence in research methodology, although interpretation was limited by the small sample size. In summary, this novel resident research curriculum demonstrated a standardized, reproducible, and sustainable approach to providing residents with an immersive research program.
Family members of people experiencing a first-episode psychosis (FEP) can experience high levels of carer burden, stigma, emotional challenges, and uncertainty. This indicates the need for support and psychoeducation. To address these needs during the COVID-19 pandemic, we developed a multidisciplinary, blended, telehealth intervention, incorporating psychoeducation and peer support, for family members of FEP service users: PERCEPTION (PsychoEducation for Relatives of people Currently Experiencing Psychosis using Telehealth, an In-person meeting, and ONline peer support). The aim of the study was to explore the acceptability of PERCEPTION for family members of people who have experienced an FEP.
Methods:
Ten semi-structured interviews were conducted online via Zoom and audio recorded. Maximum variation sampling was used to recruit a sample balanced across age, gender, relatives’ prior mental health service use experience, and participants’ relationship with the family member experiencing psychosis. Data were analysed by hand using reflexive thematic analysis.
Results:
Four themes were produced: ‘Developing confidence in understanding and responding to psychosis’; ‘Navigating the small challenges of a broadly acceptable and desirable intervention’; ‘Timely support enriches the intervention’s meaning’; and ‘Dealing with the realities of carer burden’.
Conclusions:
Broadly speaking, PERCEPTION was experienced as acceptable; participants highlighted its convenient, safe, and supportive environment, as well as some challenges in engagement. Data point to a gap in service provision for long-term self-care support for relatives to reduce carer burden. Providing both in-person and online interventions, depending on individuals’ preferences and needs, may help remove barriers for family members accessing help.
The project aims to improve carers’ engagement for patients admitted to our male Psychiatric Intensive Care Unit (PICU) by improving communication between staff and carers and by involving carers more in patients’ care.
Hypothesis:
Among patients admitted to PICU, there is inconsistency in communication with carers and in involving carers in patients’ care. We expect an improvement in these parameters with the quality improvement project.
Background:
Within PICUs, patients with severe psychiatric illness face social isolation. Challenges arise when carers are not engaged, impeding patient support and personalised care. Involving carers becomes crucial for informed decision-making, ensuring that both patients and carers actively participate in the care process. The National Association of Psychiatric Intensive Care Units and the Royal College of Psychiatrists' guidance for PICUs sets out recommendations regarding timelines and types of interventions to be offered to carers.
Methods
Initial baseline data was collected by reviewing patient electronic notes.
We then tested interventions to improve carers’ engagement using the Plan-Do-Study-Act (PDSA) methodology over two cycles. In the first cycle, we engaged the nursing staff by presenting the baseline data and recommendations to improve carers’ engagement. In the second cycle, we introduced an admission protocol to ensure carers were engaged consistently. The parameters assessed were: documentation of carers’ details; contacting carers within 24 hours of admission; documenting carers’ views in the care plan; inviting carers to Care Programme Approach (CPA) meetings; and offering carers an appointment with staff.
Data was collected after each PDSA cycle to monitor change.
Results
At baseline, 29% of patients admitted to the PICU had their carers’ details documented, rising to 40% after the first PDSA cycle and 80% after the second. Carers were contacted within 24 hours of admission in 42% of cases at baseline, and in 66% and 30% after the two cycles, respectively. Carers’ views were included in the care plan in 50% of cases at baseline, and in 0% and 30% after the interventions. Carers were invited to the CPA meeting in 42% of cases at baseline, and in 66% and 30% after the two cycles; carers were offered an appointment with staff in 50% of cases at baseline, and in 66% and 30% after the two interventions.
Conclusion
As a result of this quality improvement project there has been an improvement in engaging carers of patients admitted to PICU. This was not sustained for the second cycle due to many regular senior staff being on leave during Christmas. The next steps will be to implement this consistently and produce a carers’ information pack.
NASA’s all-sky survey mission, the Transiting Exoplanet Survey Satellite (TESS), is specifically engineered to detect exoplanets that transit bright stars. Thus far, TESS has successfully identified approximately 400 transiting exoplanets, in addition to roughly 6,000 candidate exoplanets pending confirmation. In this study, we present the results of our ongoing project, the Validation of Transiting Exoplanets using Statistical Tools (VaTEST). Our dedicated effort is focused on the confirmation and characterisation of new exoplanets through the application of statistical validation tools. Through a combination of ground-based telescope data, high-resolution imaging, and the statistical validation tool known as TRICERATOPS, we have successfully discovered eight potential super-Earths. These planets bear the designations: TOI-238b (1.61$^{+0.09} _{-0.10}$ R$_\oplus$), TOI-771b (1.42$^{+0.11} _{-0.09}$ R$_\oplus$), TOI-871b (1.66$^{+0.11} _{-0.11}$ R$_\oplus$), TOI-1467b (1.83$^{+0.16} _{-0.15}$ R$_\oplus$), TOI-1739b (1.69$^{+0.10} _{-0.08}$ R$_\oplus$), TOI-2068b (1.82$^{+0.16} _{-0.15}$ R$_\oplus$), TOI-4559b (1.42$^{+0.13} _{-0.11}$ R$_\oplus$), and TOI-5799b (1.62$^{+0.19} _{-0.13}$ R$_\oplus$). Six of these planets fall within the region known as ‘keystone planets’, which makes them particularly interesting to study. Based on their location below the radius valley, we characterised TOI-771b and TOI-4559b as likely super-Earths, though radial velocity mass measurements will provide more detail about their characterisation. It is noteworthy that planets within the size range investigated herein are absent from our own solar system, making their study crucial for gaining insights into the evolutionary stages between Earth and Neptune.
Research on the Alternative DSM-5 Model for Personality Disorders (AMPD) in DSM-5's Section-III has demonstrated acceptable interrater reliability, a largely consistent latent structure, substantial correlations with theoretically and clinically relevant measures, and evidence for incremental concurrent and predictive validity after controlling for DSM-5's Section II categorical personality disorders (PDs). However, the AMPD is not yet widely used clinically. One clinician concern may be caseness – that the new model will diagnose a different set of PD patients from that with which they are familiar. The primary aim of this study is to determine whether this concern is valid, by testing how well the two models converge in terms of prevalence and coverage.
Method
Participants were 305 psychiatric outpatients and 302 community residents not currently in mental-health treatment who scored above threshold on the Iowa Personality Disorder Screen (Langbehn et al., 1999). Participants were administered a semi-structured interview for DSM-5 PD, which was scored for both Section II and III PDs.
Results
Convergence across the two PD models was variable for specific PDs, good when specific PDs were aggregated, and very good for ‘any PD.’
Conclusions
Results provide strong evidence that the AMPD yields the same overall prevalence of PD as the current model and, further, identifies largely the same overall population. It also addresses well-known problems of the current model, is more consistent with the ICD-11 PD model, and provides more complete, individualized characterizations of persons with PD, thereby offering multiple reasons for its implementation in clinical settings.
Alterations in cerebral blood flow (CBF) are associated with risk of cognitive decline and Alzheimer’s disease (AD). Although apolipoprotein E (APOE) ε4 and greater vascular risk burden have both been linked to reduced CBF in older adults, less is known about how APOE ε4 status and vascular risk may interact to influence CBF. We aimed to determine whether the effect of vascular risk on CBF varies by gene dose of APOE ε4 alleles (i.e., number of ε4 alleles) in older adults without dementia.
Participants and Methods:
144 older adults without dementia from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) underwent arterial spin labeling (ASL) and T1-weighted MRI, APOE genotyping, fluorodeoxyglucose positron emission tomography (FDG-PET), lumbar puncture, and blood pressure assessment. Vascular risk was assessed using pulse pressure (systolic blood pressure − diastolic blood pressure), which is thought to be a proxy for arterial stiffening. Participants were classified by number of APOE ε4 alleles (0 alleles: n = 87; 1 allele: n = 46; 2 alleles: n = 11). CBF was examined in six FreeSurfer-derived a priori regions of interest (ROIs) vulnerable to AD: entorhinal cortex, hippocampus, inferior temporal cortex, inferior parietal cortex, rostral middle frontal gyrus, and medial orbitofrontal cortex. Linear regression models tested the interaction between categorical APOE ε4 dose (0, 1, or 2 alleles) and continuous pulse pressure on CBF in each ROI, adjusting for age, sex, cognitive diagnosis (cognitively unimpaired vs. mild cognitive impairment), antihypertensive medication use, cerebral metabolism (FDG-PET composite), reference CBF region (precentral gyrus), and AD biomarker positivity defined using the ADNI-optimized phosphorylated tau/β-amyloid ratio cut-off of > 0.0251 pg/ml.
Results:
A significant pulse pressure × APOE ε4 dose interaction was found on CBF in the entorhinal cortex, hippocampus, and inferior parietal cortex (ps < .005). Among participants with two ε4 alleles, higher pulse pressure was significantly associated with lower CBF (ps < .001). However, among participants with zero or one ε4 allele, there was no significant association between pulse pressure and CBF (ps > .234). No significant pulse pressure × APOE ε4 dose interaction was found in the inferior temporal cortex, rostral middle frontal gyrus, or medial orbitofrontal cortex (ps > .109). Results remained unchanged when additionally controlling for general vascular risk assessed via the modified Hachinski Ischemic Scale.
Conclusions:
These findings demonstrate that the cross-sectional association between pulse pressure and region-specific CBF differs by APOE ε4 dose. In particular, a detrimental effect of elevated pulse pressure on CBF in AD-vulnerable regions was found only among participants with the ε4/ε4 genotype. Our findings suggest that pulse pressure may play a mechanistic role in neurovascular unit dysregulation for those genetically at greater risk for AD. Given that pulse pressure is just one of many potentially modifiable vascular risk factors for AD, future studies should seek to examine how these other factors (e.g., diabetes, high cholesterol) may interact with APOE genotype to affect cerebrovascular dysfunction.
The Functional Assessment of Cancer Therapy-Cognitive scale (FACT-Cog) is one of the most frequently used patient-reported outcome (PRO) measures of cancer-related cognitive impairment (CRCI) and of CRCI-related impact on quality of life (QOL). Previous studies using the FACT-Cog found that >75% of women with breast cancer (BCa) experience CRCI. Distress tolerance (DT) is a complex construct that encompasses both the perceived capacity (i.e., cognitive appraisal) and the behavioral act of withstanding uncomfortable/aversive/negative emotional or physical experiences. Low DT is associated with psychopathology and executive dysfunction. We previously found that women with BCa with better DT skills reported less CRCI on the FACT-Cog. However, this relationship has not been tested using a performance-based cognitive measure. Therefore, the aims of this study were to: (1) assess the relationship between the FACT-Cog and the Telephone Interview for Cognitive Status (TICS), a performance-based cognitive measure; and (2) test whether the association between DT and CRCI (using the FACT-Cog) was replicated with the TICS.
Participants and Methods:
Participants completed the Distress Tolerance Scale (DTS), the FACT-Cog, and the TICS after undergoing BCa surgery and prior to starting adjuvant therapy [101 women, age >50 years, M(SD)= 61.15(7.76), 43% White Non-Hispanic, 34.4% White Hispanic, 10.8% Black, with nonmetastatic BCa, 55.4% lumpectomy, 36.6% mastectomy; median 29 days post-surgery].
Results:
Although there was a significant correlation between the TICS total score and the FACT-Cog QOL subscale (r = 0.347, p < 0.001), the TICS total score was not correlated with scores on the FACT-Cog perceived cognitive impairment (CogPCI), perceived cognitive abilities (CogPCA), or comments from others (CogOth) subscales. However, the TICS memory item, a 10-word list immediate recall task, had a weak but statistically significant correlation with CogPCI (r = 0.237, p = 0.032), CogOth (r = 0.223, p = 0.044), and CogPCA (r = 0.233, p = 0.036). Next, the sample was divided based on participants’ scores on the TICS memory item (i.e., below vs. above the sample mean of 5.09). Independent samples t-tests demonstrated significant differences in mean scores for CogPCI, t(80) = −2.09, p = 0.04, Mdiff = −7.65, Cohen’s d = 0.483, and CogQOL, t(80) = −2.57, p = 0.01, Mdiff = −2.38, Cohen’s d = 0.593. A hierarchical linear regression found that DTS subscale and total scores did not significantly predict performance on the TICS. However, DTS scores remained a significant predictor of poorer FACT-Cog PCI scores while controlling for TICS scores.
Conclusions:
We found a weak relationship between self-reported cognitive impairment and objective cognitive performance (TICS). However, greater self-reported PCI and its impact on QOL was found in participants who scored below the sample mean on a recall task from the TICS. Although perceived ability to tolerate distress continued to predict self-reported PCI on the FACT-Cog, it did not predict overall performance on the TICS. Therefore, responses on the FACT-Cog may be more representative of an individual’s ability to tolerate distress related to perceived CRCI than actual overall cognitive ability or impairment.
In July 2021, Public Health Wales received two notifications of salmonella gastroenteritis. Both cases had attended the same barbecue, held two days earlier to celebrate Eid al-Adha. Additional cases who had attended the same barbecue were found, and an outbreak investigation was initiated. The barbecue was attended by a North African community’s social network. On the same day, smaller lunches were held in three homes in the social network. Many people attended both a lunch and the barbecue. Cases were defined as anyone with an epidemiological link to the barbecue and/or lunches who developed diarrhoea and/or vomiting with onset following these events. We undertook a cohort study of 36 people attending the barbecue and/or lunches, and a nested case-control study using Firth logistic regression. A communication campaign, sensitive to different cultural practices, was developed in collaboration with the affected community. Consumption of a traditional raw liver dish, ‘marrara’, at the barbecue was the likely vehicle of infection (Firth logistic regression, aOR: 49.99, 95% CI 1.71–1461.54, p = 0.02). Meat and offal came from two local butchers (same supplier), and samples yielded whole genome sequences identical to those of the cases. Future outbreak investigations should be made relevant to the affected community by considering dishes beyond those found in routine questionnaires.
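Firth logistic regression, used in the analysis above, is the standard remedy for small samples and sparse cells (which likely explains the very wide confidence interval reported); rather than maximising the ordinary log-likelihood, it maximises a penalised version. The general form (Firth, 1993) is:

```latex
% Firth-penalised log-likelihood: the ordinary log-likelihood l(beta)
% plus half the log-determinant of the Fisher information I(beta).
% The penalty removes first-order small-sample bias and yields finite
% estimates even under complete separation.
\ell^{*}(\beta) = \ell(\beta) + \tfrac{1}{2}\,\log\bigl|\,I(\beta)\,\bigr|
```

This is the generic definition of the method, not a formula given by the investigation itself.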
Choosing an appropriate electronic data capture system (EDC) is a critical decision for all randomized controlled trials (RCT). In this paper, we document our process for developing and implementing an EDC for a multisite RCT evaluating the efficacy and implementation of an enhanced primary care model for individuals with opioid use disorder who are returning to the community from incarceration.
Methods:
Informed by the Knowledge-to-Action conceptual framework and user-centered design principles, we used Claris Filemaker software to design and implement CRICIT, a novel EDC that could meet the varied needs of the many stakeholders involved in our study.
Results:
CRICIT was deployed in May 2021 and has been continuously iterated and adapted since. CRICIT’s features include extensive participant tracking capabilities, site-specific adaptability, integrated randomization protocols, and the ability to generate both site-specific and study-wide summary reports.
Conclusions:
CRICIT is highly customizable, adaptable, and secure. Its implementation has enhanced the quality of the study’s data, increased fidelity to a complicated research protocol, and reduced research staff’s administrative burden. CRICIT and similar systems have the potential to streamline research activities and contribute to the efficient collection and utilization of clinical research data.
This national pre-pandemic survey compared demand and capacity of adult community eating disorder services (ACEDS) with NHS England (NHSE) commissioning guidance.
Results
Thirteen services in England and Scotland responded (covering a population of 10.7 million). Between 2016–2017 and 2019–2020, mean referral rates increased by 18.8%, from 378 to 449 per million population. Only 3.7% of referrals were from child and adolescent eating disorder services (CEDS-CYP), but 46% of patients were aged 18–25 and 54% were aged over 25. Most ACEDS had waiting lists and rationed access. Many could not provide full medical monitoring, adapt treatment for comorbidities, offer assertive outreach or provide seamless transitions. Relative to patient volume, the ACEDS workforce budget was only 15% of that recommended by the NHSE workforce calculator for CEDS-CYP; parity would require an investment of £7 million per million population in the ACEDS.
Clinical implications
This study highlights the severe pressure in ACEDS, which has increased since the COVID-19 pandemic. Substantial investment is required to ensure NHS ACEDS meet national guidance, offer evidence-based treatment, reduce risk and preventable deaths, and achieve parity with CEDS-CYP.
The World Health Organization (WHO) has developed and supported numerous initiatives to build capacity and awareness about health emergency and disaster risk management (Health EDRM). These include establishing the Health EDRM Research Network (Health EDRM RN) in 2018 and publishing the Health EDRM Framework in 2019. These initiatives recognize that research is vital to generating the evidence that informs decision making, and that research integral to disaster preparedness, response and recovery will be essential to delivering the aspirations associated with caring, coping and overcoming in an increasingly challenging world.
Method:
To strengthen the capacity to conduct and use research, the WHO Guidance on Research Methods for Health EDRM was developed.
Results:
This first WHO textbook on Health EDRM research methods was published in 2021 and updated in 2022 with a chapter on Health EDRM research in the context of COVID-19. The 44 chapters offer practical advice about how to plan, conduct and report on a variety of quantitative and qualitative studies that can inform questions about policies and programs for health-related emergencies and disasters across different settings and levels of resources. Case studies of direct relevance to Health EDRM provide real-life examples of research methods and how they have influenced policies.
More than 160 authors in 30 countries contributed to the guidance, which is relevant to researchers, would-be researchers, policy makers and practitioners. It should help improve the quality of Health EDRM research; the quality of policy, practice and guidance supported by the evidence generated; and research capacity, collaboration and engagement among researchers, the research community, policy-makers, practitioners and other stakeholders.
Conclusion:
The Guidance is being supplemented by additional resources, including audio podcasts, slideshows, video presentations and webinars, and the content as a whole will be discussed in this presentation.
In order to promote useful and usable scientific evidence for health emergency and disaster risk management (Health EDRM), the World Health Organization (WHO) Health EDRM Knowledge Hub has been established as part of the WHO Thematic Platform for Health EDRM research network (Health EDRM RN). The Knowledge Hub aims to extend scientific knowledge; strengthen evidence-based practice in the management of health risks in emergencies and disasters; create and develop a competent network in the Health EDRM community; and integrate research, policy and practice.
Method:
To begin with, the Knowledge Hub has five interconnected research themes: (1) health data management; (2) psychosocial support; (3) health needs of sub-populations; (4) health workforce development; and (5) research methods. Systematic literature reviews and expert consultations have assessed current research under each theme and identified potential knowledge gaps. The work of the Knowledge Hub is advised by members of the Health EDRM RN and staff in WHO regional offices.
Results:
The WHO Health EDRM Knowledge Hub will be a platform for providing and exchanging up-to-date evidence. This will include information on validated methods for managing health data and identifying health needs in specific subpopulations. The Knowledge Hub will raise awareness of psychosocial support, health workforce development and research before, during and after disasters. It is targeted to policy-makers, researchers, practitioners and the broader community with the aim of accelerating evidence-informed policy and programs. This will support implementation of the Sendai Framework for Disaster Risk Reduction 2015–2030, the WHO Health EDRM Framework, and other related global, regional and national agendas.
Conclusion:
This paper introduces this new initiative and describes its objectives, design, and implementation. Additionally, it provides an overview of the Knowledge Hub and invites session participants to provide insights into their current needs and to make recommendations for improvement.
Children with CHD are at risk for neurodevelopmental delays, and length of hospitalisation is a predictor of poorer long-term outcomes. Multiple aspects of hospitalisation impact neurodevelopment, including sleep interruptions, limited holding, and reduced developmental stimulation. We aimed to address modifiable factors by creating and implementing an interdisciplinary inpatient neurodevelopmental care programme in our Heart Institute.
Methods:
In this quality improvement study, we developed an empirically supported approach to neurodevelopmental care across the continuum of hospitalisation for patients with CHD using three plan-do-study-act cycles. With input from multi-level stakeholders including parents/caregivers, we co-designed interventions that comprised the Cardiac Inpatient Neurodevelopmental Care Optimization (CINCO) programme. These included medical/nursing orders for developmental care practices, developmental kits for patients, bedside developmental plans, caregiver education and support, developmental care rounds, and a specialised volunteer programme. We obtained data from the electronic health record for patients aged 0–2 years admitted for at least 7 days to track implementation.
Results:
There were 619 admissions in 18 months. Utilisation of CINCO interventions increased over time, particularly for the medical/nursing orders and caregiver handouts. The volunteer programme launch was delayed but grew rapidly and within six months, provided over 500 hours of developmental interaction with patients.
Conclusions:
We created and implemented a low-cost programme that systematised and expanded upon existing neurodevelopmental care practices in the cardiac inpatient units. Feasibility was demonstrated through increasing implementation rates over time. Key takeaways include the importance of multi-level stakeholder buy-in and embedding processes in existing clinical workflows.
There are fewer Certified Organic producers in the Mid-South US (southern half of Missouri, western Kentucky and Tennessee, northern Arkansas and eastern Oklahoma) than in other regions of the country such as the Upper Midwest, West Coast, or Northeastern US. Taus et al. (2013, The Professional Geographer 65, 87–102) posit that these clusters suggest regional characteristics impact adoption of organic agriculture, while acknowledging that regional studies lack consensus on the factors that drive adoption. This paper seeks to understand whether there are regionally distinct challenges and opportunities for organic production in the region. Fourteen certified organic producers in Missouri were interviewed, and challenges and opportunities specific to their certification were identified within three a priori themes: (1) biophysical characteristics, (2) marketing infrastructure and (3) financial feasibility. We suggest directions for future policy support from the National Organic Program (NOP) and bolstered feedback structures within the National Organic Standards Board to address regional disparities.
The data safety monitoring board (DSMB), also known as the data monitoring committee, is a multidisciplinary team of scientific experts who serve an advisory role in the operation of clinical trials. The principal responsibility of a DSMB is to monitor the conduct of trials for concerns related to participant safety and data quality through the review of interim data analyses, and then to advise trial leadership on whether a study should continue, modify its procedures, or be terminated. Because emphasis is placed on the DSMB’s responsibility to safeguard the rights and well-being of study participants, it is common to see a DSMB incorporated into a study protocol when a trial explores an intervention with high participant risk, enrolls a large number of participants, or involves a particularly vulnerable population, such as patients with Alzheimer’s disease (AD). This chapter reviews common DSMB roles and responsibilities and highlights examples of DSMBs in practice in AD clinical research.
This study examined how youth aggressive and delinquent externalizing problem behaviors across childhood and adolescence are connected to consequential psychosocial life outcomes in adulthood. Using data from a longitudinal, high-risk sample (N = 1069) that assessed children and their parents regularly from early childhood (ages 3–5) through adulthood, multilevel growth factors of externalizing behaviors were used to predict adult outcomes (age 24–31), providing a sense of how externalizing problems across development were related to these outcomes via maternal, paternal, teacher, and child report. Findings indicated strong support for the lasting connections between youth externalizing problems with later educational attainment and legal difficulties, spanning informants and enduring beyond other meaningful contributors (i.e., child sex, cognitive ability, parental income and education, parental mental health and relationship quality). Some support was also found, although less consistently, linking externalizing problems and later alcohol use as well as romantic relationship quality. Delinquent/rule-breaking behaviors were often stronger predictors of later outcomes than aggressive behaviors. Taken together, these results indicate the importance of the role youth externalizing behaviors have in adult psychosocial functioning one to two decades later.