Transcranial direct current stimulation (tDCS), a noninvasive brain stimulation technique, has shown some promise as a novel treatment approach for a range of mental health disorders, including OCD. This study provides a systematic review of the literature involving randomized controlled trials of tDCS for OCD and evaluates the quality of reporting using the CONSORT (Consolidated Standards of Reporting Trials) statement. This study also examined the outcomes of tDCS as a therapeutic tool for OCD.
Methods:
This systematic review was prospectively registered with PROSPERO (CRD42023426005) and the data collected in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. The quality of reporting of included studies was evaluated in accordance with the CONSORT statement.
Results:
Eleven randomized controlled trials were identified. Evaluation of the reviewed studies revealed low levels of overall compliance with the CONSORT statement, highlighting the need for improved reporting. Key areas of insufficient information included the intervention (for replicability), participant flow, recruitment, and treatment effect sizes. Study discussions did not fully consider limitations and generalizability, and the discussion/interpretation of the findings was often incongruent with the results and therefore misleading. Only two studies reported a significant difference between sham and active tDCS for OCD outcomes, with small effect sizes noted.
Conclusions:
The variability in protocols and lack of consistency in procedures, combined with limited significant findings, make it difficult to draw any meaningful conclusions about the effectiveness of tDCS for OCD. Future studies need to be appropriately powered, empirically driven, randomized sham-controlled clinical trials.
According to International Union for the Conservation of Nature (IUCN) guidelines, all species must be assessed against all criteria during the Red Listing process. For organismal groups that are diverse and understudied, assessors face considerable challenges in assembling evidence due to difficulty in applying definitions of key terms used in the guidelines. Challenges also arise because of uncertainty in population sizes (Criteria A, C, D) and distributions (Criteria A2/3/4c, B). Lichens, which are often small, difficult to identify, or overlooked during biodiversity inventories, are one such group for which specific difficulties arise in applying Red List criteria. Here, we offer approaches and examples that address challenges in completing Red List assessments for lichens in a rapidly changing arena of data availability and analysis strategies. While assessors still contend with far from perfect information about individual species, we propose practical solutions for completing robust assessments given the currently available knowledge of individual lichen life-histories.
OBJECTIVES/GOALS: Research emphasizes the importance of play in early childhood to support social, emotional, and physical development. This study explores how the Prescription for Play (P4P) is executed in clinical contexts by analyzing implementation fidelity and contextualizing the adaptations, challenges, and facilitators to the program’s functionality. METHODS/STUDY POPULATION: This project is an ongoing multi-site case study. At the time of study completion in December of 2023, there will be over 40 clinical observations of pediatric well-child check (WCC) visits completed across 7 Federally Qualified Health Centers (FQHC) participating in P4P, a play promotion program wherein providers discuss the importance of play in WCC visits and provide a free play kit. All visits are with children 18-36 months old, with a broad demographic spread across sites. Observations are recorded through a guided observation protocol informed by a standard implementation fidelity framework, conducted by 5 researchers. Through inductive thematic analysis, this study will analyze observations of WCC visits to understand the ways providers engage with P4P across sociocultural contexts within FQHCs. RESULTS/ANTICIPATED RESULTS: Preliminary analysis of clinic observations (N = 30) indicates the degree of implementation fidelity varies across sites, with particular variances between WCC visits conducted in English versus non-English languages (NEL). In NEL visits, there were discrepancies among indicators of quality of delivery and participant responsiveness. NEL visits were less likely to have the provider model play with the caregiver and far less likely to open the play kit given to the family. Providers in NEL visits were also less likely to discuss certain benefits of play like brain development and reduced screen time. Across all observations, providers “prescribed play” approximately half the time.
As more observations are conducted, researchers anticipate seeing continued differences between English and NEL visits. DISCUSSION/SIGNIFICANCE: From preliminary analysis, discrepancies in implementation fidelity indicate the P4P intervention may require adaptation and additional training related to how to prescribe and discuss play in WCC visits conducted in NEL. Additionally, this study elucidates the impact language can have on the fidelity of clinical interventions.
Background: Historically, diagnosis of urinary tract infections (UTIs) has been divided into 3 categories based on symptoms and urine culture results: not UTI, asymptomatic bacteriuria (ASB), or UTI. However, some populations (eg, older adults, catheterized patients) may not present with signs or symptoms referrable to the urinary tract or have chronic lower urinary tract symptoms (LUTS), making the diagnosis of UTI challenging. We sought to understand the clinical presentation of patients who receive urine tests in a cohort of diverse hospitals. Methods: This retrospective descriptive cohort study included all adult noncatheterized inpatient and ED encounters with paired urinalysis and urine cultures (24 hours apart) from 5 community and academic hospitals in 3 states (NC, VA, GA) between January 1, 2017, and December 31, 2019. Trained abstractors collected clinical and demographic data using a 60-question REDCap survey. The study group met with multidisciplinary experts (ID, geriatrics, urology) to define the “continuum of UTI” (Table 1), which includes 2 new categories: (1) LUTS to capture patients with chronic lower urinary tract symptoms and (2) bacteriuria of unclear significance (BUS) to capture patients who do not clinically meet criteria for ASB or UTI (eg, older adults who present with delirium and bacteriuria). The newly defined categories were compared to current guideline-based categories. We further compared ASB, BUS, and UTI categories using a lower bacterial threshold of 1,000 colony-forming units. Results: In total, 220,531 encounters met study criteria. After using a random number generator and removing duplicates, 3,392 encounters were included. Based on current IDSA guidelines, the prevalence of ASB was 32.1% (n = 975), and prevalence of patients with “not UTI” was 1,614 (53%). 
Applying the expert panel’s new “continuum of UTI” definitions, the prevalence of “not UTI” patients decreased to 1,147 (37.7%), due to reassignment of 467 patients (15.3%) to LUTS. The prevalence of ASB decreased by 24% due to reassignment to BUS. Lowering the bacterial threshold had a slight impact on the number of definitive UTIs (14.9 vs 15.9%) (Table 1). Conclusions: Our rigorous review of laboratory and symptom data from a diverse population dataset revealed that diagnostic uncertainty exists when assessing patients with suspicion for UTI. We propose moving away from a dichotomous approach of ASB versus UTI and using the “continuum of UTI” for stewardship conversations. This approach will allow us to develop nuanced deprescribing interventions for patients with LUTS or BUS (eg, watchful waiting, shorter course therapy) that account for the unique characteristics of these populations.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. 
Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
International studies have demonstrated associations between sleep problems and poor psychological well-being; however, Canadian data are limited. This study investigated this association using cross-sectional baseline data from the Canadian Longitudinal Study on Aging, a national survey of 30,097 community-dwelling adults, 45–85 years of age. Short sleep duration, sleep dissatisfaction, insomnia symptoms, and daytime impairment were consistently associated with a higher prevalence of dissatisfaction with life, psychological distress, and poor self-reported mental health. Long sleep duration was associated with a higher prevalence of psychological distress and poor self-reported mental health, but not with dissatisfaction with life. Associations between sleep problems and psychological distress were 11–18 per cent stronger in males. With each 10-year increase in age, the association between daytime impairment and life dissatisfaction increased by 11 per cent, while the association between insomnia symptoms and poor mental health decreased by 11 per cent. Sleep problems in middle-aged and older adults warrant increased attention as a public health problem in Canada.
Many male prisoners have significant mental health problems, including anxiety and depression. High proportions struggle with homelessness and substance misuse.
Aims
This study aims to evaluate whether the Engager intervention improves mental health outcomes following release.
Method
The design is a parallel randomised superiority trial that was conducted in the North West and South West of England (ISRCTN11707331). Men serving a prison sentence of 2 years or less were individually allocated 1:1 to either the intervention (Engager plus usual care) or usual care alone. Engager included psychological and practical support in prison, on release and for 3–5 months in the community. The primary outcome was the Clinical Outcomes in Routine Evaluation Outcome Measure (CORE-OM), 6 months after release. Primary analysis compared groups based on intention-to-treat (ITT).
Results
In total, 280 men were randomised out of the 396 who were potentially eligible and agreed to participate; 105 did not meet the mental health inclusion criteria. There was no mean difference in the ITT complete case analysis between groups (92 in each arm) for change in the CORE-OM score (1.1, 95% CI –1.1 to 3.2, P = 0.325) or secondary analyses. There were no consistent clinically significant between-group differences for secondary outcomes. Full delivery was not achieved, with 77% (108/140) receiving community-based contact.
Conclusions
Engager is the first trial of a collaborative care intervention adapted for prison leavers. The intervention was not shown to be effective using standard outcome measures. Further testing of different support strategies for prison leavers with mental health problems is needed.
Depression is a common and serious mental illness that begins early in life. An association between cardiovascular disease (CVD) and subsequent depression is clear in adults. We examined associations between individual CVD risk factors and depression in young people.
Methods
We searched MEDLINE, EMBASE, and PsycINFO databases from inception to 1 January 2020. We extracted data from cohort studies assessing the longitudinal association between CVD risk factors [body mass index (BMI), smoking, systolic blood pressure (SBP), total cholesterol, high-density lipoprotein] and depression, measured using a validated tool in individuals with mean age of 24 years or younger. Random effect meta-analysis was used to combine effect estimates from individual studies, including odds ratio (OR) for depression and standardised mean difference for depressive symptoms.
Results
Based on meta-analysis of seven studies, comprising 15 753 participants, high BMI was associated with subsequent depression [pooled OR 1.61; 95% confidence interval (CI) 1.21–2.14; I2 = 31%]. Based on meta-analysis of eight studies, comprising 30 539 participants, smoking was associated with subsequent depression (pooled OR 1.73; 95% CI 1.36–2.20; I2 = 74%). Low, but not high, SBP was associated with an increased risk of depression (pooled OR 3.32; 95% CI 1.68–6.55; I2 = 0%), although this was based on a small pooled high-risk sample of 893 participants. Generalisability may be limited as most studies were based in North America or Europe.
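The pooled odds ratios above come from a random-effects meta-analysis on the log-OR scale. As an illustration only (the numbers below are hypothetical, not this review's data), a DerSimonian–Laird pooling sketch looks like:

```python
import math

def pool_random_effects(odds_ratios, ses):
    """DerSimonian-Laird random-effects pooling of log odds ratios.

    odds_ratios: per-study ORs; ses: standard errors of the log-ORs.
    Returns (pooled OR, 95% CI lower, 95% CI upper, I^2 heterogeneity %).
    """
    y = [math.log(o) for o in odds_ratios]
    w = [1 / se ** 2 for se in ses]  # inverse-variance (fixed-effect) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_re = [1 / (se ** 2 + tau2) for se in ses]  # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(y_re),
            math.exp(y_re - 1.96 * se_re),
            math.exp(y_re + 1.96 * se_re),
            i2)
```

An I² near 0% (as for the SBP analysis above) indicates little between-study heterogeneity, in which case the random-effects estimate coincides with the fixed-effect one.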
Conclusions
Targeting childhood/adolescent smoking and obesity may be important for the prevention of both CVD and depression across the lifespan. Further research on other CVD risk factors including blood pressure and cholesterol in young people is required.
To assess the relationship between food insecurity, sleep quality, and days with mental and physical health issues among college students.
Design:
An online survey was administered. Food insecurity was assessed using the ten-item Adult Food Security Survey Module. Sleep was measured using the nineteen-item Pittsburgh Sleep Quality Index (PSQI). Mental health and physical health were measured using three items from the Healthy Days Core Module. Multivariate logistic regression was conducted to assess the relationship between food insecurity, sleep quality, and days with poor mental and physical health.
Setting:
Twenty-two higher education institutions.
Participants:
College students (n 17 686) enrolled at one of twenty-two participating universities.
Results:
Compared with food-secure students, those classified as food insecure (43·4 %) had higher PSQI scores indicating poorer sleep quality (P < 0·0001) and reported more days with poor mental (P < 0·0001) and physical (P < 0·0001) health as well as days when mental and physical health prevented them from completing daily activities (P < 0·0001). Food-insecure students had higher adjusted odds of having poor sleep quality (adjusted OR (AOR): 1·13; 95 % CI 1·12, 1·14), days with poor physical health (AOR: 1·01; 95 % CI 1·01, 1·02), days with poor mental health (AOR: 1·03; 95 % CI 1·02, 1·03) and days when poor mental or physical health prevented them from completing daily activities (AOR: 1·03; 95 % CI 1·02, 1·04).
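Adjusted odds ratios like those above are obtained by exponentiating logistic-regression coefficients. A minimal sketch of that conversion (hypothetical coefficient and standard error, not values from this study):

```python
import math

def odds_ratio_from_coef(beta, se, z=1.96):
    """Convert a logistic-regression coefficient to an odds ratio
    with a 95% confidence interval:
        OR = exp(beta), CI = exp(beta - z*SE) .. exp(beta + z*SE)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)
```

A coefficient of 0 maps to an OR of 1 (no association); a CI that excludes 1 corresponds to statistical significance at the chosen level.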
Conclusions:
College students report high food insecurity which is associated with poor mental and physical health, and sleep quality. Multi-level policy changes and campus wellness programmes are needed to prevent food insecurity and improve student health-related outcomes.
We describe the frequency of pediatric healthcare-associated infections (HAIs) identified through prospective surveillance in community hospitals participating in an infection control network. Over a 6-year period, 84 HAIs were identified. Of these, 51 (61%) were pediatric central-line–associated bloodstream infections, and they often occurred in children <1 year of age.
The coronavirus disease 2019 (COVID-19) pandemic has resulted in the acceleration of telehealth and remote environments as stakeholders and healthcare systems respond to the threat of this disease. How can infectious diseases and healthcare epidemiology expertise be adapted to support safe care for all?
Background: Central-line–associated bloodstream infections (CLABSIs) are a significant contributor to morbidity and mortality for neonates; they also increase healthcare costs and duration of hospitalization. This population is susceptible to infections because of their undeveloped immune systems, and they require intravenous access until they can tolerate enteral feedings, which for extremely premature infants can take several weeks (if not months) to achieve. Our hospital is a regional-referral teaching hospital with 772 licensed beds. The neonatal intensive care unit (NICU) is a level 3, 35-bed unit where the most critically ill neonates receive care. After a sustained 3-year period of zero CLABSIs, we identified 10 infections from September 2016 through April 2018. Methods: A multidisciplinary team known as the neonatal infection prevention team (NIPT) was reinstated. This team included members from nursing and infection prevention (IP) and from NICU Shared Governance, as well as a neonatal nurse practitioner (NNP) and a neonatologist to review these CLABSIs. Evidence-based practices, policies, and procedures were implemented to help reduce CLABSIs. Nurse educators provided education and training. The infection prevention team reinstated and modified the central-line maintenance and insertion tools to document compliance and to identify any gaps in care. Nurses were expected to document line maintenance once per shift (a.m. and p.m.). All CLABSIs were entered into the CDC NHSN and the hospital’s safety event reporting system, which required follow-up by a clinical manager. The infection prevention team monitored NHSN standardized infection ratios (SIRs) monthly. The SIR is the number of observed events divided by the number predicted (calculated based on national aggregate data). Results: The highest reported quarterly SIR was 1.423, which occurred in the third quarter of 2018 (Fig. 1).
Overall compliance with line maintenance protocols was 86% on the morning shift and 89% on the afternoon shift. With implementation of an evidence-based bundle, the NICU had a rolling 12-month SIR of 0.00 as of October 2019. Conclusions: Multidisciplinary team development, implementation of evidence-based bundle elements, and education on catheter care contributed to the long-term success in decreasing CLABSI rates in our NICU. Although this implementation achieved a zero CLABSI rate, we experienced some barriers, including compliance issues with staff not completing the audit tools, staff turnover, and high patient census.
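The SIR arithmetic described above is a simple observed/predicted ratio. A minimal sketch (the counts here are hypothetical, not this unit's surveillance data):

```python
def standardized_infection_ratio(observed, predicted):
    """SIR = observed HAIs / predicted HAIs, where the predicted count
    is derived from national aggregate (e.g. NHSN baseline) data.
    An SIR below 1.0 means fewer infections occurred than predicted."""
    if predicted <= 0:
        raise ValueError("predicted count must be positive")
    return observed / predicted
```

For example, 7 observed CLABSIs against 5.0 predicted would give an SIR of 1.4; a rolling 12-month SIR of 0.00, as the NICU achieved, means zero observed events over that window.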
The utility and efficacy of bolus dose vasopressors in hemodynamically unstable patients are well-established in the fields of general anesthesia and obstetrics. However, in the prehospital setting, minimal evidence for bolus dose vasopressor use exists and is primarily limited to critical care transport use. Hypotensive episodes, whether traumatic, peri-intubation-related, or septic, increase patient mortality. The purpose of this study is to assess the efficacy and adverse events associated with prehospital bolus dose epinephrine use in non-cardiac arrest, hypotensive patients treated by a single, high-volume, ground-based Emergency Medical Services (EMS) agency.
Methods:
This is a retrospective, observational study of all non-cardiac arrest EMS patients treated for hypotension using bolus dose epinephrine from September 12, 2018 through September 12, 2019. Inclusion criteria for treatment with bolus dose epinephrine required a systolic blood pressure (SBP) measurement <90mmHg. A dose of 20mcg every two minutes, as needed, was allowed per protocol. The primary data source was the EMS electronic medical record.
Results:
Forty-two patients were treated under the protocol with a median (IQR) initial SBP immediately prior to treatment of 78mmHg (65-86) and a median (IQR) initial mean arterial pressure (MAP) of 58mmHg (50-66). The post-bolus SBP and MAP increased to 93mmHg (75-111) and 69mmHg (59-83), respectively. The two most common patient presentations requiring protocol use were altered mental status (55%) and respiratory failure (31%). Over one-half of the patients treated required both advanced airway management (62%) and multiple bolus doses of vasopressor support (55%). A single episode of transient severe hypertension (SBP>180mmHg) occurred, but there were no episodes of unstable tachyarrhythmia or cardiac arrest while en route or upon arrival to the receiving hospitals.
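MAP values such as those reported above are commonly estimated from cuff pressures. One standard approximation (shown for illustration; not necessarily how this agency's monitors derived MAP) is:

```python
def mean_arterial_pressure(sbp, dbp):
    """Approximate mean arterial pressure (mmHg) from systolic (SBP)
    and diastolic (DBP) pressure: MAP ~ DBP + (SBP - DBP) / 3,
    reflecting that diastole occupies roughly two-thirds of the
    cardiac cycle at resting heart rates."""
    return dbp + (sbp - dbp) / 3.0
```

The formula weights diastolic pressure more heavily, so a rise in SBP alone raises MAP by only about a third as much.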
Conclusion:
These preliminary data suggest that the administration of bolus dose epinephrine may be effective at rapidly augmenting hypotension in the prehospital setting with a minimal incidence of adverse events. Paramedic use of bolus dose epinephrine successfully increased SBP and MAP without clinically significant side effects. Prospective studies with larger sample sizes are needed to further investigate the effects of prehospital bolus dose epinephrine on patient morbidity and mortality.
As uncertainty remains about whether clinical response influences cognitive function after electroconvulsive therapy (ECT) for depression, we examined the effect of remission status on cognitive function in depressed patients 4 months after a course of ECT.
Method
A secondary analysis was undertaken on participants completing a randomised controlled trial of ketamine augmentation of ECT for depression who were categorised by remission status (MADRS ⩽10 v. >10) 4 months after ECT. Cognition was assessed with self-rated memory and neuropsychological tests of anterograde verbal and visual memory, autobiographical memory, verbal fluency and working memory. Patients were assessed throughout the study, healthy controls on a single occasion, and compared using analysis of variance.
Results
At 4-month follow-up, remitted patients (N = 18) had a mean MADRS depression score of 3.8 (95% CI 2.2–5.4) compared with 27.2 (23.0–31.5) in non-remitted patients (N = 19), with no significant baseline differences between the two groups. Patients were impaired on all cognitive measures at baseline. There was no deterioration, with some measures improving, 4 months after ECT, at which time remitted patients had significantly improved self-rated memory, anterograde verbal memory and category verbal fluency compared with those remaining depressed. Self-rated memory correlated with category fluency and autobiographical memory at follow-up.
Conclusions
We found no evidence of persistent impairment of cognition after ECT. Achieving remission improved subjective memory and verbal memory recall, but other aspects of cognitive function were not influenced by remission status. Self-rated memory may be useful to monitor the effects of ECT on longer-term memory.
Perfectionism is a transdiagnostic risk factor across psychopathology. The Clinical Perfectionism Questionnaire (CPQ) was developed to assess change in order to provide clinical utility, but currently the psychometric properties of the CPQ with adolescents are unknown.
Aims:
To assess the factor structure and construct validity of the CPQ in female adolescents.
Method:
The CPQ was administered to 267 females aged 14–19 years. Confirmatory factor analysis (CFA) was used to examine the validity of the two-factor model and a second-order factor model. Pearson correlations were used to evaluate the relationships between the CPQ and a wide range of measures of perfectionism, psychopathology and personality traits.
Results:
The study demonstrated internal consistency, construct validity and incremental validity of the CPQ in a sample of female adolescents. The CFA in the present study confirmed the two-factor model of the CPQ, with Factor 1 relating to perfectionistic strivings and Factor 2 representing perfectionistic concerns. The second-order two-factor model indicated no deterioration in fit.
Conclusions:
The two-factor model of the CPQ fits with the theoretical definition of clinical perfectionism where the over-dependence of self-worth on achievement and concern over mistakes are key elements. The CPQ is suitable for use with female adolescents in future research that seeks to better understand the role of perfectionism in the range of mental illnesses that impact youth.
Ethnic minority groups often have more complex and aversive pathways to mental health care. However, large population-based studies are lacking, particularly regarding involuntary hospitalisation. We sought to examine the risk of involuntary admission among first-generation ethnic minority groups with early psychosis in Ontario, Canada.
Methods
Using health administrative data, we constructed a retrospective cohort (2009–2013) of people with first-onset non-affective psychotic disorder aged 16–35 years. This cohort was linked to immigration data to ascertain migrant status and country of birth. We identified the first involuntary admission within 2 years and compared the risk of involuntary admission for first-generation migrant groups to the general population. To control for the role of migrant status, we restricted the sample to first-generation migrants and examined differences by country of birth, comparing risk of involuntary admission among ethnic minority groups to a European reference. We further explored the role of migrant class by adjusting for immigrant vs refugee status within the migrant cohort. We also explored effect modification of migrant class by ethnic minority group.
Results
We identified 15 844 incident cases of psychotic disorder, of whom 19% (n = 3049) were first-generation migrants. Risk of involuntary admission was higher than the general population in five of seven ethnic minority groups. African and Caribbean migrants had the highest risk of involuntary admission (African: risk ratio (RR) = 1.52, 95% CI = 1.34–1.73; Caribbean: RR = 1.58, 95% CI = 1.37–1.82), and were the only groups where the elevated risk persisted when compared to the European reference group within the migrant cohort (African: RR = 1.24, 95% CI = 1.04–1.48; Caribbean: RR = 1.29, 95% CI = 1.07–1.56). Refugee status was independently associated with involuntary admission (RR = 1.16, 95% CI = 1.02–1.32); however, this risk varied by ethnic minority group, with Caribbean refugees having an elevated risk of involuntary admission compared with Caribbean immigrants (RR = 1.72, 95% CI = 1.15–2.58).
Conclusions
Our findings are consistent with the international literature showing increased rates of involuntary admission among some ethnic minority groups with early psychosis. Interventions aimed at improving pathways to care could be targeted at these groups to reduce disparities.
Regional to global high-resolution correlation and timing are critical when attempting to answer important geological questions, such as the greenhouse to icehouse transition that occurred during the Eocene–Oligocene boundary transition. The timing of these events on a global scale can only be resolved using correlation among many sections, and multiple correlation proxies, including biostratigraphy, lithostratigraphy, geochemistry and geophysical methods. Here we present litho- and biostratigraphy for five successions located in the southeastern USA. To broaden the scope of correlation, we also employ carbon and oxygen stable isotope and magnetic susceptibility (χ) data to interpret these sections regionally, and correlate to the Global Boundary Stratotype Section and Point (GSSP) near Massignano in central Italy. Our results indicate that approaching the Eocene–Oligocene boundary, climate warmed slightly, but then δ18O data exhibit an abrupt c. +5 ‰ positive shift towards cooling that reached a maximum c. 1 m below the boundary at St Stephens Quarry, Alabama. This shift was accompanied by a c. −3 ‰ negative shift in δ13C interpreted to indicate environmental changes associated with the onset of the Eocene–Oligocene boundary planktonic foraminiferal extinction event. The observed cold pulse may be responsible for the final extinction of Hantkeninidae, used to define the beginning of the Rupelian Stage. Immediately preceding the boundary, Hantkeninidae species dropped significantly in abundance and size (pre-extinction dwarfing occurring before the final Eocene–Oligocene extinctions), and these changes may be the reason for inconsistencies in past Eocene–Oligocene boundary placement in the southeastern USA.
Background: Mindfulness-based cognitive therapy (MBCT) has evidence of efficacy in a range of populations, but few studies to date have reported on MBCT for treatment of anxious and depressive symptoms in Parkinson's disease (PD). Aims: The aim of this study was to examine the efficacy of modified MBCT in reducing symptoms of anxiety and depression and improving quality of life in PD. Method: Thirty-six individuals with PD were randomly assigned to either modified MBCT or a waitlist control. Changes in symptoms of anxiety, depression and quality of life were compared at group level using generalized linear mixed models and at individual level using reliable change analysis. Results: At post-treatment, there was a significant reduction in depressive symptoms for people undertaking modified MBCT at both group and individual levels compared with controls. There was no significant effect on anxiety or quality of life at the group level, although significantly more people had reliable improvement in anxiety after modified MBCT than after waitlist. Significantly more waitlist participants had reliable deterioration in symptoms of anxiety and depression than those completing modified MBCT. Most participants stayed engaged in modified MBCT, with only three drop-outs. Discussion: This proof-of-concept study demonstrates the potential efficacy of modified MBCT as a treatment for depressive symptoms in Parkinson's disease and suggests further research is warranted.