Patients with posttraumatic stress disorder (PTSD) exhibit smaller brain volumes in commonly reported regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis using whole-brain statistical maps with neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
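For readers who want to see the arithmetic behind the pooled effect sizes, here is a minimal sketch of a voxel-wise inverse-variance meta-analysis of Hedges' g, assuming each cohort contributes a per-voxel Cohen's d map and its group sizes. The function and variable names are illustrative; the ENIGMA-VBM tool's actual pipeline (including its weighting scheme and correction for multiple comparisons) is more involved.

```python
# Hypothetical sketch of a voxel-wise meta-analysis of group differences.
# Assumes each cohort contributes a per-voxel Cohen's d map plus group sizes;
# not the ENIGMA-VBM tool's actual implementation.
import numpy as np

def hedges_g(d, n1, n2):
    """Convert Cohen's d to Hedges' g with the small-sample correction J."""
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return j * d

def g_variance(g, n1, n2):
    """Approximate sampling variance of Hedges' g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2))

def inverse_variance_meta(d_maps, n_patients, n_controls):
    """Combine per-cohort effect maps voxel-wise with inverse-variance weights.

    d_maps: array of shape (n_cohorts, n_voxels) of Cohen's d values.
    Returns pooled Hedges' g and its z statistic per voxel.
    """
    g = np.array([hedges_g(d, n1, n2)
                  for d, n1, n2 in zip(d_maps, n_patients, n_controls)])
    v = np.array([g_variance(gi, n1, n2)
                  for gi, n1, n2 in zip(g, n_patients, n_controls)])
    w = 1.0 / v
    g_pooled = (w * g).sum(axis=0) / w.sum(axis=0)
    se_pooled = np.sqrt(1.0 / w.sum(axis=0))
    return g_pooled, g_pooled / se_pooled

# Toy example: 3 cohorts, 1000 voxels of simulated effect sizes
rng = np.random.default_rng(0)
d_maps = rng.normal(-0.2, 0.1, size=(3, 1000))
g_map, z_map = inverse_variance_meta(d_maps,
                                     n_patients=[40, 60, 30],
                                     n_controls=[50, 80, 45])
```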
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and the cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, corrected p = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, corrected p = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed that PTSD severity was negatively associated with GM volumes within the cerebellum (corrected p = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (corrected p = .001).
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
Background: Nipocalimab is a human IgG1 monoclonal antibody targeting FcRn that selectively reduces IgG levels without impacting antigen presentation or T- and B-cell functions. This study describes the effect of nipocalimab on vaccine response. Methods: This open-label, parallel, interventional study randomized participants 1:1 to receive intravenous nipocalimab 30 mg/kg at Week 0 and 15 mg/kg at Weeks 2 and 4 (active) or no drug (control). On Day 3, participants received Tdap and PPSV®23 vaccinations and were followed through Week 16. Results: Twenty-nine participants completed the study and are included (active, n = 15; control, n = 14). The proportion of participants with a positive anti-tetanus IgG response was comparable between groups at Weeks 2 and 16 but lower at Week 4 (nipocalimab 3/15 [20%] vs control 7/14 [50%]; P = 0.089). All participants maintained anti-tetanus IgG above the protective threshold (0.16 IU/mL) through Week 16. While anti-pneumococcal-capsular-polysaccharide (PCP) IgG levels were lower during nipocalimab treatment, the percent increase from baseline at Weeks 2 and 16 was comparable between groups. Post-vaccination, anti-PCP IgG remained above 50 mg/L and showed a 2-fold increase from baseline throughout the study in both groups. Nipocalimab co-administration with vaccines was safe and well tolerated. Conclusions: These findings suggest that nipocalimab does not impact the development of an adequate IgG response to T-cell–dependent and T-cell–independent vaccines and that nipocalimab-treated patients can follow recommended vaccination schedules.
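The Week 4 responder comparison above (nipocalimab 3/15 vs control 7/14) can be checked with a simple two-by-two test. The sketch below assumes a two-sided Fisher's exact test, which the abstract does not name, so the output need not match the reported P = 0.089.

```python
# Minimal check of the reported Week-4 responder comparison.
# The test choice (Fisher's exact) is an assumption; the study may have
# used a different method, so this need not reproduce P = 0.089.
from scipy.stats import fisher_exact

table = [[3, 15 - 3],   # nipocalimab: responders, non-responders
         [7, 14 - 7]]   # control:     responders, non-responders
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```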
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
Objectives/Goals: We describe the prevalence of individuals with household exposure to SARS-CoV-2 who subsequently report symptoms consistent with COVID-19 while having PCR results persistently negative for SARS-CoV-2 (S[+]/P[-]). We assess whether paired serology can assist in identifying the true infection status of such individuals. Methods/Study Population: In a multicenter household transmission study, index patients with SARS-CoV-2 were identified and enrolled together with their household contacts within 1 week of the index patient’s illness onset. For 10 consecutive days, enrolled individuals provided daily symptom diaries and nasal specimens for polymerase chain reaction (PCR) testing. Contacts were categorized into 4 groups based on the presence of symptoms (S[+/-]) and PCR positivity (P[+/-]). Acute and convalescent blood specimens from these individuals (30 days apart) were subjected to quantitative serologic analysis for SARS-CoV-2 anti-nucleocapsid, spike, and receptor-binding domain antibodies. The antibody change in S[+]/P[-] individuals was assessed against thresholds derived from receiver operating characteristic (ROC) analysis of S[+]/P[+] (infected) versus S[-]/P[-] (uninfected) individuals. Results/Anticipated Results: Among 1,433 contacts, 67% had ≥1 SARS-CoV-2 PCR[+] result, while 33% remained PCR[-]. Among the latter, 55% (n = 263) reported symptoms for at least 1 day, most commonly congestion (63%), fatigue (63%), headache (62%), cough (59%), and sore throat (50%). A history of both previous infection and vaccination was present in 37% of S[+]/P[-] individuals, 38% of S[-]/P[-], and 21% of S[+]/P[+] (P < 0.05). Vaccination alone was present in 37%, 41%, and 52%, respectively. ROC analyses of paired serologic testing of S[+]/P[+] (n = 354) vs. S[-]/P[-] (n = 103) individuals found that anti-nucleocapsid data had the highest area under the curve (0.87). Based on the 30-day antibody change, 6.9% of S[+]/P[-] individuals demonstrated an increased convalescent antibody signal, although a similar seroresponse was observed in 7.8% of the S[-]/P[-] group. Discussion/Significance of Impact: Reporting respiratory symptoms was common among household contacts with persistent PCR[-] results. Paired serology analyses found similar seroresponses between S[+]/P[-] and S[-]/P[-] individuals. The symptomatic-but-PCR-negative phenomenon, while frequent, is unlikely to be attributable to true SARS-CoV-2 infections missed by PCR.
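The ROC step can be sketched as follows, assuming the 30-day anti-nucleocapsid antibody change is the classifier score and a Youden-index rule picks the threshold; the data and variable names are simulated placeholders, not study data.

```python
# Illustrative sketch of deriving a seroresponse threshold from the 30-day
# antibody change, using infected (S+/P+) vs uninfected (S-/P-) contacts as
# reference groups. The Youden-index rule is an assumption.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Simulated log-fold antibody changes: infected contacts rise more than uninfected.
change_infected = rng.normal(1.5, 0.8, size=354)    # S+/P+
change_uninfected = rng.normal(0.1, 0.5, size=103)  # S-/P-

y_true = np.r_[np.ones(354), np.zeros(103)]
scores = np.r_[change_infected, change_uninfected]

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
youden_threshold = thresholds[np.argmax(tpr - fpr)]  # maximizes sensitivity + specificity - 1
print(f"AUC = {auc:.2f}, threshold = {youden_threshold:.2f}")
# The threshold can then be applied to S+/P- contacts to flag possible missed infections.
```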
To improve early intervention and personalise treatment for individuals early on the psychosis continuum, a greater understanding of symptom dynamics is required. We address this by identifying and evaluating the movement between empirically derived attenuated psychotic symptomatic substates—clusters of symptoms that occur within individuals over time.
Methods
Data came from a 90-day daily diary study evaluating attenuated psychotic and affective symptoms. The sample included 96 individuals aged 18–35 on the psychosis continuum, divided into four subgroups of increasing severity based on their psychometric risk of psychosis, with the fourth meeting ultra-high risk (UHR) criteria. A multilevel hidden Markov modelling (HMM) approach was used to characterise and determine the probability of switching between symptomatic substates. Individual substate trajectories and time spent in each substate were subsequently assessed.
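As a rough illustration of the substate analysis, the sketch below fits a single-level Gaussian HMM with four states using hmmlearn on simulated diary data; the study fit a multilevel HMM, which this simplified example does not reproduce.

```python
# Simplified, single-level stand-in for the multilevel HMM described above.
# Data are simulated placeholders; the number of states follows the abstract.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
n_people, n_days, n_symptoms = 96, 90, 6
diaries = rng.normal(size=(n_people * n_days, n_symptoms))  # stacked daily symptom ratings
lengths = [n_days] * n_people                               # one 90-day sequence per person

model = GaussianHMM(n_components=4, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(diaries, lengths)

states = model.predict(diaries, lengths)   # most likely substate for each person-day
switch_probs = model.transmat_             # probability of moving between substates
time_in_state = np.bincount(states, minlength=4) / len(states)
```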
Results
Four substates of increasing psychopathological severity were identified: (1) low-grade affective symptoms with negligible psychotic symptoms; (2) low levels of nonbizarre ideas with moderate affective symptoms; (3) low levels of nonbizarre ideas and unusual thought content, with moderate affective symptoms; and (4) moderate levels of nonbizarre ideas, unusual thought content, and affective symptoms. Perceptual disturbances predominantly occurred within the third and fourth substates. UHR individuals had a reduced probability of switching out of the two most severe substates.
Conclusions
Findings suggest that individuals reporting unusual thought content, rather than nonbizarre ideas in isolation, may exhibit symptom dynamics with greater psychopathological severity. Individuals at a higher risk of psychosis exhibited persistently severe symptom dynamics, indicating a potential reduction in psychological flexibility.
Increasing daylight exposure might be a simple way to improve mental health. However, little is known about daylight-symptom associations in depressive disorders.
Methods
In a subset of the Australian Genetics of Depression Study (N = 13,480; 75% female), we explored associations between the self-reported number of hours spent in daylight on a typical workday and on a typical free day and seven symptom dimensions: depressive (overall, somatic, psychological); hypo-manic-like; psychotic-like; insomnia; and daytime sleepiness. Polygenic scores for major depressive disorder (MDD), bipolar disorder (BD), and schizophrenia (SCZ) were calculated. Models were adjusted for age, sex, shift work status, employment status, season, and educational attainment. Exploratory analyses examined age-stratified associations (18–24 years; 25–34 years; 35–64 years; 65 years and older). Bonferroni-corrected associations (p < 0.004) are discussed.
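One of the covariate-adjusted association models might look like the hedged sketch below, using a statsmodels formula with hypothetical column names and a Bonferroni correction; the study's exact model specification is not given in the abstract.

```python
# Hedged sketch of a single daylight-symptom association model with covariate
# adjustment. Column names, the number of tests, and the toy data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def daylight_association(df: pd.DataFrame, symptom: str) -> float:
    """Return a Bonferroni-adjusted p-value for workday daylight hours."""
    model = smf.ols(
        f"{symptom} ~ daylight_workday + age + C(sex) + C(shift_work)"
        " + C(employment) + C(season) + C(education)",
        data=df,
    ).fit()
    n_tests = 14  # hypothetical: 7 symptom dimensions x 2 daylight measures
    return min(model.pvalues["daylight_workday"] * n_tests, 1.0)

# Toy usage with simulated data
rng = np.random.default_rng(3)
n = 500
toy = pd.DataFrame({
    "depressive_overall": rng.normal(size=n),
    "daylight_workday": rng.integers(0, 8, size=n),
    "age": rng.integers(18, 80, size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "shift_work": rng.choice(["yes", "no"], size=n),
    "employment": rng.choice(["employed", "not employed"], size=n),
    "season": rng.choice(["summer", "winter"], size=n),
    "education": rng.choice(["school", "degree"], size=n),
})
print(daylight_association(toy, "depressive_overall"))
```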
Results
Adults with depression reported spending a median of one hour in daylight on workdays and three hours on free days. More daylight exposure on workdays and free days was associated with lower depressive (overall, psychological, somatic) and insomnia symptoms (p’s < 0.001), but higher hypo-manic-like symptoms (p’s < 0.002). Genetic loading for MDD and SCZ was associated with less daylight exposure in unadjusted correlational analyses, although effect sizes were not meaningful. Exploratory analyses revealed age-related heterogeneity. Among 18–24-year-olds, no symptom dimensions were associated with daylight. By contrast, in the older age groups, more daylight exposure was associated with lower insomnia symptoms (p < 0.003), except for 25–34-year-olds on free days (p = 0.019), and with lower depressive symptoms on free days and, in some age groups, on workdays.
Conclusions
Exploration of the causal status of daylight in depression is warranted.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
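A polygenic risk score of the kind used here is, at its core, a weighted sum of risk-allele dosages. The sketch below shows that core computation on simulated data; the clumping, p-value thresholding choices, and any shrinkage applied in the actual PRS analyses are reduced here to a single threshold.

```python
# Minimal PRS sketch: weighted sum of allele dosages using GWAS effect sizes.
# Arrays are simulated placeholders; real analyses involve LD clumping,
# multiple p-value thresholds, and validation in held-out samples.
import numpy as np

rng = np.random.default_rng(4)
n_individuals, n_snps = 1000, 5000
dosages = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)  # 0/1/2 risk alleles
betas = rng.normal(0, 0.01, size=n_snps)   # per-SNP effect sizes from a discovery GWAS
pvals = rng.uniform(size=n_snps)

p_threshold = 0.05                          # include only SNPs below the threshold
keep = pvals < p_threshold
prs = dosages[:, keep] @ betas[keep]
prs_standardized = (prs - prs.mean()) / prs.std()
```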
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients, and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
Around the world, people living in objectively difficult circumstances who experience symptoms of generalized anxiety disorder (GAD) do not qualify for a diagnosis because their worry is not ‘excessive’ relative to the context. We carried out the first large-scale, cross-national study to explore the implications of removing this excessiveness requirement.
Methods
Data come from the World Health Organization World Mental Health Survey Initiative. A total of 133 614 adults from 12 surveys in Low- or Middle-Income Countries (LMICs) and 16 surveys in High-Income Countries (HICs) were assessed with the Composite International Diagnostic Interview. Non-excessive worriers meeting all other DSM-5 criteria for GAD were compared to respondents meeting all criteria for GAD, and to respondents without GAD, on clinically relevant correlates.
Results
Removing the excessiveness requirement increases the global lifetime prevalence of GAD from 2.6% to 4.0%, with larger increases in LMICs than HICs. Non-excessive and excessive GAD cases worry about many of the same things, although non-excessive cases worry more about health/welfare of loved ones, and less about personal or non-specific concerns, than excessive cases. Non-excessive cases closely resemble excessive cases in socio-demographic characteristics, family history of GAD, and risk of temporally secondary comorbidity and suicidality. Although non-excessive cases are less severe on average, they report impairment comparable to excessive cases and often seek treatment for GAD symptoms.
Conclusions
Individuals with non-excessive worry who meet all other DSM-5 criteria for GAD are clinically significant cases. Eliminating the excessiveness requirement would lead to a more defensible GAD diagnosis.
Suicide prevention strategies have shifted in many countries from a national approach to one that is regionally tailored and responsive to local community needs. Previous Australian studies support this approach. However, most studies have focused on suicide deaths, which may not fully capture prevention needs, and few have focused on the priority population of young people. This was the first nationwide study to examine regional variability in self-harm prevalence and related factors among Australian young people.
Methods
A random sample of Australian adolescents (12–17 years old) was recruited as part of the Young Minds Matter (YMM) survey. Participants completed self-report questions on self-harm (i.e., non-suicidal self-harm and suicide attempts) in the previous 12 months. Using mixed-effects regressions, an area-level model was built with YMM and Census data to produce out-of-sample small-area predictions of self-harm prevalence. The spatial unit of analysis was Statistical Area Level 1 (average population 400 people), and all prevalence estimates were updated to 2019.
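The small-area estimation step can be thought of as fitting an individual-level model on the survey data and then predicting prevalence for each area from its covariates. The sketch below illustrates that idea only; it omits the random effects for area and uses hypothetical column names.

```python
# Hedged sketch of the small-area prediction idea. Column names are
# hypothetical, and the mixed-effects (random area) structure is omitted.
import pandas as pd
import statsmodels.formula.api as smf

def small_area_prevalence(survey: pd.DataFrame, census_areas: pd.DataFrame) -> pd.Series:
    """Return predicted self-harm prevalence per Statistical Area Level 1."""
    model = smf.logit(
        "self_harm ~ psych_distress + depression"
        " + pct_single_unemployed_parent + pct_parents_born_overseas",
        data=survey,
    ).fit(disp=False)
    # Out-of-sample prediction: each SA1 row carries its area-level covariates
    # plus (for example) the area's average individual-level risk factors.
    return model.predict(census_areas)
```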
Results
Across Australia, there was large variability in youth self-harm prevalence estimates. The Northern Territory, Western Australia, and South Australia had the highest estimated state-level prevalence. Psychological distress and depression were the factors that best predicted self-harm at the individual level. At the area level, the strongest predictor was a high percentage of single unemployed parents, while living in an area where ≥30% of parents were born overseas was associated with reduced odds of self-harm.
Conclusions
This study identified characteristics of regions with lower and higher youth self-harm risk. These findings should assist governments and communities with developing and implementing regionally appropriate youth suicide prevention interventions and initiatives.
Epidemiological data offer conflicting views of the natural course of binge-eating disorder (BED), with large retrospective studies suggesting a protracted course and small prospective studies suggesting a briefer duration. We thus examined changes in BED diagnostic status in a prospective, community-based study that was larger and more representative with respect to sex, age of onset, and body mass index (BMI) than prior multi-year prospective studies.
Methods
Probands and relatives with current DSM-IV BED (n = 156) from a family study of BED (‘baseline’) were selected for follow-up at 2.5 and 5 years. Probands were required to have BMI > 25 (women) or >27 (men). Diagnostic interviews and questionnaires were administered at all timepoints.
Results
Of participants with follow-up data (n = 137), 78.1% were female, and 11.7% and 88.3% reported identifying as Black and White, respectively. At baseline, their mean age was 47.2 years, and mean BMI was 36.1. At 2.5 (and 5) years, 61.3% (45.7%), 23.4% (32.6%), and 15.3% (21.7%) of assessed participants exhibited full, sub-threshold, and no BED, respectively. No participants displayed anorexia or bulimia nervosa at follow-up timepoints. Median time to remission (i.e. no BED) exceeded 60 months, and median time to relapse (i.e. sub-threshold or full BED) after remission was 30 months. Two classes of machine learning methods did not consistently outperform random guessing at predicting time to remission from baseline demographic and clinical variables.
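The "median time to remission exceeded 60 months" result is the kind of estimate a Kaplan–Meier analysis yields when fewer than half of participants remit during follow-up. The sketch below illustrates this with lifelines on placeholder data; it is not the study's analysis code.

```python
# Illustrative Kaplan-Meier estimate of median time to remission.
# Durations and censoring indicators are simulated placeholders.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(5)
months = rng.choice([30, 60], size=137)    # time to remission or last assessment
remitted = rng.integers(0, 2, size=137)    # 1 = remission observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=remitted, label="time to remission")
# If fewer than half of participants remit, the median is undefined (inf),
# which corresponds to "median time to remission exceeded follow-up".
print(kmf.median_survival_time_)
```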
Conclusions
Among community-based adults with higher BMI, BED improves with time, but full remission often takes many years, and relapse is common.
To explore the differences in social norms around parents’ food provision in different provision contexts and by demographics.
Design:
Qualitative study using story completion methodology via an online survey in September 2021. Adults aged 18+ with or without children were randomised to one of three story stems focusing on food provision in different contexts: food provision at home (non-visitor), with visitors present, and with the involvement of sport. Stories were coded and themed using thematic analysis. A content analysis was performed to determine the count and frequency of codes in stories by participant demographics and story assumptions.
Setting:
Australia.
Participants:
Adults (n 196).
Results:
Nine themes were identified from the data, resulting in four social norms: providing healthy foods while justifying non-adherence to healthy eating guidelines; the evolution of family life and mealtime values; the presence of others influencing how we engage with food provision; and unhealthy foods used as incentives/rewards in sport. Following content analysis, no differences in themes or norms by participant demographics or story assumptions were found.
Conclusions:
We identified pervasive social norms around family food provision and further identified how contextual factors resulted in variations or distinct norms. This highlights the impact context may have on the social norms parents face when providing food to their children and the opportunities and risks of leveraging these social norms to influence food choice in these contexts. Public health interventions and practitioners should understand the influence of context and social environments when promoting behaviour change and providing individualised advice. Future research could explore parents’ experiences of these norms and to what extent they impact food choice.
Clinical outcomes of repetitive transcranial magnetic stimulation (rTMS) for treatment-resistant depression (TRD) vary widely, and there is no mood rating scale that is standard for assessing rTMS outcome. It remains unclear whether rTMS is as efficacious in older adults with late-life depression (LLD) as in younger adults with major depressive disorder (MDD). This study examined the effect of age on outcomes of rTMS treatment in adults with TRD. Self-report and observer mood ratings were measured weekly in 687 subjects aged 16–100 years undergoing rTMS treatment, using the Inventory of Depressive Symptomatology 30-item Self-Report (IDS-SR), the Patient Health Questionnaire 9-item (PHQ), the Profile of Mood States 30-item, and the Hamilton Depression Rating Scale 17-item (HDRS). All rating scales detected significant improvement with treatment; response and remission rates varied by scale but not by age (response/remission ≥ 60 years: 38%–57%/25%–33%; < 60 years: 32%–49%/18%–25%). Proportional hazards models showed that early improvement predicted later improvement across ages, though early improvements in PHQ and HDRS were more predictive of remission in those < 60 years (relative to those ≥ 60), and greater baseline IDS symptom burden was more predictive of non-remission in those ≥ 60 years (relative to those < 60). These results indicate no significant effect of age on treatment outcomes of rTMS for TRD, though rating instruments may differ in their assessment of symptom burden between younger and older adults during treatment.
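A hedged sketch of the proportional-hazards idea in the abstract above: time to remission modelled on early improvement with an age-group interaction, using lifelines' CoxPHFitter on simulated data. The column names and the week-2 definition of "early improvement" are assumptions, not the study's specification.

```python
# Illustrative Cox proportional-hazards model: does early symptom improvement
# predict time to remission, and does this differ by age group? Simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 687
df = pd.DataFrame({
    "weeks_to_remission": rng.integers(2, 9, size=n),
    "remitted": rng.integers(0, 2, size=n),
    "early_improvement": rng.normal(0.2, 0.15, size=n),  # fractional change by week 2 (assumed)
    "age_60_plus": rng.integers(0, 2, size=n),
})
df["early_x_age"] = df["early_improvement"] * df["age_60_plus"]

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_remission", event_col="remitted")
print(cph.summary[["coef", "p"]])
```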
Loss of control eating is more likely to occur in the evening and is uniquely associated with distress. No studies have examined the effect of treatment on within-day timing of loss of control eating severity. We examined whether time of day differentially predicted loss of control eating severity at baseline (i.e. pretreatment), end-of-treatment, and 6-month follow-up for individuals with binge-eating disorder (BED), hypothesizing that loss of control eating severity would increase throughout the day pretreatment and that this pattern would be less pronounced following treatment. We explored differential treatment effects of cognitive-behavioral guided self-help (CBTgsh) and Integrative Cognitive-Affective Therapy (ICAT).
Methods
Individuals with BED (N = 112) were randomized to receive CBTgsh or ICAT and completed a 1-week ecological momentary assessment protocol at baseline, end-of-treatment, and 6-month follow-up to assess loss of control eating severity. We used multilevel models to assess within-day slope trajectories of loss of control eating severity across assessment periods and treatment type.
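The within-day slope model could be specified roughly as below, with a random intercept and time-of-day slope per participant via statsmodels MixedLM; column names are hypothetical placeholders for the EMA data, and the study's actual model may include additional terms.

```python
# Hedged sketch of a multilevel model for within-day loss-of-control severity.
# EMA column names are assumptions; not the study's analysis code.
import statsmodels.formula.api as smf

def fit_within_day_model(ema):
    """ema: long-format DataFrame with one row per EMA prompt."""
    model = smf.mixedlm(
        "loc_severity ~ time_of_day * C(period) * C(treatment)",
        data=ema,
        groups=ema["participant_id"],
        re_formula="~time_of_day",   # random within-day slope per participant
    )
    return model.fit()
```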
Results
Within-day increases in loss of control eating severity were reduced at end-of-treatment and 6-month follow-up relative to baseline. Evening acceleration of loss of control eating severity was greater at 6-month follow-up relative to end-of-treatment. Within-day increases in loss of control severity did not differ between treatments at end-of-treatment; however, evening loss of control severity intensified for individuals who received CBTgsh relative to those who received ICAT at 6-month follow-up.
Conclusions
Findings suggest that treatment reduces evening-shifted loss of control eating severity, and that this effect may be more durable following ICAT relative to CBTgsh.
Patients tested for Clostridioides difficile infection (CDI) using a 2-step algorithm with a nucleic acid amplification test (NAAT) followed by toxin assay are not reported to the National Healthcare Safety Network as a laboratory-identified CDI event if they are NAAT positive (+)/toxin negative (−). We compared NAAT+/toxin− and NAAT+/toxin+ patients and identified factors associated with CDI treatment among NAAT+/toxin− patients.
Design:
Retrospective observational study.
Setting:
The study was conducted across 36 laboratories at 5 Emerging Infections Program sites.
Patients:
We defined a CDI case as a positive test detected by this 2-step algorithm during 2018–2020 in a patient aged ≥1 year with no positive test in the previous 8 weeks.
Methods:
We used multivariable logistic regression to compare CDI-related complications and recurrence between NAAT+/toxin− and NAAT+/toxin+ cases. We used a mixed-effects logistic model to identify factors associated with treatment in NAAT+/toxin− cases.
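A minimal sketch of the first model, the multivariable logistic regression yielding adjusted odds ratios, is shown below with hypothetical covariates; the mixed-effects logistic model for treatment among NAAT+/toxin− cases (with a random effect for laboratory or site) is not reproduced here.

```python
# Hedged sketch: adjusted odds ratios for recurrence comparing NAAT+/toxin-
# with NAAT+/toxin+ cases. Covariate names are assumptions.
import numpy as np
import statsmodels.formula.api as smf

def adjusted_odds_ratios(cases):
    """cases: DataFrame with one row per CDI case."""
    fit = smf.logit(
        "recurrence ~ toxin_negative + age + C(sex) + C(healthcare_onset)",
        data=cases,
    ).fit(disp=False)
    return np.exp(fit.params), np.exp(fit.conf_int())  # aORs with 95% CIs
```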
Results:
Of 1,801 cases, 1,252 were NAAT+/toxin−, and 549 were NAAT+/toxin+. CDI treatment was given to 866 (71.5%) of 1,212 NAAT+/toxin− cases versus 510 (95.9%) of 532 NAAT+/toxin+ cases (P < .0001). NAAT+/toxin− status was protective for recurrence (adjusted odds ratio [aOR], 0.65; 95% CI, 0.55–0.77) but not CDI-related complications (aOR, 1.05; 95% CI, 0.87–1.28). Among NAAT+/toxin− cases, white blood cell count ≥15,000/µL (aOR, 1.87; 95% CI, 1.28–2.74), ≥3 unformed stools for ≥1 day (aOR, 1.90; 95% CI, 1.40–2.59), and diagnosis by a laboratory that provided no or neutral interpretive comments (aOR, 3.23; 95% CI, 2.23–4.68) were predictors of CDI treatment.
Conclusion:
Use of this 2-step algorithm likely results in underreporting of some NAAT+/toxin− cases with clinically relevant CDI. Disease severity and laboratory interpretive comments influence treatment decisions for NAAT+/toxin− cases.
Female fertility is a complex trait with age-specific changes in spontaneous dizygotic (DZ) twinning and fertility. To elucidate factors regulating female fertility and infertility, we conducted a genome-wide association study (GWAS) of mothers of spontaneous DZ twins (MoDZT) versus controls (3273 cases, 24,009 controls). This is a follow-up to the Australia/New Zealand (ANZ) component of the study previously reported (Mbarek et al., 2016), with a sample size almost twice that of the entire discovery sample meta-analysed in the previous article (and five times the ANZ contribution to that), resulting from newly available additional genotyping and representing a significant increase in power. We compare analyses with and without male controls and show unequivocally that it is better to include male controls who have been screened for recent family history than to use only female controls. The SNP-based GWAS identified four genome-wide significant signals, including one novel region, ZFPM1 (Zinc Finger Protein, FOG Family Member 1), on chromosome 16. Previous signals near FSHB (Follicle Stimulating Hormone beta subunit) and SMAD3 (SMAD Family Member 3) were also replicated (Mbarek et al., 2016). We also ran the GWAS with a dominance model, which identified a further locus, ADRB2, on chromosome 5. These results have been contributed to the International Twinning Genetics Consortium for inclusion in the next GWAS meta-analysis (Mbarek et al., in press).
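To illustrate the difference between the additive and non-additive coding mentioned above, the sketch below tests one simulated SNP under an additive coding and a simple dominant (carrier vs non-carrier) coding with logistic regression; a real GWAS pipeline would add covariates such as ancestry principal components and use specialised software.

```python
# Illustrative per-SNP association test under two genotype codings.
# Data are simulated; this is not the study's GWAS pipeline.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
genotype = rng.integers(0, 3, size=n)     # 0/1/2 copies of the minor allele
phenotype = rng.integers(0, 2, size=n)    # 1 = mother of DZ twins, 0 = control

additive = genotype.astype(float)         # additive coding: 0, 1, 2
dominant = (genotype > 0).astype(float)   # carrier vs non-carrier (one non-additive coding)

for name, coding in [("additive", additive), ("dominant", dominant)]:
    X = sm.add_constant(coding)
    fit = sm.Logit(phenotype, X).fit(disp=False)
    print(name, fit.pvalues[1])
```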
The Upper Permian sedimentary successions in the northern Sydney Basin have been the subject of several stratigraphic, sedimentological and coal petrographic studies, and recently, extensive U-Pb zircon dating has been carried out on tuffs in the Newcastle Coal Measures. However, detailed petrographic and geochemical studies of these successions are lacking. Such studies are important because a major change in tectonic setting occurred prior to the Late Permian as a result of the Hunter-Bowen Orogeny, which uplifted the Carboniferous and Devonian successions of the Tamworth Group and Tablelands Complex adjacent to the Sydney Basin; this change should be reflected in the detrital makeup of the Upper Permian rocks. This study provides data confirming that major changes did take place at this time. Petrographic analysis indicates that the source area is composed of sedimentary, felsic volcanic and plutonic, and low-grade metamorphic rocks. Conglomerate clast composition analysis confirms these results, revealing a source region composed of felsic volcanics, cherts, mudstones and sandstones. Geochemical analysis suggests that the sediments are geochemically mature and have undergone a moderate degree of weathering. The provenance data presented in this paper indicate that the southern New England Orogen is the principal source of detritus in the basin. Discrimination diagrams confirm that the source rocks derive from an arc-related, contractional setting and agree with the provenance analyses that indicate sediment deposition in a retroarc foreland basin. This study offers new insights into the provenance and tectonic setting of the northern Sydney Basin, eastern Australia.
Despite its documented efficacy, a substantial proportion of patients discontinue antidepressant medication (ADM) without a doctor's recommendation. The current report integrates data on patient-reported reasons into an investigation of patterns and predictors of ADM discontinuation.
Methods
Face-to-face interviews with community samples from 13 countries (n = 30 697) in the World Mental Health (WMH) Surveys included n = 1890 respondents who used ADMs within the past 12 months.
Results
Overall, 10.9% of 12-month ADM users reported discontinuation based on the recommendation of the prescriber, while 15.7% discontinued in the absence of a prescriber recommendation. The main patient-reported reason for discontinuation was feeling better (46.6%), which was reported by a higher proportion of patients who discontinued within the first 2 weeks of treatment than among those who discontinued later. Perceived ineffectiveness (18.5%), predisposing factors (e.g. fear of dependence) (20.0%), and enabling factors (e.g. inability to afford treatment costs) (5.0%) were much less commonly reported reasons. Discontinuation in the absence of a prescriber recommendation was associated with low country income level, being employed, and having above-average personal income. Age, prior history of psychotropic medication use, and being prescribed treatment by a psychiatrist rather than a general medical practitioner, in comparison, were associated with a lower probability of this type of discontinuation. However, these predictors varied substantially depending on patient-reported reasons for discontinuation.
Conclusion
Dropping out early is not necessarily negative, with almost half of individuals noting that they felt better. The study underscores the diverse reasons given for dropping out and the need to evaluate how and whether dropping out influences short- or long-term functioning.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
Methamphetamine and cannabis are two widely used, and frequently co-used, substances with possibly opposing effects on the central nervous system. Evidence of neurocognitive deficits related to use is robust for methamphetamine and mixed for cannabis. Findings regarding their combined use are inconclusive. We aimed to compare neurocognitive performance in people with lifetime cannabis or methamphetamine use disorder diagnoses, or both, relative to people without substance use disorders.
Method:
A total of 423 participants (71.9% male; mean age 44.6 ± 14.2 years), stratified by the presence or absence of lifetime methamphetamine (M−/M+) and/or cannabis (C−/C+) DSM-IV abuse/dependence, completed a comprehensive neuropsychological, substance use, and psychiatric assessment. Neurocognitive domain T-scores and impairment rates were examined using multiple linear and binomial regression, respectively, controlling for covariates that may impact cognition.
Results:
Globally, M+C+ performed worse than M−C− but better than M+C−. M+C+ outperformed M+C− on measures of verbal fluency, information processing speed, learning, memory, and working memory. M−C+ did not display lower performance than M−C− globally or on any domain measures, and M−C+ even performed better than M−C− on measures of learning, memory, and working memory.
Conclusions:
Our findings are consistent with prior work showing that methamphetamine use confers risk for worse neurocognitive outcomes, and that cannabis use does not appear to exacerbate, and may even reduce, this risk. People with a history of cannabis use disorders performed similarly to our non-substance-using comparison group and outperformed them in some domains. These findings warrant further investigation of whether cannabis use may ameliorate methamphetamine neurotoxicity.