Preclinical evidence suggests that diazepam enhances hippocampal γ-aminobutyric acid (GABA) signalling and normalises a psychosis-relevant cortico-limbic-striatal circuit. Hippocampal network dysconnectivity, particularly from the CA1 subfield, is evident in people at clinical high-risk for psychosis (CHR-P), representing a potential treatment target. This study aimed to forward-translate this preclinical evidence.
Methods
In this randomised, double-blind, placebo-controlled study, 18 CHR-P individuals underwent resting-state functional magnetic resonance imaging twice, once following a 5 mg dose of diazepam and once following a placebo. They were compared to 20 healthy controls (HC) who did not receive diazepam/placebo. Functional connectivity (FC) between the hippocampal CA1 subfield and the nucleus accumbens (NAc), amygdala, and ventromedial prefrontal cortex (vmPFC) was calculated. Mixed-effects models investigated the effect of group (CHR-P placebo/diazepam vs. HC) and condition (CHR-P diazepam vs. placebo) on CA1-to-region FC.
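As a concrete illustration of this design, the sketch below shows how the group and condition contrasts on CA1-to-region FC could be set up in Python; it assumes a hypothetical long-format table (columns subject, group, condition, region, fc) and is not the authors' analysis code.

```python
# Hedged sketch, not the authors' pipeline: group and condition contrasts on
# CA1-vmPFC functional connectivity from an assumed long-format table.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ca1_fc_long.csv")              # assumed file and column names
vmpfc = df[df["region"] == "vmPFC"]

# Group contrast: CHR-P (placebo scan) vs. healthy controls (single scan each)
grp = vmpfc[vmpfc["condition"].isin(["placebo", "none"])]
print(smf.ols("fc ~ group", data=grp).fit().summary())

# Condition contrast within CHR-P: diazepam vs. placebo, with a random
# intercept per subject to account for the repeated scans
chrp = vmpfc[vmpfc["group"] == "CHR-P"]
print(smf.mixedlm("fc ~ condition", chrp, groups=chrp["subject"]).fit().summary())
```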
Results
In the placebo condition, CHR-P individuals showed significantly lower CA1-vmPFC (Z = 3.17, p_FWE = 0.002) and CA1-NAc (Z = 2.94, p_FWE = 0.005) FC compared to HC. In the diazepam condition, CA1-vmPFC FC was significantly increased (Z = 4.13, p_FWE = 0.008) compared to placebo in CHR-P individuals, and both CA1-vmPFC and CA1-NAc FC were normalised to HC levels. In contrast, compared to HC, CA1-amygdala FC was significantly lower contralaterally and higher ipsilaterally in CHR-P individuals in both the placebo and diazepam conditions (lower: placebo Z = 3.46, p_FWE = 0.002, diazepam Z = 3.33, p_FWE = 0.003; higher: placebo Z = 4.48, p_FWE < 0.001, diazepam Z = 4.22, p_FWE < 0.001).
Conclusions
This study demonstrates that diazepam can partially restore hippocampal CA1 dysconnectivity in CHR-P individuals, suggesting that modulation of GABAergic function might be useful in the treatment of this clinical group.
Iron deficiency has been associated with heart failure severity and mortality in children and adults. Intravenous iron therapy has been associated with improved outcomes for adults with heart failure. However, little is known about its impact and safety in children. We performed a single-centre review of all intravenous iron sucrose infusions prescribed to hospitalised patients ≤ 21 years of age with a primary cardiac diagnosis from 2020 to 2022. Ninety-one children (median age 6 years, weight 18 kg) received 339 iron sucrose infusions with a median dose of 6.5 mg/kg [5.1 mg/kg, 7.0 mg/kg]. At initial infusion, the majority (n = 63, 69%) had congenital heart disease (CHD), 70 patients (77%) were being managed by the advanced cardiac therapy team for heart failure, 13 (14%) were listed for heart transplant, 32 (35%) were on at least one vasoactive infusion, and 5 (6%) were supported with a ventricular assist device. Twenty infusions (6%) were associated with 27 possible infusion-related adverse events in 15 patients. There were no episodes of anaphylaxis or life-threatening adverse events. The most common adverse events were hypotension (n = 12), fever (n = 5), tachycardia (n = 3), and nausea/vomiting (n = 3). Eight of 20 infusion-related adverse events required intervention, and two infusions were associated with escalation in a patient’s level of care. Following intravenous iron repletion, patients’ serum iron, serum ferritin, transferrin saturation, and haemoglobin increased (p < 0.05 for all). In children hospitalised with cardiac disease, intravenous iron sucrose repletion is safe and may improve haemoglobin and iron parameters, including transferrin saturation and ferritin levels.
Patients with posttraumatic stress disorder (PTSD) exhibit smaller regional brain volumes, most commonly reported in the amygdala and hippocampus, regions associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis using whole-brain statistical maps with neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables, including PTSD severity (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
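For readers unfamiliar with the effect-size metric used below, the following is a minimal sketch of a voxel-wise Hedges' g (standardized mean difference with small-sample correction) for one cohort; it is a standard formula, not the ENIGMA-VBM tool itself, and the data are simulated.

```python
# Illustrative Hedges' g between PTSD and control groups at a single voxel.
import numpy as np

def hedges_g(x_pat, x_con):
    """Standardized mean difference with Hedges' small-sample correction."""
    n1, n2 = len(x_pat), len(x_con)
    s_pooled = np.sqrt(((n1 - 1) * np.var(x_pat, ddof=1) +
                        (n2 - 1) * np.var(x_con, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(x_pat) - np.mean(x_con)) / s_pooled
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)      # small-sample correction factor
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2))  # approx. variance
    return g, var_g

rng = np.random.default_rng(0)                   # simulated GM volumes at one voxel
print(hedges_g(rng.normal(0.48, 0.05, 40), rng.normal(0.50, 0.05, 60)))
```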
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes, and cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, p_corrected = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, p_corrected = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed PTSD severity was negatively associated with GM volumes within the cerebellum (p_corrected = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (p_corrected = .001).
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
Current evidence underscores a need to transform how we conduct clinical research: shifting from academically driven priorities to programs co-led with community partners, building accessible and relevant career pathway programs that expand opportunities for career development, and designing trainings and practices that develop cultural competence among research teams. Failures of equitable research translation contribute to health disparities. Drivers of this failed translation include lack of diversity among both researchers and participants, lack of alignment between research institutions and the communities they serve, and lack of attention to structural sources of inequity and drivers of mistrust in science and research. The Duke University Research Equity and Diversity Initiative (READI) is a program designed to better align clinical research programs with community health priorities through community engagement. Organized around three specific aims, READI supports programs targeting increased workforce diversity, workforce training in community engagement and cultural competence, inclusive research engagement principles, and development of trustworthy partnerships.
Posttraumatic stress disorder (PTSD) has been associated with advanced epigenetic age cross-sectionally, but the association between these variables over time is unclear. This study conducted meta-analyses to test whether new-onset PTSD diagnosis and changes in PTSD symptom severity over time were associated with changes in two metrics of epigenetic aging over two time points.
Methods
We conducted meta-analyses of the association between change in PTSD diagnosis and symptom severity and change in epigenetic age acceleration/deceleration (age-adjusted DNA methylation age residuals as per the Horvath and GrimAge metrics) using data from 7 military and civilian cohorts participating in the Psychiatric Genomics Consortium PTSD Epigenetics Workgroup (total N = 1,367).
Results
Meta-analysis revealed that the interaction between Time 1 (T1) Horvath age residuals and new-onset PTSD over time was significantly associated with Horvath age residuals at T2 (meta β = 0.16, meta p = 0.02, p-adj = 0.03). The interaction between T1 Horvath age residuals and changes in PTSD symptom severity over time was significantly related to Horvath age residuals at T2 (meta β = 0.24, meta p = 0.05). No associations were observed for GrimAge residuals.
Conclusions
Results indicated that individuals who developed new-onset PTSD or showed increased PTSD symptom severity over time evidenced greater epigenetic age acceleration at follow-up than would be expected based on baseline age acceleration. This suggests that PTSD may accelerate biological aging over time and highlights the need for intervention studies to determine if PTSD treatment has a beneficial effect on the aging methylome.
Since cannabis was legalized in Canada in 2018, its use among older adults has increased. Although cannabis may exacerbate cognitive impairment, there are few studies on its use among older adults being evaluated for cognitive disorders.
Methods:
We analyzed data from 238 patients who attended a cognitive clinic between 2019 and 2023 and provided data on cannabis use. Health professionals collected information using a standardized case report form.
Results:
Cannabis use was reported by 23 out of 238 patients (9.7%): 12 took cannabis for recreation, 8 for medicinal purposes and 3 for both. Compared to non-users, cannabis users were younger (mean ± SD 62.0 ± 7.5 vs 68.9 ± 9.5 years; p = 0.001), more likely to have a mood disorder (p < 0.05) and more likely to be current or former cigarette smokers (p < 0.05). There were no significant differences in sex, race or education. The proportion with dementia compared with pre-dementia cognitive states did not differ significantly in users compared with non-users. Cognitive test scores were similar in users and non-users (Montreal Cognitive Assessment: 20.4 ± 5.0 vs 20.7 ± 4.5, p = 0.81; Folstein Mini-Mental State Examination: 24.5 ± 5.1 vs 26.0 ± 3.6, p = 0.25). The prevalence of insomnia, obstructive sleep apnea, anxiety disorders, alcohol use or psychotic disorders did not differ significantly.
Conclusion:
The prevalence of cannabis use among patients with cognitive concerns in this study was similar to the general Canadian population aged 65 and older. Further research is necessary to investigate patients’ motivations for use and explore the relationship between cannabis use and mood disorders and cognitive decline.
Partial remission after major depressive disorder (MDD) is common and a robust predictor of relapse. However, it remains unclear to what extent preventive psychological interventions reduce depressive symptomatology and relapse risk after partial remission. We aimed to identify variables predicting relapse and to determine whether, and for whom, psychological interventions are effective in preventing relapse, reducing (residual) depressive symptoms, and increasing quality of life among individuals in partial remission. This preregistered (CRD42023463468) systematic review and individual participant data meta-analysis (IPD-MA) pooled data from 16 randomized controlled trials (n = 705 partial remitters) comparing psychological interventions to control conditions, using 1- and 2-stage IPD-MA. Among partial remitters, baseline clinician-rated depressive symptoms (p = .005) and prior episodes (p = .012) predicted relapse. Psychological interventions were associated with reduced relapse risk over 12 months (hazard ratio [HR] = 0.60, 95% confidence interval [CI] 0.43–0.84), and significantly lowered posttreatment depressive symptoms (Hedges’ g = 0.29, 95% CI 0.04–0.54), with sustained effects at 60 weeks (Hedges’ g = 0.33, 95% CI 0.06–0.59), compared to nonpsychological interventions. However, interventions did not significantly improve quality of life at 60 weeks (Hedges’ g = 0.26, 95% CI -0.06 to 0.58). No moderators of relapse prevention efficacy were found. Men, older individuals, and those with higher baseline symptom severity experienced greater reductions in symptomatology at 60 weeks. Psychological interventions for individuals with partially remitted depression reduce relapse risk and residual symptomatology, with efficacy generalizing across patient characteristics and treatment types. This suggests that psychological interventions are a recommended treatment option for this patient population.
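To make the survival analysis concrete, the sketch below fits a Cox proportional-hazards model of time to relapse, stratified by trial, as a simple stand-in for the one-stage IPD analysis; the file and column names are assumed, not taken from the included trials.

```python
# Hedged sketch, not the authors' analysis code.
import pandas as pd
from lifelines import CoxPHFitter

ipd = pd.read_csv("ipd_partial_remitters.csv")   # assumed columns below
cols = ["weeks_to_relapse", "relapsed", "intervention", "trial"]

cph = CoxPHFitter()
cph.fit(ipd[cols], duration_col="weeks_to_relapse", event_col="relapsed",
        strata=["trial"])                        # baseline hazard varies by trial
cph.print_summary()                              # exp(coef) of 'intervention' is the HR
```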
Formulas are derived by which, given the factor loadings and the internal reliability of a test of unit length, the following estimates can be made: (1) the common-factor loadings for a similar (homogeneous) test of length n; (2) the number of times (n) that a test needs to be lengthened homogeneously to achieve a factor loading of a desired magnitude; and (3) the correlation between two tests, either or both of which have been altered in length, as a function of (a) the new factor loadings in the altered tests or (b) the original loadings in the unit-length tests. The appropriate use of the derived formulas depends upon the fulfillment of the four assumptions enumerated.
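For orientation, the lengthening relations implied by this setup take a Spearman–Brown-like form; the equations below are a hedged reconstruction under the usual assumptions (homogeneous, parallel components), not necessarily the paper's exact notation.

```latex
% Unit-length test with common-factor loading a and reliability r_{11},
% lengthened homogeneously to n units (illustrative reconstruction).
\[
  a_n = \frac{a\sqrt{n}}{\sqrt{1 + (n-1)\,r_{11}}}
  \qquad \text{(loading of the lengthened test)}
\]
\[
  n = \frac{a_n^{2}\,(1 - r_{11})}{a^{2} - a_n^{2}\,r_{11}}
  \qquad \text{(length required to reach a desired loading } a_n\text{)}
\]
```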
Two current methods of deriving common-factor scores from tests are briefly examined and rejected. One of these estimates a score from a multiple-regression equation with as many terms as there are tests in the battery. The other limits the equation to a few tests heavily saturated with the desired factor, with or without tests used to suppress the undesired factors. In the proposed methods, the single best test for each common factor is the starting point. Such a test ordinarily has very few undesired factors to be suppressed, frequently only one. The suppression test should be univocal, or nearly so. Fortunately, there are relatively univocal tests for factors that commonly require suppression. Equations are offered by which the desired-factor test and a single suppression test can be weighted in order to achieve one or more objectives. Among the objectives are (1) maximizing the desired factor variance, (2) minimizing the undesired factor variance, (3) a compromise, in which the undesired variance is materially reduced without loss in desired variance, and (4) a change to any selected ratio of desired to undesired variance. A more generalized solution is also suggested. The methods can be extended in part to the suppression of more than one factor. Equations are derived for the suppression of two factors.
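The weighting logic can be illustrated with a two-test case under simplifying assumptions (standardized scores, a single shared undesired factor, the suppression test univocal); this is a sketch of the idea, not the paper's own equations.

```latex
% Test 1: z_1 = a_1 F + b_1 G + e_1  (desired factor F, undesired factor G)
% Test 2: z_2 = b_2 G + e_2          (univocal suppression test)
\[
  C = z_1 - w\,z_2 = a_1 F + (b_1 - w\,b_2)\,G + e_1 - w\,e_2 .
\]
% Choosing w = b_1 / b_2 annihilates the undesired variance (objective 2);
% other choices of w trade desired against undesired variance, since the
% proportion of desired variance in the composite is
\[
  \frac{a_1^{2}}{\operatorname{Var}(C)} = \frac{a_1^{2}}{1 + w^{2} - 2\,w\,b_1 b_2}.
\]
```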
Functional impairment is a major concern among those presenting to youth mental health services and can have a profound impact on long-term outcomes. Early recognition and prevention for those at risk of functional impairment are essential to guide effective youth mental health care. Yet identifying those at risk is challenging, which hampers the appropriate allocation of indicated prevention and early intervention strategies.
Methods
We developed a prognostic model to predict a young person’s social and occupational functional impairment trajectory over 3 months. The sample included 718 young people (12–25 years) engaged in youth mental health care. A Bayesian random effects model was designed using demographic and clinical factors, and model performance was evaluated on held-out test data via 5-fold cross-validation.
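As an illustration of the evaluation scheme, the sketch below computes a 5-fold cross-validated AUC with a logistic regression as a simple stand-in for the Bayesian random-effects model; the file, feature names, and outcome label are placeholders.

```python
# Hedged sketch, not the authors' model: cross-validated AUC for a binary
# functional-impairment outcome using assumed predictor columns.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("youth_cohort.csv")             # assumed file
features = ["neet_status", "self_harm", "psychotic_like_experiences",
            "physical_comorbidity", "childhood_onset_syndrome",
            "illness_type", "clinical_stage", "circadian_disturbance"]
X, y = df[features], df["functional_impairment_3m"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"fold AUCs: {aucs.round(2)}, mean = {aucs.mean():.2f}")
```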
Results
Eight factors were identified as the optimal set for prediction: employment, education, or training status; self-harm; psychotic-like experiences; physical health comorbidity; childhood-onset syndrome; illness type; clinical stage; and circadian disturbances. The model had an acceptable area under the curve (AUC) of 0.70 (95% CI, 0.56–0.81) overall, indicating its utility for predicting functional impairment over 3 months. For those with good baseline functioning, it showed excellent performance (AUC = 0.80, 0.67–0.79) for identifying individuals at risk of deterioration.
Conclusions
We developed and validated a prognostic model for youth mental health services to predict functional impairment trajectories over a 3-month period. This model serves as a foundation for further tool development and demonstrates its potential to guide indicated prevention and early intervention for enhancing functional outcomes or preventing functional decline.
We examine whether the “privileged coordinates” of a geometric space encode its “amount of structure.” In doing so, we compare this coordinate approach to comparing amounts of structure to the more familiar automorphism approach. We first show that on a natural understanding of the former, it faces one of the same well-known problems as the latter. We then capture a precise sense in which the two approaches are closely related to one another, and we conclude by discussing whether they might still prove useful in cases of philosophical interest, despite their shortcomings.
Weeds are one of the greatest challenges to snap bean (Phaseolus vulgaris L.) production. Anecdotal observations suggest that certain species frequently escape the weed management system by the time of crop harvest; these are hereafter called residual weeds. The objectives of this work were to (1) quantify the residual weed community in snap bean grown for processing across the major growing regions in the United States and (2) investigate linkages between the density of residual weeds and their contributions to weed canopy cover. In surveys of 358 fields across the Northwest (NW), Midwest (MW), and Northeast (NE), residual weeds were observed in 95% of the fields. While a total of 109 species or species-groups were identified, one to three species dominated the residual weed community of individual fields in most cases. It was not uncommon to have >10 weeds m⁻² with a weed canopy covering >5% of the field’s surface area. Some of the most abundant and problematic species or species-groups escaping control included amaranth species such as smooth pigweed (Amaranthus hybridus L.), Palmer amaranth (Amaranthus palmeri S. Watson), redroot pigweed (Amaranthus retroflexus L.), and waterhemp [Amaranthus tuberculatus (Moq.) Sauer]; common lambsquarters (Chenopodium album L.); large crabgrass [Digitaria sanguinalis (L.) Scop.]; and ivyleaf morningglory (Ipomoea hederacea Jacq.). Emerging threats include hophornbeam copperleaf (Acalypha ostryifolia Riddell) in the MW and sharppoint fluvellin [Kickxia elatine (L.) Dumort.] in the NW. Beyond crop losses due to weed interference, the weed canopy at harvest poses a risk of contaminating snap bean products with foreign material. Random forest modeling predicts that the residual weed canopy is dominated by C. album, D. sanguinalis, carpetweed (Mollugo verticillata L.), I. hederacea, amaranth species, and A. ostryifolia. This is the first quantitative report on the weed community escaping control in U.S. snap bean production.
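As a sketch of how such a species-level ranking could be produced, the code below fits a random-forest regression of field-level canopy cover on residual weed densities and ranks species by permutation importance; the file and column names are illustrative, not the survey's actual variables.

```python
# Hedged sketch, not the authors' analysis code.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

fields = pd.read_csv("snap_bean_survey.csv")     # assumed: one row per surveyed field
species_cols = [c for c in fields.columns if c.startswith("density_")]
X, y = fields[species_cols], fields["canopy_cover_pct"]

rf = RandomForestRegressor(n_estimators=500, random_state=1).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=1)
ranking = pd.Series(imp.importances_mean, index=species_cols).sort_values(ascending=False)
print(ranking.head(6))                           # top canopy-driving species
```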
In response to the COVID-19 pandemic, we rapidly implemented a plasma coordination center within two months to support transfusion for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain constraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is rare in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template for subsequent public health emergencies.
Head and neck squamous cell carcinomas (HNSCCs) are aggressive tumours lacking a standardised timeline for treatment initiation post-diagnosis. Delays beyond 60 days are linked to poorer outcomes and higher recurrence risk.
Methods:
A retrospective review was conducted on patients over 18 with HNSCC treated with (chemo)radiation at a rural tertiary care centre (September 2020–2022). Data on patient demographics, oncologic characteristics, treatment details and delay causes were analysed using SPSS.
Results:
Out of 93 patients, 35.5% experienced a time to treatment initiation (TTI) of more than 60 days. Median TTI was 73 days for delayed cases, compared to 41.5 days otherwise. No significant differences in demographics or cancer characteristics were observed between groups. The primary reasons for delay were care coordination (69.7%) and patient factors (18.2%). AJCC cancer stage showed a trend towards longer delays in more advanced stages.
Conclusion:
One-third of patients experienced a delayed TTI, primarily due to care coordination issues and lack of social support. These findings highlight the need for improved multidisciplinary communication and patient support mechanisms, suggesting potential areas for quality improvement in HNSCC treatment management.
Creating a sustainable residency research program is necessary to develop a sustainable research pipeline, as highlighted by the recent Society for Academic Emergency Medicine 2024 Consensus Conference. We sought to describe the implementation of a novel, immersive research program for first-year emergency medicine residents. We describe the curriculum development, rationale, implementation process, and lessons learned from the implementation of a year-long research curriculum for first-year residents. We further evaluated resident perception of confidence in research methodology, interest in research, and the importance of their research experience through a 32-item survey. In two cohorts, 25 first-year residents completed the program. All residents met their scholarly project requirements by the end of their first year. Two conference abstracts and one peer-reviewed manuscript were accepted for publication, and another manuscript is currently under review. Survey responses indicated an increase in residents’ perceived confidence in research methodology, but this finding was limited by the small sample size. In summary, this novel resident research curriculum demonstrated a standardized, reproducible, and sustainable approach to providing residents with an immersive research program.
Involuntary celibates (“incels”) are men who desire romantic or sexual partners but purportedly cannot attain them. Their ideology – the Blackpill – holds that their exclusion from successful romantic and sexual relationships is due almost entirely to their relative unattractiveness. Furthermore, the consequences of being an unattractive man bleed over into other aspects of their lives, marring their interpersonal relationships, job prospects, and overall well-being. Blaming women as the chief architects of their unhappiness, incels sometimes commit mass acts of violent retribution. In this chapter, we explicate the incel ideology; explore the interrelated phenomena of social exclusion, self-verification, and identity fusion among incels; describe who incels are; and provide a framework for de-fusing incels from the group.
The Permian–Triassic climate crisis can provide key insights into the potential impact of horizon threats to modern-day biodiversity. This crisis coincided with the same extensive environmental changes that threaten modern marine ecosystems (i.e., thermal stress, deoxygenation and ocean acidification), but the primary drivers of extinction are currently unknown. To understand which factors caused extinctions, we conducted a data analysis to quantify the relationship (anomalies, state-shifts and trends) between geochemical proxies and the fossil record at the most intensively studied locality for this event, the Meishan section, China. We found that δ¹⁸O_apatite (a paleotemperature proxy) and δ¹¹⁴/¹¹⁰Cd (a primary productivity proxy) best explain changes in species diversity and species composition in Meishan’s paleoequatorial setting. These findings suggest that the physiological stresses induced by ocean warming and nutrient availability played a predominant role in driving equatorial marine extinctions during the Permian–Triassic event. This research enhances our understanding of the interplay between environmental changes and extinction dynamics during a past climate crisis, presenting an outlook for extinction threats under the worst-case “Shared Socioeconomic Pathways (SSP5–8.5)” scenario.
Profiling patients on a proposed ‘immunometabolic depression’ (IMD) dimension, described as a cluster of atypical depressive symptoms related to energy regulation and immunometabolic dysregulations, may optimise personalised treatment.
Aims
To test the hypothesis that baseline IMD features predict poorer treatment outcomes with antidepressants.
Method
Data on 2551 individuals with depression across the iSPOT-D (n = 967), CO-MED (n = 665), GENDEP (n = 773) and EMBARC (n = 146) clinical trials were used. Predictors included baseline severity of atypical energy-related symptoms (AES), body mass index (BMI) and C-reactive protein levels (CRP, three trials only) separately and aggregated into an IMD index. Mixed models on the primary outcome (change in depressive symptom severity) and logistic regressions on secondary outcomes (response and remission) were conducted for the individual trial data-sets and pooled using random-effects meta-analyses.
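For reference, the pooling step can be sketched with a DerSimonian–Laird random-effects meta-analysis of per-trial coefficients; the example betas and standard errors below are placeholders, not values from the trials analysed here.

```python
# Hedged sketch of random-effects pooling (DerSimonian-Laird).
import numpy as np

def dersimonian_laird(betas, ses):
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                               # fixed-effect weights
    beta_fe = np.sum(w * betas) / np.sum(w)
    q = np.sum(w * (betas - beta_fe)**2)           # Cochran's Q
    dfree = len(betas) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - dfree) / c)               # between-trial variance
    w_re = 1.0 / (ses**2 + tau2)
    beta_pooled = np.sum(w_re * betas) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - dfree) / q) if q > 0 else 0.0
    return beta_pooled, se_pooled, i2

print(dersimonian_laird([0.05, 0.10, 0.02], [0.04, 0.06, 0.05]))
```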
Results
Although AES severity and BMI did not predict changes in depressive symptom severity, higher baseline CRP predicted smaller reductions in depressive symptoms (n = 376, β_pooled = 0.06, P = 0.049, 95% CI 0.0001–0.12, I² = 3.61%); this was also found for an IMD index combining these features (n = 372, β_pooled = 0.12, s.e. = 0.12, P = 0.031, 95% CI 0.01–0.22, I² = 23.91%), with a higher – but still small – effect size compared with CRP. Confining analyses to selective serotonin reuptake inhibitor users indicated larger effects of CRP (β_pooled = 0.16) and the IMD index (β_pooled = 0.20). Baseline IMD features, both separately and combined, did not predict response or remission.
Conclusions
Depressive symptoms of people with more IMD features improved less when treated with antidepressants. However, clinical relevance is limited owing to small effect sizes and inconsistent associations. Whether these patients would benefit more from treatments targeting immunometabolic pathways remains to be investigated.
It has been posited that alcohol use may confound the association between greater concussion history and poorer neurobehavioral functioning. However, while greater alcohol use is positively correlated with neurobehavioral difficulties, the association between alcohol use and concussion history is not well understood. Therefore, this study investigated the cross-sectional and longitudinal associations between cumulative concussion history, years of contact sport participation, and health-related/psychological factors with alcohol use in former professional football players across multiple decades.
Participants and Methods:
Former professional American football players completed general health questionnaires in 2001 and 2019, including demographic information, football history, concussion/medical history, and health-related/psychological functioning. Alcohol use frequency and amount were reported for three timepoints: during professional career (collected retrospectively in 2001), 2001, and 2019. For the during-professional-career and 2001 timepoints, alcohol use frequency included none, 1-2, 3-4, or 5-7 days/week, while amount included none, 1-2, 3-5, 6-7, or 8+ drinks/occasion. For 2019, frequency included never, monthly or less, 2-4 times/month, 2-3 times/week, or >4 times/week, while amount included none, 1-2, 3-4, 5-6, 7-9, or 10+ drinks/occasion. Scores on a screening measure for Alcohol Use Disorder (CAGE) were also available for the during-professional-career and 2001 timepoints. Concussion history was recorded in 2001 and binned into five groups: 0, 1-2, 3-5, 6-9, 10+. Depression and pain interference were assessed via PROMIS measures at all timepoints. Sleep disturbance was assessed in 2001 via a separate instrument and with the PROMIS Sleep Disturbance measure in 2019. Spearman’s rho correlations tested associations between concussion history and years of sport participation with alcohol use across timepoints, and whether poor health functioning (depression, pain interference, sleep disturbance) in 2001 and 2019 was associated with alcohol use both within and between timepoints.
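As a minimal illustration of the correlational analyses just described, the sketch below computes one Spearman's rho between binned concussion history and an ordinal alcohol-use variable; the file and column names are assumed.

```python
# Hedged sketch, not the authors' analysis code.
import pandas as pd
from scipy.stats import spearmanr

d = pd.read_csv("former_players.csv")            # assumed questionnaire data
rho, p = spearmanr(d["concussion_group"], d["alcohol_freq_2001"], nan_policy="omit")
print(f"rho = {rho:.3f}, p = {p:.3f}")
```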
Results:
Among the 351 participants (M_age = 47.86 [SD = 10.18] in 2001), there were no significant associations of concussion history or years of contact sport participation with CAGE scores or alcohol use frequency/amount during professional career, 2001, or 2019 (rhos = -.072 to .067, ps > .05). In 2001, greater depressive symptomology and sleep disturbance were related to higher CAGE scores (rho = .209, p < .001; rho = .176, p < .001, respectively), while greater depressive symptomology, pain interference, and sleep disturbance were related to higher alcohol use frequency (rho = .176, p = .002; rho = .109, p = .045; rho = .132, p = .013, respectively) and amount/occasion (rho = .215, p < .001; rho = .127, p = .020; rho = .153, p = .004, respectively). In 2019, depressive symptomology, pain interference, and sleep disturbance were not related to alcohol use (rhos = -.047 to .087, ps > .05). Between timepoints, more sleep disturbance in 2001 was associated with higher alcohol amount/occasion in 2019 (rho = .115, p = .036).
Conclusions:
Increased alcohol intake has been theorized to be a consequence of greater concussion history, and as such, thought to confound associations between concussion history and neurobehavioral function later in life. Our findings indicate concussion history and years of contact sport participation were not significantly associated with alcohol use cross-sectionally or longitudinally, regardless of alcohol use characterization. While higher levels of depression, pain interference, and sleep disturbance in 2001 were related to greater alcohol use in 2001, they were not associated cross-sectionally in 2019. Results support the need to concurrently address health-related and psychological factors in the implementation of alcohol use interventions for former NFL players, particularly earlier in the sport discontinuation timeline.