The impact of chronic pain and opioid use on cognitive decline and mild cognitive impairment (MCI) is unclear. We investigated these associations in early older adulthood, considering different definitions of chronic pain.
Methods:
Men in the Vietnam Era Twin Study of Aging (VETSA; n = 1,042) underwent cognitive testing and medical history interviews at average ages 56, 62, and 68. Chronic pain was defined using pain intensity and interference ratings from the SF-36 over 2 or 3 waves (categorized as mild versus moderate-to-severe). Opioid use was determined by self-reported medication use. Amnestic and non-amnestic MCI were assessed using the Jak-Bondi approach. Mixed models and Cox proportional hazards models were used to assess associations of pain and opioid use with cognitive decline and risk for MCI.
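For readers unfamiliar with these two model families, the sketch below shows how a mixed model for cognitive change and a Cox model for time to MCI can be specified in Python. It is a minimal illustration only: the file and column names (e.g., exec_fn, pain_severity) are hypothetical placeholders, not the study's variables or code, and the twin clustering is simplified to a single random intercept.

```python
# Sketch of the two model families described above, using hypothetical
# column names; not the authors' actual analysis code.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("vetsa_long.csv")  # hypothetical long-format file

# Mixed model: executive function over centered age, with a pain-by-time
# effect and a random intercept per participant (twin pairing simplified).
mixed = smf.mixedlm(
    "exec_fn ~ age_c * pain_severity + opioid_use",
    data=df, groups=df["subject_id"],
).fit()
print(mixed.summary())

# Cox model: time to MCI as a function of pain severity and opioid use,
# using one baseline record per participant.
baseline = df.groupby("subject_id").first().reset_index()
cph = CoxPHFitter()
cph.fit(
    baseline[["time_to_mci", "mci_event", "pain_severity", "opioid_use"]],
    duration_col="time_to_mci", event_col="mci_event",
)
cph.print_summary()
```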
Results:
Moderate-to-severe, but not mild, chronic pain intensity (β = −.10) and interference (β = −.23) were associated with greater declines in executive function. Moderate-to-severe chronic pain intensity (HR = 1.75) and interference (HR = 3.31) were associated with a higher risk of non-amnestic MCI. Opioid use was associated with a faster decline in verbal fluency (β = −.18) and a higher risk of amnestic MCI (HR = 1.99). There were no significant interactions between chronic pain and opioid use on cognitive decline or MCI risk (all p-values > .05).
Discussion:
Moderate-to-severe chronic pain intensity and interference were related to executive function decline and greater risk of non-amnestic MCI, whereas opioid use was related to verbal fluency decline and greater risk of amnestic MCI. Lowering chronic pain severity while reducing opioid exposure may help clinicians mitigate later cognitive decline and dementia risk.
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
Commercial targeted sprayer systems allow producers to reduce herbicide inputs but risk leaving emerging weeds untreated. Currently, targeted applications with the John Deere system offer five spray sensitivity settings, and no published literature discusses the effects of these settings on detecting and spraying weeds of varying species, sizes, and positions in crops. Research was conducted in Arkansas, Illinois, Indiana, Mississippi, and North Carolina on plantings of corn, cotton, and soybean to determine how various factors might influence the ability of targeted applications to treat weeds. These data included 21 weed species aggregated into six classes, with heights and widths each ranging from 0.25 to 25 cm and densities from 0.04 to 14.3 plants m−2. Crop and weed density did not influence the likelihood of treating the weeds. As expected, the sensitivity setting altered the ability to treat weeds. Targeted applications (across sensitivity settings, at median weed height and width and a density of 2.4 plants m−2) resulted in a treatment success of 99.6% to 84.4% for Convolvulaceae, 99.1% to 68.8% for decumbent broadleaf weeds, 98.9% to 62.9% for Malvaceae, 99.1% to 70.3% for Poaceae, 98.0% to 48.3% for Amaranthaceae, and 98.5% to 55.8% for yellow nutsedge. Reducing the sensitivity setting reduced the ability to treat weeds. Weed size aided targeted application success, with larger weeds being more readily detected and therefore treated. Based on these findings, various conditions can affect the outcome of targeted multinozzle applications, and the analyses highlight some of the parameters to consider when using these technologies.
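As a hedged illustration of how treatment success of this kind could be related to sensitivity setting and weed characteristics, a binomial GLM of the following form might be fit; the column names are hypothetical and this is not the analysis reported above.

```python
# Illustrative binomial GLM: probability that a weed is treated as a
# function of sensitivity setting, weed class, and weed size.
# Hypothetical column names; not the published analysis.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

weeds = pd.read_csv("targeted_spray_trials.csv")  # hypothetical file
fit = smf.glm(
    "treated ~ C(sensitivity) + C(weed_class) + height_cm + width_cm",
    data=weeds,
    family=sm.families.Binomial(),
).fit()
print(fit.summary())  # odds of treatment by setting, class, and size
```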
Selective serotonin reuptake inhibitors (SSRIs) have been associated with increased risk of osteoporosis, and sertraline may be more potent than citalopram in this regard. Here, target trial emulation was used to investigate whether sertraline, citalopram and escitalopram (the S-enantiomer of citalopram) differentially affect the risk of osteoporosis. Subsequently, it was examined whether SSRIs increase the risk of osteoporosis in a dose-response-like manner.
Methods:
Danish nationwide registers were used to identify all individuals who initiated treatment for depression with sertraline, citalopram, or escitalopram between January 1, 2007, and March 1, 2019. These individuals were followed until development of osteoporosis, death, or end of follow-up. Cox proportional hazards regression was used to adjust for relevant baseline covariates to emulate randomised treatment allocation and compare the rate of osteoporosis among individuals treated with sertraline, citalopram, or escitalopram. Subsequently, the cumulative dose of sertraline, citalopram, and escitalopram was calculated, and Cox proportional hazards regression was used to assess dose-response-like relationships with osteoporosis.
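A minimal sketch of the covariate-adjusted Cox comparison, assuming a flat cohort extract with dummy-coded treatment groups and sertraline as the omitted reference; all file and column names are hypothetical.

```python
# Sketch of the adjusted Cox comparison described above; column names
# are hypothetical and sertraline is the (omitted) reference group.
import pandas as pd
from lifelines import CoxPHFitter

cohort = pd.read_csv("ssri_cohort.csv")  # hypothetical registry extract
cols = ["followup_years", "osteoporosis", "citalopram", "escitalopram",
        "age", "sex", "comorbidity_score"]
cph = CoxPHFitter()
cph.fit(cohort[cols], duration_col="followup_years",
        event_col="osteoporosis")
cph.print_summary()  # adjusted hazard ratios vs. the sertraline reference
```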
Results:
We identified 27,280, 65,529, and 17,703 individuals initiating treatment with sertraline, citalopram, and escitalopram, respectively. There was no material or statistically significant differential risk of osteoporosis between these groups (adjusted hazard rate ratio, aHRR = 0.98 for citalopram versus sertraline and aHRR = 0.94 for escitalopram versus sertraline). The results were not indicative of the SSRIs having a dose-response-like effect on osteoporosis risk.
Conclusions:
Sertraline, citalopram and escitalopram do not appear to differentially affect the risk of osteoporosis. The lack of clear dose-response-like relationships suggests that they do not have a causal effect on osteoporosis risk.
Partial remission after major depressive disorder (MDD) is common and a robust predictor of relapse. However, it remains unclear to what extent preventive psychological interventions reduce depressive symptomatology and relapse risk after partial remission. We aimed to identify variables predicting relapse and to determine whether, and for whom, psychological interventions are effective in preventing relapse, reducing (residual) depressive symptoms, and increasing quality of life among individuals in partial remission. This preregistered (CRD42023463468) systematic review and individual participant data meta-analysis (IPD-MA) pooled data from 16 randomized controlled trials (n = 705 partial remitters) comparing psychological interventions to control conditions, using 1- and 2-stage IPD-MA. Among partial remitters, baseline clinician-rated depressive symptoms (p = .005) and prior episodes (p = .012) predicted relapse. Psychological interventions were associated with reduced relapse risk over 12 months (hazard ratio [HR] = 0.60, 95% confidence interval [CI] 0.43–0.84) and significantly lowered posttreatment depressive symptoms (Hedges’ g = 0.29, 95% CI 0.04–0.54), with sustained effects at 60 weeks (Hedges’ g = 0.33, 95% CI 0.06–0.59), compared to nonpsychological interventions. However, interventions did not significantly improve quality of life at 60 weeks (Hedges’ g = 0.26, 95% CI −0.06 to 0.58). No moderators of relapse prevention efficacy were found. Men, older individuals, and those with higher baseline symptom severity experienced greater reductions in symptomatology at 60 weeks. Psychological interventions for individuals with partially remitted depression reduce relapse risk and residual symptomatology, with efficacy generalizing across patient characteristics and treatment types. This suggests that psychological interventions are a recommended treatment option for this patient population.
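For reference, Hedges' g (the effect size reported above) is Cohen's d with a small-sample correction. A minimal implementation follows; the example values are invented for illustration and are not study data.

```python
# Minimal sketch of the Hedges' g computation used in such meta-analyses;
# the numbers below are made up for illustration, not study data.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample correction."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d with pooled SD
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # Hedges' correction factor
    return j * d

# Example: control vs. intervention posttreatment depression scores.
print(hedges_g(14.2, 11.9, 8.0, 7.5, 60, 62))  # ~0.29
```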
Narcolepsy is a chronic neurological disorder characterized by excessive daytime sleepiness (EDS), among other symptoms. Previous studies of narcolepsy have largely relied on quantitative methods, providing limited insight into the patient experience. This study used qualitative interviews to better understand this rare condition.
Methods
Patients with narcolepsy (types 1 [NT1] and 2 [NT2]) were recruited using convenience and snowball sampling. Trained qualitative researchers conducted hour-long, individual interviews. Interview transcripts were coded and thematically analyzed using inductive and deductive approaches.
Results
Twenty-two adults with narcolepsy (NT1=12; NT2=10) participated (average age: NT1=35; NT2=44). Most were female (NT1=83%; NT2=70%) and white (NT1=75%; NT2=60%). Average times since diagnosis were 7 years (NT1) and 11 years (NT2).
At disease onset, symptoms experienced included EDS (NT1=83%; NT2=80%)—sometimes involving sleep attacks (NT1=35%; NT2=50%)—fatigue (NT1=42%; NT2=30%), oversleeping (NT1=33%; NT2=20%), and cataplexy (NT1=42%). Participants sought a diagnosis from healthcare professionals including sleep specialists, neurologists, pulmonologists, psychiatrists, and primary care physicians. Many participants reported receiving a narcolepsy diagnosis >10 years after symptom onset (NT1=50%; NT2=60%). During that time, patients reported misdiagnoses, including depression, sleep apnea, and attention-deficit/hyperactivity disorder.
Common symptoms included EDS (NT1=100%; NT2=90%), cognitive impairment (NT1=92%; NT2=100%), and fatigue (NT1=75%; NT2=90%). All participants with NT1 reported cataplexy. Participants rated these symptoms as among the most bothersome.
Conclusions
Study results provide descriptions of narcolepsy symptoms and the often challenging journey toward seeking a diagnosis. By using patient-centered, qualitative methods, this study fills a gap by providing additional insights into the patient experience of narcolepsy.
Formulas are derived by which, given the factor loadings and the internal reliability of a test of unit length, the following estimates can be made: (1) the common-factor loadings for a similar (homogeneous) test of length n; (2) the number of times (n) that a test needs to be lengthened homogeneously to achieve a factor loading of a desired magnitude; and (3) the correlation between two tests, either or both of which have been altered in length, as a function of (a) the new factor loadings in the altered tests or (b) the original loadings in the unit-length tests. The appropriate use of the derived formulas depends upon the fulfillment of the four assumptions enumerated.
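In common notation (a reconstruction from the abstract, not the paper's own symbols), formula (1) and its inversion for (2) can be written as follows, where a_1 is the unit-length factor loading, r_11 the internal reliability, and n the number of unit lengths:

```latex
% Reconstruction in common notation; the paper's symbols may differ.
% a_1 = unit-length loading, r_{11} = internal reliability, n = length.
\[
  a_n \;=\; \frac{\sqrt{n}\, a_1}{\sqrt{1 + (n-1)\, r_{11}}}
\]
% Solving for n gives the lengthening needed to reach a desired a_n:
\[
  n \;=\; \frac{a_n^{2}\,(1 - r_{11})}{a_1^{2} - a_n^{2}\, r_{11}}
\]
```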
Two current methods of deriving common-factor scores from tests are briefly examined and rejected. One of these estimates a score from a multiple-regression equation with as many terms as there are tests in the battery. The other limits the equation to a few tests heavily saturated with the desired factor, with or without tests used to suppress the undesired factors. In the proposed methods, the single best test for each common factor is the starting point. Such a test ordinarily has a very few undesired factors to be suppressed, frequently only one. The suppression test should be univocal, or nearly so. Fortunately, there are relatively univocal tests for factors that commonly require suppression. Equations are offered by which the desired-factor test and a single suppression test can be weighted in order to achieve one or more objectives. Among the objectives are (1) maximizing the desired factor variance, (2) minimizing the undesired factor variance, (3) a compromise, in which the undesired variance is materially reduced without loss in desired variance, and (4) a change to any selected ratio of desired to undesired variance. A more generalized solution is also suggested. The methods can be extended in part to the suppression of more than one factor. Equations are derived for the suppression of two factors.
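The logic of objective (2) can be made concrete with a two-factor sketch in notation of my own (the paper's equations are more general):

```latex
% Illustration of objective (2), minimizing undesired variance, in my
% own notation (not the paper's). Let the desired-factor test X_d load
% a_d on the desired factor and b_d on the undesired factor, and let a
% nearly univocal suppressor X_s load b_s on the undesired factor.
% The composite  S = X_d + w X_s  has undesired loading  b_d + w b_s,
% which vanishes for
\[
  w \;=\; -\,\frac{b_d}{b_s},
\]
% at the cost of the suppressor's added unique variance; the other
% objectives trade off the undesired variance against that cost.
```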
Creating a residency research program is necessary to develop a sustainable research pipeline, as highlighted by the recent Society for Academic Emergency Medicine 2024 Consensus Conference. We sought to describe the implementation of a novel, immersive research program for first-year emergency medicine residents. We describe the curriculum development, rationale, implementation process, and lessons learned from a year-long research curriculum for first-year residents. We further evaluated residents’ confidence in research methodology, interest in research, and the perceived importance of their research experience through a 32-item survey. Across two cohorts, 25 first-year residents completed the program. All residents met their scholarly project requirements by the end of their first year. Two conference abstracts and one peer-reviewed manuscript were accepted, and another manuscript is currently under review. Survey responses indicated an increase in residents’ perceived confidence in research methodology, though interpretation is limited by the small sample size. In summary, this novel resident research curriculum demonstrated a standardized, reproducible, and sustainable approach to providing residents with an immersive research experience.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. Neither the preregistered analyses with North American and UK English samples nor exploratory analyses with a larger sample found evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) was initiated by the US government to rapidly develop and test vaccines and therapeutics against COVID-19 in 2020. The ACTIV Therapeutics-Clinical Working Group selected ACTIV trial teams and clinical networks to expeditiously develop and launch master protocols based on therapeutic targets and patient populations. The suite of clinical trials was designed to collectively inform therapeutic care for COVID-19 outpatient, inpatient, and intensive care populations globally. In this report, we highlight challenges, strategies, and solutions around clinical protocol development and regulatory approval to document our experience and propose plans for future similar healthcare emergencies.
It is unclear how extracorporeal membrane oxygenation use varies across paediatric cardiac surgical programmes and how it relates to post-operative mortality. We aimed to determine hospital-level variation in post-operative extracorporeal membrane oxygenation use and its association with case-mix adjusted mortality.
Methods:
Retrospective analysis of 37 hospitals contributing to the Pediatric Cardiac Critical Care Consortium clinical registry from 1 August 2014 to 31 December 2019. Hospitalisations including cardiothoracic surgery and post-operative admission to paediatric cardiac ICUs were included. Two-level multivariable logistic regression with hospital random effect was used to determine case-mix adjusted post-operative extracorporeal membrane oxygenation use rates and in-hospital mortality. Hospitals were grouped into extracorporeal membrane oxygenation use tertiles, and mortality was compared across tertiles.
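A sketch of such a two-level model in Python, using a variational-Bayes mixed logistic regression with a hospital random intercept; the column names are hypothetical and the registry analysis itself is not reproduced here.

```python
# Sketch of a two-level logistic model with a hospital random effect,
# using hypothetical column names; not the registry analysis.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

cases = pd.read_csv("pc4_hospitalisations.csv")  # hypothetical extract
model = BinomialBayesMixedGLM.from_formula(
    "ecmo ~ stat_category + age_group + preop_risk",  # case-mix terms
    {"hospital": "0 + C(hospital_id)"},               # random intercept
    cases,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```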
Results:
There were 43,640 eligible surgical hospitalisations; 1397 (3.2%) included at least one post-operative extracorporeal membrane oxygenation run. Case-mix adjusted extracorporeal membrane oxygenation rates varied more than sevenfold (0.9–6.9%) across hospitals, and adjusted mortality varied 10-fold (0–5.5%). Extracorporeal membrane oxygenation rates were 2.0%, 3.5%, and 5.2% for low, middle, and high extracorporeal membrane oxygenation use tertiles, respectively (p < 0.0001), and mortality rates were 1.9%, 3.0%, and 3.1%, respectively (p < 0.0001). High extracorporeal membrane oxygenation use hospitals were more likely to initiate extracorporeal membrane oxygenation support intraoperatively (1.6% vs. 0.6% low and 1.1% middle, p < 0.0001). Extracorporeal membrane oxygenation indications were similar across hospital tertiles. When extracorporeal cardiopulmonary resuscitation was excluded, variation in extracorporeal membrane oxygenation use rates persisted (1.5%, 2.6%, 3.8%; p < 0.001).
Conclusions:
There is hospital variation in adjusted post-operative extracorporeal membrane oxygenation use after paediatric cardiac surgery and a significant association with adjusted post-operative mortality. These findings suggest that post-operative extracorporeal membrane oxygenation use could be a complementary quality metric to mortality to assess performance of cardiac surgical programmes.
Background: External comparisons of antimicrobial use (AU) may be more informative if adjusted for encounter characteristics. Optimal methods to define input variables for encounter-level risk-adjustment models of AU are not established.
Methods: This retrospective analysis of electronic health record data included 50 US hospitals in 2020-2021. We used NHSN definitions for all-antibacterial days of therapy (DOT), including adult and pediatric encounters with at least 1 day present in inpatient locations. We assessed 4 methods to define input variables: 1) diagnosis-related group (DRG) categories by Yu et al., 2) adjudicated Elixhauser comorbidity categories by Goodman et al., 3) all Clinical Classification Software Refined (CCSR) diagnosis and procedure categories, and 4) adjudicated CCSR categories in which codes not appropriate for AU risk-adjustment were excluded by expert consensus, requiring review of 867 codes over 4 months to attain consensus. Data were split randomly, stratified by bed size, as follows: 1) a training dataset including two-thirds of encounters among two-thirds of hospitals; 2) an internal testing set including one-third of encounters within training hospitals; and 3) an external testing set including the remaining one-third of hospitals. We used a gradient-boosted machine (GBM) tree-based model and a two-stage approach to first identify encounters with zero DOT, then estimate DOT among those with >0.5 probability of receiving antibiotics. Accuracy was assessed using mean absolute error (MAE) in the testing datasets. Correlation plots compared model estimates and observed DOT among testing datasets. The top 20 most influential variables were defined using modeled variable importance.
Results: Our datasets included 629,445 training, 314,971 internal testing, and 419,109 external testing encounters. Demographic data included 41% male, 59% non-Hispanic White, 25% non-Hispanic Black, 9% Hispanic, and 5% pediatric encounters. DRG was missing in 29% of encounters. MAE was lower in pediatrics than in adults and lowest for models incorporating CCSR inputs (Figure 1). Performance in internal and external testing was similar, though the Goodman/Elixhauser variable strategies were less accurate in external testing and underestimated long-DOT outliers (Figure 2). Agnostic and adjudicated CCSR model estimates were highly correlated, and their influential-variables lists were similar (Figure 3).
Conclusion: Larger numbers of CCSR diagnosis and procedure inputs improved risk-adjustment model accuracy compared with prior strategies. Variable importance and accuracy were similar for agnostic and adjudicated approaches. However, maintaining adjudications by experts would require significant time and could introduce personal bias. If these findings are confirmed, the need for expert adjudication of input variables should be reconsidered.
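A minimal sketch of the two-stage ("hurdle") idea using scikit-learn gradient boosting; the study's GBM implementation and inputs are not specified here, and the file and feature names are hypothetical.

```python
# Sketch of the two-stage hurdle approach described above, using
# scikit-learn gradient boosting; hypothetical file and feature names.
import numpy as np
import pandas as pd
from sklearn.ensemble import (GradientBoostingClassifier,
                              GradientBoostingRegressor)

enc = pd.read_csv("encounters.csv")   # hypothetical, numeric features
X = enc.drop(columns=["dot"])         # CCSR/DRG-style input variables
y = enc["dot"]                        # antibacterial days of therapy

# Stage 1: probability that an encounter has any antibacterial DOT.
clf = GradientBoostingClassifier().fit(X, y > 0)
p_any = clf.predict_proba(X)[:, 1]

# Stage 2: expected DOT, trained on encounters with DOT > 0 and applied
# only where the stage-1 probability exceeds 0.5.
reg = GradientBoostingRegressor().fit(X[y > 0], y[y > 0])
pred = np.where(p_any > 0.5, reg.predict(X), 0.0)

mae = np.mean(np.abs(pred - y))       # accuracy metric reported above
```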
Disclosure: Elizabeth Dodds Ashley: Advisor- HealthTrackRx. David J Weber: Consultant on vaccines: Pfizer; DSMB chair: GSK; Consultant on disinfection: BD, GAMA, PDI, Germitec
The authors report on ancient DNA data from two human skeletons buried within the chancel of the 1608–1616 church at the North American colonial settlement of Jamestown, Virginia. Available archaeological, osteological and documentary evidence suggests that these individuals are Sir Ferdinando Wenman and Captain William West, kinsmen of the colony's first Governor, Thomas West, Third Baron De La Warr. Genomic analyses of the skeletons identify unexpected maternal relatedness, as both carried the mitochondrial haplogroup H10e. In this unusual case, aDNA prompted further historical research that led to the discovery of illegitimacy in the West family, an aspect of identity omitted, likely intentionally, from genealogical records.
Inhibitory control plays an important role in children’s cognitive and socioemotional development, including their psychopathology. It has been established that contextual factors such as socioeconomic status (SES) and parents’ psychopathology are associated with children’s inhibitory control. However, the relations between the neural correlates of inhibitory control and contextual factors have rarely been examined in longitudinal studies. In the present study, we used both event-related potential (ERP) components and time-frequency measures of inhibitory control to evaluate the neural pathways between contextual factors, including prenatal SES and maternal psychopathology, and children’s behavioral and emotional problems in a large sample of children (N = 560; 51.75% female; mean age = 7.13 years, range = 4–11 years). Results showed that theta power, which was positively predicted by prenatal SES and negatively related to children’s externalizing problems, mediated the longitudinal, negative relation between them. ERP amplitudes and latencies did not mediate the longitudinal association between prenatal risk factors (i.e., prenatal SES and maternal psychopathology) and children’s internalizing and externalizing problems. Our findings increase our understanding of the neural pathways linking early risk factors to children’s psychopathology.
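A minimal sketch of the product-of-coefficients mediation test implied above (SES → theta power → externalizing), with a bootstrap confidence interval for the indirect path; the variable names are hypothetical and this is not the study's analysis pipeline.

```python
# Sketch of a simple product-of-coefficients mediation analysis with a
# bootstrap CI; hypothetical variable names, not the study's pipeline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

kids = pd.read_csv("erp_sample.csv")  # hypothetical file

def indirect(d):
    # a-path: SES -> theta power; b-path: theta -> externalizing | SES.
    a = smf.ols("theta_power ~ prenatal_ses", d).fit().params["prenatal_ses"]
    b = (smf.ols("externalizing ~ theta_power + prenatal_ses", d)
            .fit().params["theta_power"])
    return a * b

boot = [indirect(kids.sample(frac=1, replace=True)) for _ in range(2000)]
ci = np.percentile(boot, [2.5, 97.5])  # bootstrap CI for indirect effect
print(indirect(kids), ci)
```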
Identifying thrombus formation in Fontan circulation has been highly variable, with reports between 17 and 33%. Initially, thrombus detection was mainly done through echocardiograms. Delayed-enhancement cardiac MRI is emerging as a more effective imaging technique for thrombus identification. This study aims to determine the prevalence of occult cardiac thrombosis in patients undergoing clinically indicated cardiac MRI.
Methods:
A retrospective chart review of children and adults in the Duke University Hospital Fontan registry who underwent delayed-enhancement cardiac MRI. Individuals were excluded if they never received a delayed-enhancement cardiac MRI or had insufficient data. Demographic characteristics, native heart anatomy, cardiac MRI measurements, and thromboembolic events were collected for all patients.
Results:
In total, 119 unique individuals met inclusion criteria, contributing a total of 171 scans. The median age at Fontan procedure was 3 (interquartile range 1, 4) years. The majority of patients had a dominant systemic right ventricle. Cardiac function was relatively unchanged from the first cardiac MRI to the third cardiac MRI. While 36.4% had a thrombotic event by history, only 0.5% (1 patient) had an intracardiac thrombus detected by delayed-enhancement cardiac MRI.
Conclusions:
Despite previous echocardiographic reports of high prevalence of occult thrombosis in patients with Fontan circulation, we found very low prevalence using delayed-enhancement cardiac MRI. As more individuals are reaching adulthood after requiring early Fontan procedures in childhood, further work is needed to develop thrombus-screening protocols as a part of anticoagulation management.
People with schizophrenia on average are more socially isolated, lonelier, have more social cognitive impairment, and are less socially motivated than healthy individuals. People with bipolar disorder also have social isolation, though typically less than that seen in schizophrenia. We aimed to disentangle whether the social cognitive and social motivation impairments observed in schizophrenia are a specific feature of the clinical condition v. social isolation generally.
Methods
We compared four groups (clinically stable patients with schizophrenia or bipolar disorder, individuals drawn from the community with self-described social isolation, and a socially connected community control group) on loneliness, social cognition, and approach and avoidance social motivation.
Results
Individuals with schizophrenia (n = 72) showed intermediate levels of social isolation, loneliness, and social approach motivation between the isolated (n = 96) and connected control (n = 55) groups. However, they showed significant deficits in social cognition compared to both community groups. Individuals with bipolar disorder (n = 48) were intermediate between the isolated and control groups for loneliness and social approach. They did not show deficits on social cognition tasks. Both clinical groups had higher social avoidance than both community groups.
Conclusions
The results suggest that social cognitive deficits in schizophrenia, and high social avoidance motivation in both schizophrenia and bipolar disorder, are distinct features of the clinical conditions and not byproducts of social isolation. In contrast, differences between clinical and control groups on levels of loneliness and social approach motivation were congruent with the groups' degree of social isolation.
The extremely toxic protein ricin is derived from castor beans and is a potential terrorist weapon. Adsorption to clays might minimize the environmental persistence and toxic effects of this toxin. Ricin adsorption to clay minerals was measured using batch adsorption isotherms, and enzyme-linked immunoassay methods were used to quantify aqueous ricin concentrations. Montmorillonite, sepiolite, and palygorskite effectively adsorbed ricin from aqueous solutions and yielded mostly Langmuir-type isotherms. The monolayer adsorption capacity from a Langmuir equation fit at pH 7 was 444 g ricin/kg for montmorillonite (SWy-2) but only 5.6 g ricin/kg for kaolinite (KGa-1b). Monolayer capacities for sepiolite (SepSp-1) and palygorskite (PFl-1) at pH 7 were 59.2 and 58.1 g ricin/kg, respectively. The high-charge montmorillonite (SAz-1) effectively adsorbed ricin at pH 7 but yielded a linear isotherm with K = 5530 L/kg. At pH 5, both montmorillonites (SWy-2 and SAz-1) yielded Langmuir-type isotherms, with monolayer capacities of 694 and 641 g ricin/kg, respectively. Clay samples with higher cation exchange capacities generally adsorbed more ricin, but adsorption also tracked specific surface area. X-ray diffraction of <2 μm SWy-2 treated with 470 g ricin/kg indicated expansion up to 34.6 Å at buffered pHs of 4 and 7, but not at pH 10; correspondingly, ricin adsorption was greatest at pH 4 and 7 and minimal at pH 10. Treatment with 1.41 kg of purified ricin/kg clay at pH 5 yielded a 35.3 Å peak and adsorption of ~1.2 kg ricin/kg, whereas similar treatment with lower-purity ricin yielded less expansion and lower adsorption. The 35.3 Å peak, interpreted as either a d002 or a d001 reflection, indicates a 70.6 Å or a 35.3 Å ricin/SWy-2 complex, respectively, implying that adsorption and air drying compressed the interlayer ricin molecules by 18 to 65%. Effective ricin adsorption by montmorillonite suggests that it could be used to minimize the toxic effects of dispersed ricin.
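The Langmuir fits reported above follow q = q_max·K·C / (1 + K·C), where q_max is the monolayer capacity and K the affinity constant. A short fitting sketch follows; the data points are invented for illustration and are not the measured isotherms.

```python
# Sketch of a Langmuir isotherm fit of the kind reported above; the
# data points are made up for illustration, not the measured isotherms.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """q = q_max * K * C / (1 + K * C); q in g ricin / kg clay."""
    return q_max * k * c / (1 + k * c)

c_eq = np.array([0.05, 0.1, 0.3, 0.6, 1.0, 2.0])   # aqueous ricin, g/L
q_ads = np.array([90, 150, 280, 360, 400, 430])    # adsorbed, g/kg

(q_max, k), _ = curve_fit(langmuir, c_eq, q_ads, p0=(450, 5))
print(q_max, k)  # monolayer capacity and affinity constant
```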
US forces used Agent Orange (AO) during the Vietnam War and continued to store and test it at other locations after the war. AO is a powerful herbicide containing dioxin, a highly toxic ingredient classified as a human carcinogen. The National Academies of Sciences, Engineering, and Medicine periodically review the literature on the health effects of AO exposure (AOE) and concluded in 2018 that there is sufficient evidence linking AO with a wide range of adverse health outcomes, including neurologic disorders (e.g., Parkinson’s disease). The VA maintains a list of medical disorders considered presumptive conditions related to AOE. More recently, AOE has been linked to nearly double the risk of receiving a dementia diagnosis compared with those without AOE. To our knowledge, no one has investigated the association of AOE with mild cognitive impairment (MCI), a condition thought to precede dementia.
Participants and Methods:
We examined men in three waves of the Vietnam Era Twin Study of Aging (VETSA). In wave 3, participants self-reported yes/no to the question of whether they ever had prolonged or serious AOE. MCI was diagnosed by the Jak-Bondi approach. Impairment was defined as 2+ tests within a cognitive domain that were more than 1.5 standard deviations below normative means after adjusting for premorbid cognitive ability. In mixed effects models, we tested the effect of AOE on MCI status. Models were adjusted for age, ethnicity, and non-independence within twin pairs.
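A compact sketch of the Jak-Bondi impairment rule as described above (2+ tests within a domain more than 1.5 SD below normative means, after premorbid adjustment); the data layout and names are hypothetical.

```python
# Sketch of the impairment rule described above: a domain is impaired
# when 2+ of its tests fall more than 1.5 SD below normative means
# (scores assumed already adjusted for premorbid ability). Layout is
# hypothetical: one row per participant, one adjusted z-score per test.
import pandas as pd

def domain_impairment(z_scores: pd.DataFrame,
                      tests_by_domain: dict) -> pd.DataFrame:
    flags = {}
    for domain, tests in tests_by_domain.items():
        n_below = (z_scores[tests] < -1.5).sum(axis=1)
        flags[domain] = n_below >= 2       # 2+ impaired tests in domain
    return pd.DataFrame(flags)             # participant x domain flags
```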
Results:
In wave 3, 12.6% (230) of 1167 participants reported AOE. Those with AOE data had mean ages of 51.1 (wave 1), 56.0 (wave 2), and 61.4 (wave 3). Those with data on both AOE and MCI numbered 861 (wave 1), 900 (wave 2), and 1121 (wave 3), and 766 had AOE and MCI data at all waves. AOE was significantly related to wave 2 MCI (p < .001), but not to wave 1 or wave 3 MCI. AOE was significantly associated with the number of time points at which someone met criteria for MCI (p = .011). Analyses were conducted on the six cognitive domains used to diagnose MCI, using available participants per wave. At all 3 waves, AOE was significantly associated with lower scores in processing speed (p = .003, p = .004, p = .005, respectively) and working memory (p < .001, p = .002, p = .008), and was significant at the first two waves and marginal at wave 3 for executive function (p < .001, p < .001, p = .050). There were two other significant associations [wave 2 memory (p = .038), wave 3 fluency (p = .024)]. The semantic fluency domain was unrelated to AOE at all waves.
Conclusions:
AOE was consistently associated with poorer processing speed, working memory, and executive function in males ages 51-61. It was also associated with the number of time points at which one met criteria for MCI in that age range, and with MCI in the mid-fifties. Findings support the idea that AO exposure earlier in life confers risk for greater cognitive decline and for developing MCI.
Traumatic brain injury is one of several recognized risk factors for cognitive decline and neurodegenerative disease. Currently, risk scores involving modifiable risk/protective factors for dementia have not incorporated head injury history as part of their overall weighted risk calculation. We investigated the association between the LIfestyle for BRAin Health (LIBRA) risk score with odds of mild cognitive impairment (MCI) diagnosis and cognitive function in older former National Football League (NFL) players, both with and without the influence of concussion history.
Participants and Methods:
Former NFL players, ages ≥ 50 (N=1050; mean age=61.1±5.4 years), completed a general health survey including self-reported medical history and ratings of function across several domains. LIBRA factors (weighted value) included cardiovascular disease (+1.0), hypertension (+1.6), hyperlipidemia (+1.4), diabetes (+1.3), kidney disease (+1.1), cigarette use history (+1.5), obesity (+1.6), depression (+2.1), social/cognitive activity (-3.2), physical inactivity (+1.1), low/moderate alcohol use (-1.0), and healthy diet (-1.7). Within Group 1 (n=761), logistic regression models assessed the association of LIBRA scores and the independent contribution of concussion history with the odds of MCI diagnosis. A modified-LIBRA score incorporated concussion history at the level at which planned contrasts showed significant associations across concussion history groups (0, 1-2, 3-5, 6-9, 10+). The weighted value for concussion history (+1.9) within the modified-LIBRA score was based on its proportional contribution to dementia risk relative to other LIBRA factors, as proposed by the 2020 Lancet Commission Report on Dementia Prevention. Associations of the modified-LIBRA score with odds of MCI and with cognitive function were assessed via logistic and linear regression, respectively, in a subset of the sample (Group 2; n=289) who also completed the Brief Test of Adult Cognition by Telephone (BTACT). Race was included as a covariate in all models.
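A sketch of the weighted-sum computation using the LIBRA weights listed above plus the +1.9 concussion term of the modified score; the indicator column names are hypothetical, and the single boolean concussion flag simplifies the graded concussion-history groups used in the study.

```python
# Sketch of the weighted LIBRA sum using the factor weights listed
# above; hypothetical indicator columns (1 = factor present). The
# boolean concussion flag simplifies the study's graded groups.
import pandas as pd

LIBRA_WEIGHTS = {
    "cardiovascular_disease": 1.0, "hypertension": 1.6,
    "hyperlipidemia": 1.4, "diabetes": 1.3, "kidney_disease": 1.1,
    "smoking_history": 1.5, "obesity": 1.6, "depression": 2.1,
    "social_cognitive_activity": -3.2, "physical_inactivity": 1.1,
    "low_moderate_alcohol": -1.0, "healthy_diet": -1.7,
}

def libra_score(row: pd.Series, concussion_history: bool = False) -> float:
    score = sum(w * row[f] for f, w in LIBRA_WEIGHTS.items())
    if concussion_history:       # modified-LIBRA concussion term (+1.9)
        score += 1.9
    return score
```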
Results:
The median LIBRA score in Group 1 was 1.6 (IQR = -1.0, 3.6). Standard and modified-LIBRA median scores were 1.1 (IQR = -1.3, 3.3) and 2.0 (IQR = -0.4, 4.6), respectively, within Group 2. In Group 1, LIBRA score was significantly associated with odds of MCI diagnosis (odds ratio [95% confidence interval] = 1.27 [1.19, 1.28], p < .001). Concussion history provided additional information beyond LIBRA scores and was independently associated with odds of MCI; specifically, odds of MCI were higher among those with 6-9 (OR = 2.54 [1.21, 5.32], p < .001) and 10+ (OR = 4.55 [2.21, 9.36], p < .001) concussions, compared with those with no prior concussions. Within Group 2, the modified-LIBRA score was associated with higher odds of MCI (OR = 1.61 [1.15, 2.25]) and incrementally improved model information (0.04 increase in Nagelkerke R2) above standard LIBRA scores in the same model. Modified-LIBRA scores were inversely associated with BTACT Executive Function (B = -0.53 [0.08], p = .002) and Episodic Memory scores (B = -0.53 [0.08], p = .002).
Conclusions:
Numerous modifiable risk/protective factors for dementia are reported in former professional football players, and incorporating concussion history may aid the multifactorial appraisal of cognitive decline risk and the identification of targets for prevention and intervention. Integration of multimodal biomarkers will advance this person-centered, holistic approach to dementia risk reduction, detection, and intervention.