Long-term exposure to the psychoactive ingredient in cannabis, delta-9-tetrahydrocannabinol (THC), has consistently been identified as a notable risk factor for schizophrenia. Additionally, cannabis is frequently used as a coping mechanism by individuals diagnosed with schizophrenia. Cannabis use in schizophrenia has been associated with greater severity of psychotic symptoms, non-compliance with medication, and increased relapse rates. Neuropsychological changes have also been implicated in long-term cannabis use and in the course of illness of schizophrenia. However, the impact of co-occurring cannabis use on cognitive functioning in individuals with schizophrenia is less thoroughly explored. The purpose of this meta-analysis was to examine whether neuropsychological test performance and symptoms in schizophrenia differ as a function of THC use status. A second aim was to examine whether symptom severity moderates the relationship between THC use and cognitive test performance among people with schizophrenia.
Participants and Methods:
Peer-reviewed articles comparing individuals with schizophrenia with and without cannabis use disorder (SZ SUD+; SZ SUD-) were selected from three scholarly databases: Ovid, Google Scholar, and PubMed. The following search terms were applied to identify studies for inclusion: neuropsychology, cognition, cognitive, THC, cannabis, marijuana, and schizophrenia. Eleven articles containing data on psychotic symptoms and neurocognition, with SZ SUD+ and SZ SUD- groups, were included in the final analyses. Five domains of neurocognition were identified across the included articles: Processing Speed, Attention, Working Memory, Verbal Learning and Memory, and Reasoning and Problem Solving. Positive and negative symptom data were derived from eligible studies using the Positive and Negative Syndrome Scale (PANSS), the Scale for the Assessment of Positive Symptoms (SAPS), the Scale for the Assessment of Negative Symptoms (SANS), the Self-Evaluation of Negative Symptoms (SNS), the Brief Psychiatric Rating Scale (BPRS), and Structured Clinical Interview for DSM Disorders (SCID) scores. Meta-analysis and meta-regression were conducted in R.
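The abstract states only that the meta-analysis and meta-regression were run in R (typically via a package such as metafor, though none is named). As an illustrative sketch of the core pooling step, the following Python code implements a standard DerSimonian-Laird random-effects model; the effect sizes and variances are hypothetical, not the study's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical Hedges' g values (SZ SUD+ vs SZ SUD-) and their variances
g = [0.12, -0.05, 0.30, 0.08]
v = [0.04, 0.06, 0.05, 0.03]
pooled, se, tau2 = dersimonian_laird(g, v)
z = pooled / se  # compare |z| to 1.96 for a two-sided test at alpha = .05
```

When between-study heterogeneity (Q) does not exceed its degrees of freedom, tau2 is truncated to zero and the random-effects estimate coincides with the fixed-effect estimate.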
Results:
No statistically significant differences were observed between SZ SUD+ and SZ SUD- across the cognitive domains of Processing Speed, Attention, Working Memory, Verbal Learning and Memory, and Reasoning and Problem Solving. Positive symptom severity, but not negative symptom severity, was found to moderate the relationship between THC use and processing speed. Neither positive nor negative symptom severity significantly moderated the relationship between THC use and the other cognitive domains.
Conclusions:
Positive symptoms moderated the relationship between cannabis use and processing speed among people with schizophrenia. The reasons for this are unclear, and require further exploration. Additional investigation is warranted to better understand the impact of THC use on other tests of neuropsychological performance and symptoms in schizophrenia.
Mild traumatic brain injury (mTBI) remains one of the most prevalent brain injuries, affecting approximately one in sixty Americans. Previous studies have shown an association between white matter integrity and aggression at chronic stages (6 or 12 months post-mTBI); however, the association between white matter axonal damage, neuropsychological outcomes, and elevated aggression across multiple stages of time-since-injury (TSI) is unclear. We hypothesized that functional connectivity between the default mode network (DMN), a key brain network involved in cognitive, self-reflective, and emotional processes, and other cortical regions would predict elevated aggression and emotional disturbances across multiple stages of recovery in mild TBI.
Participants and Methods:
Participants included healthy controls (HC: n=35 [15 male, 20 female], age M=24.40, SD=5.95) and individuals with mTBI (n=121 [43 male, 78 female], age M=24.76, SD=7.48). The study used a cross-sectional design with assessments at specific post-injury time points (2 weeks [2W] and 1, 3, 6, and 12 months [1M, 3M, 6M, 12M]). Participants completed a comprehensive neuropsychological battery and a neuroimaging session, including resting-state functional connectivity (FC). Here, we focus on the FC outcomes for the DMN. During the neuropsychological assessment, participants completed tests measuring learning and memory, speed of information processing, executive function, and attention. To predict neuropsychological performance from brain connectivity, we conducted a series of stepwise linear regression analyses with the 11 functional brain connections (extracted as Fisher's z-transformed correlations between regions) as predictors and each of the 13 neurocognitive factor scores as a separate outcome.
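The connectivity predictors above are Fisher z-transformed correlations. The transform, z = atanh(r), variance-stabilizes Pearson correlations so they behave better as regression predictors; a minimal illustration (the r value is arbitrary):

```python
import math

def fisher_z(r):
    """Fisher z-transform: variance-stabilizes a Pearson correlation r in (-1, 1)."""
    return math.atanh(r)  # equivalently 0.5 * log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a Fisher z value to a correlation."""
    return math.tanh(z)

r = 0.5
z = fisher_z(r)  # monotone in r, so ordering of connections is preserved
assert abs(inverse_fisher_z(z) - r) < 1e-12
```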
Results:
Consistent with our hypothesis, one predictor emerged as significant for the Total Sample (R = .187, R2 = .035, F = 5.55, p = .020): positive connectivity between the Right Inferior Frontal Gyrus and the PCC (seed) was associated with increased aggression across all participants (ß = .187, t = 2.36, p = .020). One predictor emerged as significant in the 2W group (R = .719, R2 = .518, F = 8.58, p = .019): greater negative (anticorrelated) connectivity between the Left Lateral Occipital Cortex (ß = -.719, t = -2.93, p = .019) and the PCC (seed) was associated with greater aggression at 2W; no predictors emerged at 1M or 3M. Individuals in the 6M group showed one significant predictor (R = .675, R2 = .455, F = 16.71, p = .001): greater positive connectivity between the Right Lateral Occipital Cortex (ß = .675, t = 4.09, p = .001) and the PCC (seed) was associated with greater aggression at 6M. No associations were evident at 12M.
Conclusions:
Overall, these findings suggest functional connectivity between the posterior hub of the DMN and cortical regions within the occipital cortex was predictive of higher aggression in individuals with mTBI. However, the direction of this connectivity differed at 2W versus 6M, suggesting a complex process of recovery that may contribute differentially to aggression in patients with mTBI. As these regions are involved in self-consciousness and visual perception, this may point toward future avenues for aiding in functional recovery of emotional dysregulation in patients with persistent post-concussion syndrome.
Parkinson’s disease (PD) is associated with metabolic disorders such as insulin resistance. Pharmacological interventions used to treat insulin resistance, such as GLP-1 agonists, may hold promise in the treatment of PD. The objective of this clinical trial was to assess the therapeutic effect of liraglutide on non-motor symptoms, including cognitive function and emotional well-being, and on quality of life for individuals with PD.
Participants and Methods:
In a single-center, randomized, double-blind, placebo-controlled trial, PD patients self-administered once-daily liraglutide injections (1.2 or 1.8 mg, as tolerated) or placebo in a 2:1 design for 52 weeks after titration. Primary outcomes were the adjusted differences in the OFF-state Movement Disorder Society Unified PD Rating Scale (MDS-UPDRS) part III, the Non-Motor Symptom Scale (NMSS), and the Mattis Dementia Rating Scale (MDRS-2). Secondary outcomes included quality of life scores (Parkinson’s Disease Questionnaire, PDQ-39) and other neuropsychological tests, including Delis-Kaplan Executive Function System (D-KEFS), Geriatric Depression Scale (GDS), and Parkinson’s Anxiety Scale (PAS) scores.
Results:
Sixty-three subjects were enrolled and randomized to liraglutide (n=42) or placebo (n=21). Mean age was 63.5 (9.8) and 64.2 (6.4) years for the liraglutide and placebo cohorts, respectively (p=0.78), and mean age at symptom onset was 58.9 (10.5) and 59.3 (7.5) years, respectively (p=0.86). At 54 weeks, NMSS scores had improved by 6.6 points in the liraglutide group and worsened by 6.5 points in the placebo group, a 13.1-point adjusted mean difference (p<0.05). Further analysis showed all nine NMSS sub-domain changes favoring the liraglutide group, with one (attention/memory) reaching statistical significance (p<0.05). Secondary outcome analyses revealed significant improvements in PDQ-39 (p<0.001) and Parkinson’s Anxiety Scale - Avoidance Behavior scores (p<0.05) in the treatment group. MDRS-2 sub-scores did not further differentiate the study groups, while D-KEFS letter fluency scores favored the placebo group (p<0.05).
Conclusions:
Treatment with liraglutide improved self-reported non-motor symptoms of PD, activities of daily living, and quality of life. These results corroborate similar outcomes reported with other GLP-1 agonists and support consideration of novel treatment options for individuals with PD. Notably, the absence of significant performance-based cognitive changes over the duration of the trial has several plausible explanations given participant-related baseline demographic and clinical factors. Implications for neuropsychologists will be discussed.
Many epilepsy syndromes are medically refractory, leading patients to be referred for surgical work-up to control their seizures and improve their quality of life (QOL). Although surgical treatments may reduce or stop seizures, many patients continue to present with declines in mood and/or cognition post-operatively. In addition, pre-operative QOL of patients with medically refractory epilepsy is impacted by executive function (EF). The present study aims to investigate the relationship between post-operative mood/QOL and pre-operative EF in adults with epilepsy. It was hypothesized that mood would remain stable or decline post-operatively; pre-operative EF would be a protective factor for mood decline and QOL.
Participants and Methods:
The sample consisted of 47 adult patients (57.4% female; age M = 34.02, SD = 11.59) with medically refractory epilepsy at the UCSF Epilepsy Center. Participants were included if they received surgical treatment for their epilepsy (42.6% right anterior temporal lobectomy [ATL], 46.8% left ATL, 2.1% laser ablation, 6.4% responsive neurostimulation, 2.1% multiple surgical interventions) and received both a pre- and post-surgical neuropsychological evaluation. Most patients were right-handed (95.7%). Mood and QOL were assessed at the pre- and post-operative evaluations using the Beck Depression Inventory-Second Edition (BDI-II), Beck Anxiety Inventory (BAI), and Quality of Life in Epilepsy-31 (QOLIE-31). Executive function was assessed using the Trail Making Test and the Delis-Kaplan Executive Function System (D-KEFS) subtests Color-Word Interference (CW-I) and Verbal Fluency. Descriptive statistics were obtained for each of the measures listed. Paired-samples t-tests were conducted between time A and time B to determine whether mood and QOL differed significantly. Two multiple regressions were conducted: one predicting post-operative depression and one predicting post-operative QOL, each from pre-operative EF.
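A paired-samples t-test of the kind described above reduces to a one-sample t-test on the pre/post difference scores. A minimal stdlib sketch with hypothetical scores (not the study data):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic on pre/post scores (e.g., BDI-II at times A and B)."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # mean difference / its SE
    return t, n - 1                                  # t statistic and degrees of freedom

# Hypothetical depression scores for 6 patients, pre- and post-surgery
pre  = [18, 22, 15, 20, 17, 25]
post = [14, 19, 15, 16, 13, 20]
t, df = paired_t(pre, post)  # positive t here indicates lower (improved) post scores
```

The t value would then be compared against the t distribution with n-1 degrees of freedom (in practice via a stats library) to obtain the p-value.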
Results:
At time A, both anxiety and depression were minimal (BDI-II M = 17.8, SD = 10.34; BAI M = 13, SD = 8.94). QOL was borderline clinically significant (QOLIE-31 M = 37.46, SD = 9.74). Depression at time B was positively correlated with depression at time A (r[45] = 0.316, p = 0.035). Paired-samples t-tests indicated that depression and QOL differed significantly between time A and time B (t[44] = 2.04, p = 0.047; t[31] = -3.34, p = 0.002), with improved scores post-operatively. Anxiety did not differ significantly across time points (t[39] = 1.20, p = 0.238). Multiple regression analyses indicated that pre-operative depression and EF did not predict post-operative depression (F(5,27) = 1.62, p = 0.189). Pre-operative EF (CW-I Inhibition-Switching), but not pre-operative depression, predicted post-operative QOL (F(4,24) = 3.13, p = .03, R2 = .343).
Conclusions:
Results were somewhat discrepant from prior research in that depression and QOL improved post-surgically. Notably, while the observed change in depression was statistically significant, it was not clinically significant according to the literature (Doherty et al., 2021). Pre-surgical inhibitory control predicted QOL, suggesting that EF may serve as a protective factor post-surgically. The present study did not include a measure of post-operative seizure freedom classification; therefore, future studies should investigate how seizure freedom classification impacts the relationship between mood, QOL, and cognitive outcomes.
The goal of the current study is to compare QoL between tumor grade levels (i.e., low vs. high) and to examine the relationship between QoL, cognition, and tumor grade.
Participants and Methods:
Participants were 156 individuals diagnosed with a brain tumor who completed a neuropsychological evaluation within an interdisciplinary brain tumor clinic (mean age = 51.67, SD = 15.0; mean education = 13.98 years, SD = 2.6; 59% male). An independent-samples t-test was used to compare participants’ reported overall quality of life (QoL) on the FACT-Br between tumor grade levels (i.e., high vs. low). Linear regression analysis was used to determine which cognitive variables were most predictive of QoL.
Results:
The independent-samples t-test showed that the low and high tumor grade groups did not differ significantly in total or sub-domain QoL. In the regression analysis, cognitive variables as measured by TMT B, HVLT delayed recall, and FAS accounted for significant variance in QoL in both the low and high grade tumor groups (low grade R2 = 0.21; high grade R2 = 0.19). However, TMT B emerged as a significant individual predictor of QoL only in the low grade group; performance on these same tasks did not significantly predict QoL for the high grade group.
Conclusions:
Tumor grade level (low vs. high) did not significantly affect overall QoL in our sample. Notably, cognitive performance on TMT B significantly predicted QoL for the low but not the high tumor grade group.
Previous research suggests that individuals with isolated Agenesis of the Corpus Callosum (AgCC) have cognitive and psychosocial deficits, including deficits in the complex processing of emotions (Anderson et al., 2017) and in the ability to verbally express emotional experiences (Paul et al., 2021). Additionally, research suggests individuals with AgCC show impaired recognition of the emotions of others (Symington et al., 2010), as well as a diminished ability to infer and describe the emotions of others (Renteria-Vazquez et al., 2022; Turk et al., 2010). However, the nature of the empathic abilities of individuals with AgCC remains empirically unclear. The capacity for empathetic feelings and situational recognition in persons with AgCC was tested using the Multifaceted Empathy Test (MET; Foell et al., 2018). We hypothesized that individuals with AgCC would show lower cognitive and affective empathy than neurotypical controls.
Participants and Methods:
Results from 50 neurotypical control participants recruited from MTurk Cloud were compared to responses from 19 participants with AgCC and normal-range FSIQ (>80), drawn from the individuals with AgCC involved with the Human Brain and Cognition Lab at the Travis Research Institute. The study was administered through an online version of the MET, which uses a series of photographs of individuals displaying an emotion. To measure cognitive empathy, participants are asked to pick the correct emotion from among three distractors for each item. To measure affective empathy, they are then asked on a sliding scale, “How much do you empathize with the person shown?” (1 = Not at all, 7 = Very much).
Results:
A MANOVA showed a trend toward an overall difference in empathic abilities between individuals with AgCC and controls, F(1, 67) = 2.59, p = .082, with persons with AgCC showing less empathy overall. Follow-up one-way ANOVAs showed that individuals with AgCC scored significantly lower in cognitive empathy, F(1, 67) = 4.63, p = .035, ηp2 = .065; however, affective empathy did not differ significantly between groups, F(1, 67) = .537, p = .466, ηp2 = .008.
Conclusions:
Results suggest that adults with AgCC have a diminished ability to give cognitive labels to the emotional states of others compared to neurotypical controls. However, contrary to our hypothesis, participants with AgCC had affective responses to the pictures of the emotional states of others that were similar to those of neurotypical controls. Recent research has shown that individuals with AgCC have difficulty inferring and elaborating on the more complex cognitive, social, and emotional aspects of simple animations (Renteria-Vazquez et al., 2022; Turk et al., 2010). Cognitive empathy would require this form of elaborative thinking, even when affective empathy is normal. Similarly, Paul et al. (2021) described alexithymia in persons with AgCC as difficulty in expressing emotions linguistically, but found similar endorsements of emotional experience when compared to neurotypical controls. This study provides further evidence to suggest the corpus callosum facilitates the ability to cognitively label emotions but not necessarily the ability to experience emotions affectively.
Research examining co-occurring anxiety and depression in persons with multiple sclerosis (PwMS) is scarce, though an estimated 20% of PwMS experience clinically significant anxiety and depression (Gascoyne et al., 2019). Recent work by Hanna & Strober (2020) found that PwMS with comorbid anxiety and depression reported worse outcomes on all constructs of symptomatology, disease management, psychological well-being, and quality of life. However, it is unclear how co-occurring anxiety and depression symptoms may influence or exacerbate cognitive difficulties in PwMS. Further, given the high comorbidity between depression, anxiety, and fatigue in PwMS, this study aims to examine the unique contributions of depression, anxiety, co-occurring depression and anxiety, and fatigue to cognitive functioning.
Participants and Methods:
Eighty-six PwMS (65 female, 21 male) completed a comprehensive neuropsychological battery that included self-report measures of anxiety, depression, and fatigue. An intraindividual variability (IIV) composite score was calculated for each participant by combining standardized intraindividual standard deviation and maximum discrepancy scores on measures of attention/processing speed and memory. Lower scores indicate worse performance (i.e., greater variability). A hierarchical regression was conducted with IIV as the outcome variable and depression, anxiety, cognitive fatigue, physical fatigue, and the interaction between depression and anxiety as predictors. Expanded Disability Status Scale (EDSS) scores were included as a covariate.
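A sketch of how the two IIV components described above might be computed for one participant; the T-scores are hypothetical, and the exact standardization and combination steps used in the study may differ:

```python
from statistics import pstdev

def iiv_components(t_scores):
    """Intraindividual variability across one person's test battery:
    the SD of their scores and the max-min discrepancy."""
    isd = pstdev(t_scores)                    # intraindividual standard deviation
    max_disc = max(t_scores) - min(t_scores)  # maximum discrepancy
    return isd, max_disc

# Hypothetical T-scores on attention/processing-speed and memory measures
person = [45, 52, 38, 60, 47]
isd, max_disc = iiv_components(person)
# In the study, isd and max_disc would each be standardized across participants
# and then combined into one composite (scored so lower = greater variability).
```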
Results:
The only model that included a statistically significant predictor of IIV was the final model, which included EDSS, depression, anxiety, cognitive fatigue, physical fatigue, and the interaction between depression and anxiety, F(6,77) = 2.97, p = .01, ΔR2 = .08. While the main effects of depression and anxiety were not significant, the interaction between depression and anxiety was, F(6,77) = 7.20, p = .01, η2 = .09. Simple effects tests revealed that the relationship between IIV and anxiety was marginally significant for those at the cutoff for clinical depression (square root BDI-FS = 2; BDI-FS = 4), F(6,77) = 3.52, p = .07, η2 = .04. However, the effect of anxiety on IIV increased as depression increased. For example, in those with high levels of depression (1.5 SD above the mean), there was a significant relationship between anxiety and IIV, F(6,77) = 4.16, p = .04, η2 = .05, though this was not the case for those with low levels of depression (1.5 SD below the mean), F(6,77) = 0.01, p = .92, η2 = .00.
Conclusions:
The interaction between depression and anxiety predicted variability in performance such that those with high levels of depression and anxiety demonstrated significantly greater IIV. Since dispersion is considered a marker for neurocognitive integrity, this may suggest that co-occurring psychological disturbances are associated with poorer cognitive integrity, an important consideration for interventions and outcomes. While interventions aimed at treating co-occurring depression and anxiety have been largely overlooked within the MS literature (Butler et al., 2016), transdiagnostic interventions have been beneficial for general adult populations with co-occurring anxiety and depression (McEvoy et al., 2009). Future work should examine the efficacy of interventions aimed at addressing co-occurring depression and anxiety in PwMS, as this may help to improve cognitive functioning, as well as perception of functioning, which will likely further improve quality of life and overall well-being.
Craft Story 21 is a practical, comprehensive, and freely available tool to assess logical memory in patients with memory impairment. Currently, the test does not have normative values in Spanish that adjust to our specific population. Furthermore, the original test does not have a recognition phase to increase the specificity of the memory profile by allowing a distinction between different amnesic profiles. Therefore, this study has two main aims: 1) the generation of normative data for the Craft Story 21 memory test, adjusting to the characteristics of our Spanish-speaking country according to sex, age, and educational level; and 2) the design and validation of the recognition phase of the test and the assessment of its psychometric properties.
Participants and Methods:
The norming sample comprised 81 healthy participants aged 41 to 91, assessed with the Uniform Data Set III (UDS III) battery of the National Alzheimer’s Coordinating Center (NACC). The design of the recognition phase included three steps: (1) construction of the scale and review by experts, (2) a pilot study, and (3) analysis of its psychometric properties. For the latter, 190 participants were recruited and classified into two groups matched by age, sex, and educational level: Mild Cognitive Impairment (MCI, n=96) according to Petersen’s (1999) criteria and healthy controls (HC, n=94). In addition, the diagnostic accuracy of the test was studied with the ROC curve method, its concurrent validity by correlation with another memory test (RAVLT), and its internal consistency with Cronbach’s alpha.
Results:
The norming sample was divided into 16 groups by four age bands (41-51, 51-61, 61-71, and >72 years), two educational levels (6-12 years and >12 years), and sex (male and female). Performance differed significantly between age groups (p < 0.003). No significant differences in Craft Story 21 performance were found between education (p > 0.09) or sex (p > 0.56) groups within the same age group. Normative values (means and standard deviations) are presented for each group. Regarding the design of the recognition phase, the groups did not differ significantly in age (p = 0.13), sex (p = 0.88), or schooling (p = 0.33). The overall score of the Craft Story 21 test discriminated healthy controls from patients with MCI (sensitivity = 81.6%, specificity = 72.4%). Its diagnostic accuracy by phase (immediate AUC = 0.86; delayed AUC = 0.86; recognition AUC = 0.75) was superior to that of the Rey Auditory Verbal Learning Test (RAVLT): immediate (AUC = 0.79), delayed (AUC = 0.82), and recognition (AUC = 0.74). It showed evidence of concurrent validity with the RAVLT in its immediate (r = 0.56, p < 0.001), delayed (r = 0.66, p < 0.001), and recognition (r = 0.37, p < 0.001) trials. The instrument also showed evidence of reliability (α = 0.82).
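The AUC values reported above can be read as the probability that a randomly chosen control outscores a randomly chosen MCI patient, ties counted as one half. A minimal rank-based computation (the Mann-Whitney formulation) on made-up recall scores:

```python
def auc(controls, patients):
    """ROC AUC for a memory score where controls are expected to score higher:
    P(control > patient), ties counted as 1/2 (Mann-Whitney U / (n1 * n2))."""
    wins = 0.0
    for c in controls:
        for p in patients:
            if c > p:
                wins += 1.0
            elif c == p:
                wins += 0.5
    return wins / (len(controls) * len(patients))

# Hypothetical Craft Story delayed-recall scores
hc  = [20, 18, 22, 17, 19]
mci = [12, 15, 18, 10, 14]
result = auc(hc, mci)  # 1.0 = perfect separation, 0.5 = chance
```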
Conclusions:
The Craft Story 21 test is a practical, brief, and multicultural scale. Having norms appropriate to the specific population being assessed allows a more accurate and precise description of the memory profile. Additionally, the new recognition phase of the test showed evidence of validity and reliability for assessing memory processes.
The vascular depression hypothesis posits that there is a relationship between vascular disease and geriatric depressive symptoms. Black Americans are at higher risk for cardiovascular disease (CVD) than their White counterparts. However, it is not fully understood whether risk for CVD or potentially related neurovascular changes have a differential relationship in Black and White Americans. We investigated differences in the relationships between white matter hyperintensities, risk for CVD, and depressive symptoms in Black and White older adults.
Participants and Methods:
Participants were derived from the National Alzheimer Coordinating Center database. Black (N = 120) and White (N = 120) participants were matched on age, sex, and education. White matter hyperintensity (WMH) and CVD burden data (sum of vascular conditions) on 320 individuals were analyzed (mean age = 75.9; 69.4% female). Age, sex, race, and education were included as covariates in separate regression models in which WMH and CVD burden predicted scores on the 15-item Geriatric Depression Scale (GDS-15). Follow-up stratified analyses were conducted to explore the relationship between WMH and CVD burden on GDS scores in the Black and White samples.
Results:
Lower WMH volume and higher CVD burden were associated with higher GDS scores in the total sample. Analyses stratified by race showed a positive effect of CVD burden on GDS scores only for the Black sample and a trend effect of WMH on GDS scores only for the White sample, with higher WMH volume associated with lower rather than higher GDS scores.
Conclusions:
These findings are consistent with previous research showing that WMH and CVD burden are related to depression in older adults. Contrary to expectation, WMH had a negative trend association with GDS scores in the White sample. Findings also suggest that different etiologies may play a role in the clinical presentation of depression in Black and White Americans. Additional research is needed to further explore the relationships among CVD, its neural correlates, and depressive symptoms in diverse samples.
Injection drug use is a significant public health crisis with adverse health outcomes, including increased risk of human immunodeficiency virus (HIV) infection. Comorbidity of HIV and injection drug use is highly prevalent in the United States and disproportionately elevated in surrounding territories such as Puerto Rico. While both HIV status and injection drug use are independently known to be associated with cognitive deficits, the interaction of these effects remains largely unknown. The aim of this study was to determine how HIV status and injection drug use are related to cognitive functioning in a group of Puerto Rican participants. Additionally, we investigated the degree to which type and frequency of substance use predict cognitive abilities.
Participants and Methods:
Ninety-six Puerto Rican adults completed the Neuropsi Attention and Memory-3rd Edition battery for Spanish-speaking participants. Injection substance use over the previous 12 months was obtained via clinical interview. Participants were categorized into four groups based on HIV status and injection substance use in the last 30 days (HIV+/injector, HIV+/non-injector, HIV-/injector, HIV-/non-injector). One-way analyses of variance (ANOVAs) were conducted to determine differences between groups on each index of the Neuropsi battery (Attention and Executive Function; Memory; Attention and Memory). Multiple linear regression was used to determine whether type and frequency of substance use predicted performance on these indices while accounting for HIV status.
Results:
The one-way ANOVAs revealed significant differences (p’s < 0.01) between the healthy control group and all other groups across all indices. No significant differences were observed between the other groups. Injection drug use, regardless of the substance, was associated with lower combined attention and memory performance compared to those who inject less than monthly (Monthly: p = 0.04; 2-3x daily: p < 0.01; 4-7x daily: p = 0.02; 8+ times daily: p < 0.01). Both minimal and heavy daily use predicted poorer memory performance (p = 0.02 and p = 0.01, respectively). Heavy heroin use predicted poorer attention and executive functioning (p = 0.04). Heroin use also predicted lower performance on tests of memory when used monthly (p = 0.049), and daily or almost daily (2-6x weekly: p = 0.04; 4-7x daily: p = 0.04). Finally, moderate injection of heroin predicted lower scores on attention and memory (Weekly: p = 0.04; 2-6x weekly: p = 0.048). Heavy combined heroin and cocaine use predicted worse memory performance (p = 0.03) and combined attention and memory (p = 0.046). HIV status was not a moderating factor in any circumstance.
Conclusions:
As predicted, residents of Puerto Rico who do not inject substances and are HIV-negative performed better in domains of memory, attention, and executive function than those living with HIV and/or injecting substances. There were no significant differences among the affected groups in cognitive ability. As expected, daily injection of substances predicted worse performance on tasks of memory. Heavy heroin use predicted worse performance on executive function and memory tasks, while heroin-only and combined heroin and cocaine use predicted worse memory performance. Overall, the type and frequency of substance use were more predictive of cognitive functioning than HIV status.
There is growing evidence that the blast exposure military personnel experience throughout their careers can have a negative impact on brain health. Most research on blast-related neurotrauma has focused on traumatic brain injury (TBI); however, blast exposure may often occur independent of TBI. Both active duty military personnel and veterans commonly report years of blast exposure from combat and training. The objective of this study was to explore the relationship between blast exposure and cognitive functioning in military personnel seeking treatment for a mild TBI.
Participants and Methods:
Participants were recruited from a military hospital while enrolled in a multidisciplinary treatment program for TBI. All patients had at least one diagnosed mTBI as well as persistent cognitive complaints. Exclusion criteria included invalid performance on a performance validity test and a symptom validity test. Ninety-seven participants were included in the analysis, with an average age of 34.0 (SD = 7.9) and an average of 4.0 combat deployments (SD = 3.6). Blast exposure history was measured by the overall score on the Blast Exposure Threshold Survey (BETS), which assessed the frequency and duration of use of various blast sources. Outcomes included the Neurobehavioral Symptom Inventory (NSI) and the Global Deficit Scale (GDS), an objective measure of cognitive deficiency. GDS was calculated from seven measures: Hopkins Verbal Learning Test-Revised Total and Delayed Recall (HVLT-R); D-KEFS Color-Word Condition 3 Inhibition (CW3), Color-Word Condition 4 Switching (CW4), and Trail Making Condition 3 Letter Sequencing (TM3); Paced Auditory Serial Addition Test (PASAT); and the Symbol Digit Modalities Test (SDMT). Demographically corrected t-scores (M = 50, SD = 10) were converted to deficit scores and averaged to calculate the GDS. To adjust for non-normal distributions, non-parametric statistics were used.
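The T-score-to-deficit-score conversion is not spelled out in the abstract; a common convention (e.g., Heaton and colleagues' scheme, assumed here) maps T >= 40 to 0 and lower T-score bands to increasing deficit scores, which are then averaged:

```python
def deficit_score(t):
    """Convert a demographically corrected T-score to a deficit score,
    using a common convention (assumed here): T >= 40 -> 0 ... T < 20 -> 5."""
    if t >= 40: return 0
    if t >= 35: return 1
    if t >= 30: return 2
    if t >= 25: return 3
    if t >= 20: return 4
    return 5

def global_deficit_scale(t_scores):
    """Average the per-test deficit scores; GDS >= 0.5 is a commonly used
    impairment cutoff."""
    return sum(deficit_score(t) for t in t_scores) / len(t_scores)

# Hypothetical T-scores on the seven measures (HVLT-R recall trials,
# D-KEFS CW3/CW4/TM3, PASAT, SDMT)
ts = [42, 38, 51, 33, 45, 29, 40]
gds = global_deficit_scale(ts)
```

Because each score at or above the normative mean contributes zero, the GDS weights the degree of impairment rather than average ability, which is why it serves here as the measure of cognitive deficiency.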
Results:
The BETS was not related to GDS (rho = -.055); however, there were significant correlations between higher levels on the BETS and better performance on measures of selective attention (PASAT rho = .307) and processing speed (SDMT rho = .218). Correlations between the BETS and the other neuropsychological measures were not meaningful (all rhos < .10). Those with an impaired GDS did not differ from others on the BETS. The BETS was also not associated with neurobehavioral symptoms (rho = .125). The BETS had moderate correlations with number of combat deployments (rho = .483) and severity of combat exposure (rho = .556). It was not related to education (rho = .004) or premorbid intelligence (rho = -.029).
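The non-parametric analyses above rest on Spearman's rho, which is simply a Pearson correlation computed on ranks. As an illustrative sketch (not the study's analysis code, and using made-up numbers in the tests), rho can be computed from scratch:

```python
# Illustrative Spearman's rho: rank both variables (ties share the mean
# rank), then compute the Pearson correlation of the ranks.

def rank(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice a library routine such as `scipy.stats.spearmanr` would be used instead; the hand-rolled version only makes the rank-then-correlate logic explicit.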
Conclusions:
In this treatment-seeking mTBI sample, cumulative blast exposure was not associated with overall cognitive deficiency or with neurobehavioral symptoms, despite moderate correlations with number of combat deployments and severity of combat exposure. The small positive correlations between the BETS and measures of selective attention and processing speed ran counter to expectations. These findings suggest that, in this population, self-reported blast exposure history alone may not predict objective cognitive deficiency.
Post-stroke depression (PSD) and anxiety disorders are the most common psychiatric issues after cerebrovascular accident (CVA), with prevalence rates of up to 50%. Less studied, post-stroke apathy and pseudobulbar affect (PBA) also occur in a subset of individuals after CVA, leading to reduced quality of life. Cognitive impairments also persist, especially memory, language, and executive difficulties. Residual cognitive and emotional sequelae after CVA limit return to work, with 20-60% of survivors becoming disabled or retiring early. This study examined the frequency and relative contribution of cognitive, behavioral, and emotional factors to not returning to work after CVA.
Participants and Methods:
Participants included 242 stroke survivors (54% women, average age 59.2 years) who underwent an outpatient neuropsychological evaluation approximately 13 months after unilateral focal CVA. Exclusion criteria were a diagnosis of dementia, comprehension issues identified during assessment, multifocal or bilateral CVA, and inpatient status. Predictors of return to work included in logistic regression analyses were psychological (depressive and anxiety disorders, apathy, PBA, history of psychiatric treatment before stroke) and neuropsychological (memory, executive functioning) variables. Depression and anxiety were diagnosed using DSM-IV-TR or DSM-5 criteria. Apathy was operationalized as diminished goal-directed behavior, reduced initiation, and decreased interest that impacted daily life more than expected from physical issues after stroke (including self- and family-report using the Frontal Systems Behavior Scale [FrSBe]). PBA was defined by the Center for Neurologic Study-Lability Scale and clinical judgment based on chart review.
Results:
Thirteen months after stroke, post-stroke apathy persisted in 27.3% of patients, PBA persisted in 28.2% (i.e., uncontrollable crying spells not simply attributable to depression alone, and uncontrollable laughing spells), anxiety disorders persisted in 18.6% (mainly panic attacks), and PSD persisted in 29.8%. Memory loss persisted in 67.4% of patients and executive difficulties in 74.4%. At that point, 34.7% of individuals had returned to work and 47.1% had not; the remaining 18.2% were not working either at the time of their stroke or afterward. Logistic regression indicated that post-stroke apathy, PBA, and memory loss were significant predictors of not returning to work (p < 0.001). Patients who experienced post-stroke apathy were 7.1 times more likely not to return to work after stroke (p=0.008), those who suffered from PBA were 4.8 times more likely (p=0.028), and those with memory loss were 6.6 times more likely (p=0.005). PSD, history of treatment for psychiatric issues before the stroke, presence of an anxiety disorder after stroke, and executive difficulties were not significant predictors (p's>0.05).
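The odds ratios reported above (e.g., 7.1 for apathy) come from logistic regression; the same quantity can be illustrated directly from a 2x2 table. The sketch below uses hypothetical counts chosen only to show the arithmetic, not the study's data:

```python
from math import log

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table laid out as:
                      no return-to-work   return-to-work
        apathy              a                  b
        no apathy           c                  d
    """
    return (a / b) / (c / d)

# Hypothetical counts (NOT the study's data), chosen so the OR comes out
# near the reported value for apathy:
or_apathy = odds_ratio(35, 10, 40, 80)
print(round(or_apathy, 1))  # prints 7.0

# In a logistic regression, this OR corresponds to a coefficient of
# beta = log(OR) on the apathy predictor.
beta = log(or_apathy)
```

An OR of 7 means the odds of not returning to work are seven times higher with apathy than without it, which is the interpretation the abstract gives.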
Conclusions:
Results replicate the finding that return to work is hindered by residual cognitive deficits after stroke and extend previous research by clarifying the multifactorial emotional and behavioral barriers to returning to work. Results highlight the importance of quantifying post-stroke apathy and pseudobulbar affect in a standard neuropsychological work-up after stroke to identify candidates for services that facilitate return to work (e.g., vocational rehabilitation services, psychotherapy, interventions for decreased initiation).
This presentation discusses six patients with different problems, referred for rehabilitation, who challenged my views on how to apply neuropsychological principles to their treatment. We begin with Derek, who had sustained a traumatic brain injury from a gunshot wound. I was asked to reduce his weight, but because of the brain injury he could not read or write, so I had to find another way to achieve the weight loss. This made me realize that neuropsychologists have to "think on their feet" and be flexible. The second patient is Kate, who developed brain stem encephalitis. Expected to die, and unable to speak, she convinced me that, however severe the injury, we should not give up, and that recovery can continue for many years: Kate managed to speak intelligibly fourteen years after her illness! The next patient, Claire, a school nurse, had herpes simplex encephalitis, which left her with prosopagnosia and extreme anxiety. Her story made me realize the personal consequences of prosopagnosia, which are typically overlooked by most neuropsychologists. The fourth patient, Gary, was attacked by a gang while saving his father. He remained unconscious for 19 months and thus had a very poor prognosis. Nevertheless, he defied the predictions of all medical staff, woke up, and did very well. The penultimate patient is Natasha, who, as far as we know, is the only person in the world to have two syndromes: Sheehan's syndrome, which is very rare in developed countries, and sickle cell disease, which is not rare. As a result of the Sheehan's, she developed Balint's syndrome. Her case made me learn about Sheehan's syndrome and accept that Natasha's main goal in life was not what I expected it to be. The final patient is Paul, an opera singer, who was diagnosed with "locked-in syndrome" following a brain stem stroke. Not only was he a good communicator once a suitable system was found, but he felt he had a good quality of life by "living within his head".
Although many of us feel that to be fully conscious but totally dependent on others is a very cruel situation to be in, Paul did not feel this. All these patients taught me a great deal and I thank them for this.
Upon conclusion of this course, learners will be able to:
1. Describe the main purposes of neuropsychological rehabilitation
2. Discuss six patients who challenged typical concepts about neuropsychological rehabilitation
3. Gain some knowledge about Sheehan's syndrome
4. Explain the three components of Balint's syndrome
5. Summarize the difference between Locked-in syndrome and the minimally conscious state
6. Recognize some of the anatomy associated with these syndromes
To characterize reasons for rehospitalization of Veterans and Service Members with mild, moderate, and severe traumatic brain injury (TBI) who received inpatient rehabilitation at a Veterans Affairs (VA) Polytrauma Rehabilitation Center (PRC) up to 10 years postinjury. TBI is a chronic condition, and a subset of TBI survivors experience rehospitalization after discharge from inpatient rehabilitation. Extant literature focuses primarily on persons with moderate-to-severe TBI and utilizes broad categories when determining readmission reasons. The present study aimed to delineate with greater specificity the reasons for rehospitalization up to 10 years postinjury across the TBI severity spectrum.
Participants and Methods:
Participants were drawn from the VA TBI Model Systems multicenter longitudinal study for a cross-sectional analysis. Eligibility criteria included TBI diagnosis per case definition; age > 16 years at time of TBI; admission for inpatient rehabilitation at one of the five VA PRCs; and informed consent by the participant or a legally authorized representative. At follow-up interviews 1, 2, 5, and 10 years post-TBI, participants were asked whether they had been rehospitalized within the past year (up to five admissions). Rehospitalizations were classified according to the Agency for Healthcare Research and Quality’s Healthcare Cost and Utilization Project classification (18 categories). In the present analyses, TBI severity was classified by duration of posttraumatic amnesia (PTA; 0-1 days = mild, 2+ days = moderate-severe). Statistical analyses were conducted in SPSS.
Results:
Participants (N=1101; n=338 with 0-1 days PTA, n=513 with 2+ days PTA, n=250 with no PTA data) ranged in age from 17 to 91 years at the time of interview. Across all follow-up timepoints, participants reported 317 rehospitalizations in the past year. At least one past-year rehospitalization was reported by 19.45% of Year 1 participants, 24.37% of Year 2 participants, 16.19% of Year 5 participants, and 16.25% of Year 10 participants. When controlling for age, participants with at least 2 days of PTA were more likely to be rehospitalized at least once compared to those with 0-1 days of PTA at Year 2 (OR=4.05, p<0.001) and Year 5 (OR=2.39, p=0.03) post-TBI. The three most common reasons for rehospitalization across all timepoints were injury and poisoning (17.3%), mental illness (16.7%), and diseases of the nervous system and sense organs (9.1%). Mental illness was the modal reason for rehospitalization at Years 2, 5, and 10, frequently due to substance- or alcohol-related disorders and suicide/intentional self-inflicted injury.
Conclusions:
Compared to prior research, rates of rehospitalization were lower in this sample across follow-up timepoints. The inclusion of mild TBI in this analysis may partially explain the discrepancy. Importantly, two of the top three rehospitalization reasons are potentially preventable, and strategies to reduce risk of re-injury and minimize escalation of psychiatric distress should therefore be explored. Psychoeducation, supervision, and mental health support during the transition from hospital to community should be considered in order to mitigate preventable causes of rehospitalization among long-term TBI survivors.
With participant recruitment being a top barrier to Alzheimer's disease (AD) research progress, the rate of screen failure in AD clinical trials is unsustainable. Although steps have been taken to address the continued recruitment shortage, there is minimal emphasis on reducing screen failure rates based on study inclusion criteria. Here we present information attempting to understand the cognitive, emotional, and functional features of individuals who failed screening measures for AD trials.
Participants and Methods:
The current study is a retrospective, cross-sectional analysis. Thirty-eight participants (aged 50-83) met inclusion criteria by having (1) previously received a clinical diagnostic workup at a transdisciplinary cognitive specialty clinic and (2) previously been screened for a specific industry-sponsored clinical trial of MCI/early AD (EMERGE). Previously collected clinical data were analyzed to identify predictors of AD trial screen pass/fail status.
Results:
Of the 38 participants in the current study, 14 screen passed into this AD clinical trial and 24 screen failed. Higher screen failure rates were significantly related to gender, with 83% of female participants screen failing this AD trial versus 45% of male participants. There was no difference in age or education between screen pass/fail groups, nor were differences present for performance on visual or verbal memory tasks or the MoCA. Conversely, participants who screen failed this AD clinical trial performed significantly worse in non-memory cognitive domains pertaining to general fund of knowledge, working memory, and executive functioning. Additionally, the screen-fail group reported greater levels of anxiety, but did not differ in depression or on a measure of functional status.
Conclusions:
Worse performance on non-memory neuropsychological domains was related to screen failure status for the EMERGE AD clinical trial. This finding may be explained by the traditional recruitment pathway from clinic to trials, in which, beyond the diagnosis of interest, it is up to the physician's opinion to determine "fit" for a trial. Higher screen failure rates may result from physicians erroneously viewing more globally impaired patients as more appropriate for an AD clinical trial, resulting in a greater tendency to recruit patients who are too severe to meet a trial's inclusion criteria. Recruiting patients into clinical trials earlier in their disease course, when disease severity is lower, may reduce screen failure rates in AD trials. That we could not detect a relationship between memory-related tasks and screen fail/pass status may be explained by either (1) the measures used in the EMERGE trial being less sensitive to subtle changes in memory, or (2) memory dysfunction being necessary for a diagnosis of AD but not sufficient to distinguish who will successfully screen into an AD clinical trial. Overall, these findings have the potential to advance the field by reducing screen failure rates in AD clinical trials using information already available to clinical trial teams, which will enhance trial-recruitment infrastructure and encourage greater engagement of older adults in AD research.
Despite significant recent advances in test development in research settings, neuropsychological tests and normative data used in clinical settings have fallen behind in innovation in terms of empiricism and modality of administration (Bilder & Reise, 2019). Most widely used test paradigms were initially developed 50-150 years ago, with normative data often limited to White, American-born, monolingual English-speaking samples (Pugh et al., 2022; Rabin et al., 2016). Few digital tests have successfully translated into clinical use (Collins & Riley, 2016).
Participants and Methods:
Mayo Test Development through Rapid Iteration, Validation, and Expansion (Mayo Test Drive) is a remote platform for neuropsychological test development and self-administration that is accessible through any web-based device (Stricker et al., 2022). To date, we have demonstrated rapid validation and clinical translation of the Stricker Learning Span (SLS) in native English-speaking older adults and are now beginning cultural/linguistic adaptation, further validation, and clinical translation for Spanish speakers. Mayo Test Drive's web-based platform captures all item-level data to allow future item-level analysis and application of machine learning techniques.
Results:
The broader aim of Mayo Test Drive is to provide infrastructure to include more tests, adaptations, and normative datasets to ultimately improve access and utility for diverse patient populations. Mayo Test Drive currently includes two measures: Stricker Learning Span (SLS), a novel learning and recognition memory test, and Symbols Test, an open access processing speed measure (Stricker et al., 2022; Wilks et al., 2022). The SLS was designed with consideration of learning principles from cognitive neuroscience to enhance detection of the early decline in learning observed in preclinical Alzheimer’s disease (AD). The SLS uses computer adaptive testing to adapt task difficulty trial-by-trial (e.g., increasing word span) and uses a sensitive 4-choice format to test recognition memory for each word. The SLS underwent initial piloting in older females to determine psychometric properties, test-retest reliability, convergent validity with traditional measures, and criterion validity (e.g., neuroanatomical associations).
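The abstract does not spell out the SLS adaptation rules, so the following is only a toy staircase-style sketch of trial-by-trial span adaptation (the span grows after a correct trial and shrinks after an error); the actual SLS algorithm may differ:

```python
def next_span(current_span, correct, min_span=2, max_span=8):
    """Toy staircase rule: lengthen the word span after a correct trial,
    shorten it after an error (the actual SLS rules may differ)."""
    if correct:
        return min(current_span + 1, max_span)
    return max(current_span - 1, min_span)

# Simulated run: spans adapt to a mix of correct (True) and incorrect trials.
span = 2
history = []
for correct in [True, True, True, False, True, False, False]:
    span = next_span(span, correct)
    history.append(span)
print(history)  # prints [3, 4, 5, 4, 5, 4, 3]
```

The point of such a rule is to keep task difficulty near the examinee's capacity limit, which is what makes adaptive span tasks sensitive to subtle learning decline.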
Conclusions:
Further validation and normative data development in the Mayo Clinic Study of Aging are ongoing, with additional criterion validation assessed by comparing brain PET (amyloid and tau) biomarker-positive vs. biomarker-negative groups. The SLS is equivalent to an in-person memory measure (AVLT), and the Mayo Test Drive composite including the SLS and Symbols is superior to an in-person global screen (Short Test of Mental Status, similar to the MMSE) in distinguishing biomarker-positive from biomarker-negative older adults. To adapt the SLS for other languages/cultures, we have added community-based components to development (e.g., cognitive interviewing, additional piloting). We are beginning data-driven linguistic and remote cognitive interviewing approaches to develop an adaptation of the SLS for Spanish speakers. This study involves virtual focus groups with native Spanish speakers from different backgrounds (e.g., countries of origin, multilingualism) to examine the test paradigm, instructions, and items. Following piloting of the adaptations, next steps include normative data collection and clinical implementation. Future work involves in-person adaptation studies for lower/middle-income countries, including a collaboration with a Master's-level psychology graduate program in Grenada, West Indies, to complete cognitive interviewing and pilot work with community members and stakeholders.
The prevalence of significant brain disorders and their economic burden are projected to increase continually as populations live longer. This review aims to analyze the barriers to international collaboration and propose preliminary international competency guidelines for the advancement of the field of neuropsychology. Moreover, these guidelines can aid the field in advocating for international development and collaboration. Specifically, these guidelines may lead to clarity of services, culturally informed norms, cross-cultural research opportunities, and improved accessibility globally (Chan et al., 2016; Hessen et al., 2017).
Participants and Methods:
Literature between 2002 and 2022 was obtained by searching the Google Scholar and PubMed databases. Keywords such as guidelines, international, and neuropsychology were used. Articles were selected based on relevance to the objective, international perspectives, and current national guidelines. The remaining articles were reviewed, and themes were clustered to identify overlapping international competencies within the literature. The findings were utilized to create preliminary competency guidelines and discuss their future implications.
Results:
COVID-19 unveiled the feasibility of health service fields collaborating internationally to solve global problems (Bump et al., 2021). The pandemic is a call to action for the neuropsychology field to improve global health equity and collaboration to address international challenges (Obschonka et al., 2021). However, one barrier is the lack of a globally accepted definition of neuropsychology and of what a neuropsychologist does (Grote et al., 2016). A way to address this is for international organizations to propose international competency guidelines. This may allow countries with less developed neuropsychology fields to advocate for legislation and services (Chan et al., 2016; Hessen et al., 2017). In addition, countries have reported the need for competencies to advocate for advancing current practices (Chan et al., 2016; Hokkanen et al., 2020; Janzen & Guger, 2016). Notably, by developing guidelines, public understanding and competent practice of neuropsychology can be strengthened (Grote et al., 2016; Hessen et al., 2017). Temple and colleagues (2006) found that the two largest barriers for physicians referring to neuropsychologists were a lack of familiarity with the field and geographical limitations. Therefore, international competency guidelines present a serendipitous opportunity to benefit clients, physicians, and neuropsychologists.
Therefore, the current study presents 10 international neuropsychology practice competencies and their elements. Foundational competencies include: (1) Scientific knowledge, methods, and evidence-based practice; (2) Individual and community diversity; (3) Ethics, legal standards, and policy; (4) Interdisciplinary systems; (5) Reflective practice; (6) Therapeutic relationships. Functional competencies include: (7) Assessment; (8) Intervention; (9) Consultation; (10) Advocacy.
Conclusions:
Although training and regulations may differ internationally, emerging literature supports the establishment of global competencies. Despite data on competencies in many countries being unavailable, the need for services in many locations suggests that using the available data to implement guidelines may allow for the growth of consistently competent neuropsychologists to serve the many underserved populations around the world (Hessen et al., 2017). Fortunately, COVID-19 exposed the need for increased health equity and mental health services globally (Jensen et al., 2021). Ultimately, the international competencies presented should be investigated further to improve international neuropsychology research, practice, advocacy, and legislation to abate global disparities.
Previous investigations have demonstrated the clinical utility of the Delis-Kaplan Executive Function System (D-KEFS) Color Word Interference Test (CWIT) as an embedded validity indicator in mixed clinical samples and traumatic brain injury. The present study sought to cross-validate previously identified indicators and cutoffs in a sample of adults referred for psychoeducational testing.
Participants and Methods:
Archival data from 267 students and community members self-referred for a psychoeducational evaluation at a university clinic in the South were analyzed. Referrals included assessment for attention-deficit/hyperactivity disorder, specific learning disorder, autism spectrum disorder, or other disorders (e.g., anxiety, depression). Individuals were administered subtests of the D-KEFS, including the CWIT, and several standalone and embedded performance validity indicators as part of the evaluation. Criterion measures included the b Test, Victoria Symptom Validity Test, Medical Symptom Validity Test, Dot Counting Test, and Reliable Digit Span. Individuals who failed no criterion measures were included in the credible group (n = 164), and individuals failing two or more criterion measures were included in the non-credible group (n = 31). Because a subset of the sample was seeking external incentives (e.g., accommodations), individuals who failed only one criterion measure were excluded (n = 72). Indicators of interest included all test conditions examined separately, the inverted Stroop index (i.e., better performance on the interference trial than the word reading or color naming trials), the inhibition and inhibition/switching composites, and the sum of all conditions.
Results:
Receiver operating characteristic (ROC) curves were significant for all four conditions (p < .001) and the inverted Stroop index (p = .032). However, only Conditions 2, 3, and 4 met minimal acceptable classification accuracy (AUC = .72-.81). ROC curves for composite indicators were also significant (p < .001), with all three composite indicators meeting minimal acceptable classification accuracy (AUC = .71-.80). At the previously identified cutoff of an age-corrected scaled score of 6 for all four conditions, specificity was high (.88-.91), with varying sensitivity (.23-.45). At the previously identified cutoff of .75 for the inverted Stroop index, specificity was high (.87) while sensitivity was low (.19). Composite indicators yielded high specificity (.88-.99) at previously established cutoffs, with sensitivity varying from low to moderate (.19-.48). Increasing the cutoffs (i.e., requiring a higher age-corrected scaled score to pass) for composite indicators increased sensitivity while still maintaining high specificity. For example, increasing the total score cutoff from 18 to 28 improved sensitivity from .26 to .52 with specificity of .91.
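Sensitivity and specificity at a given cutoff reduce to simple proportions over the two groups. The sketch below, with hypothetical sum-of-conditions scores (not the study's data), shows how raising a cutoff from 18 to 28 can buy sensitivity at a modest cost in specificity:

```python
def classification_accuracy(scores_noncredible, scores_credible, cutoff):
    """Scores at or below the cutoff are flagged as non-credible.
    Sensitivity = flagged non-credible cases / all non-credible cases;
    specificity = unflagged credible cases / all credible cases."""
    sens = sum(s <= cutoff for s in scores_noncredible) / len(scores_noncredible)
    spec = sum(s > cutoff for s in scores_credible) / len(scores_credible)
    return sens, spec

# Hypothetical sum-of-conditions scores (NOT the study's data):
noncredible = [14, 17, 20, 25, 27, 33]
credible = [26, 30, 32, 35, 38, 40, 41, 44]

for cutoff in (18, 28):
    sens, spec = classification_accuracy(noncredible, credible, cutoff)
    print(cutoff, round(sens, 2), round(spec, 2))
```

In validity research the cutoff is chosen to hold specificity around .90 (minimizing false accusations of non-credible performance), with sensitivity accepted as whatever remains, which is the pattern the abstract reports.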
Conclusions:
While a cutoff of 6 resulted in high specificity for most conditions, the sum of all four conditions exhibited the strongest classification accuracy and appears to be the most robust indicator, consistent with previous research (Eglit et al., 2019). However, a cutoff of 28, as opposed to 18, may be most appropriate for psychoeducational samples. Overall, the results suggest that the D-KEFS CWIT can function as a measure of performance validity in addition to a measure of processing speed/executive functioning.
Many individuals who experience a mild traumatic brain injury (mTBI) have persistent cognitive complaints. Traditional cognitive rehabilitation (TCR) interventions were primarily developed for severe neurological injury and have limited effectiveness for active duty military personnel whose goal is returning to full military operational status. To remain on active duty, warfighters must have sufficient mental competency to safely and effectively function in complex environments such as combat. There is a need for a cognitive rehabilitation approach that addresses the demands of military personnel and expedites return to duty. The Strategic Memory Advanced Reasoning Training (SMART) program is a novel alternative to TCR. SMART is an evidence-based advanced reasoning protocol that enhances cognitive domains essential to military readiness (e.g., mental agility, strategic learning, problem solving, and focus) and requires less than half the treatment time. The objective of this study was to assess the efficacy of SMART compared to TCR in terms of overall recovery as well as change in specific cognitive domains.
Participants and Methods:
Participants were recruited from a military treatment facility. All patients had at least one diagnosed mTBI as well as persistent cognitive complaints. Participants completed the Rey-15 to ensure performance validity. The final sample was SMART n = 28 and TCR (SCORE) n = 19. The primary dependent measure was the Global Deficit Scale (GDS). GDS was calculated from the Hopkins Verbal Learning Test-Revised (HVLT-R); Delis-Kaplan Executive Function System Color Word (CW) and Trail Making (TM); Paced Auditory Serial Addition Test (PASAT); and Symbol Digit Modality Test (SDMT). Demographically corrected t-scores were converted to deficit scores as follows: >40 = 0, 35-39 = 1, 30-34 = 2, 25-29 = 3, 20-24 = 4, <20 = 5. Deficit scores were averaged to calculate GDS. For each measure, Hedges' g was computed for pre-post treatment effect size comparisons.
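The GDS computation described above is straightforward to express in code. A minimal sketch using the deficit-score conversion stated in the text (with T = 40 treated as unimpaired, an assumption, since the text lists ">40 = 0" and "35-39 = 1" without covering 40 exactly):

```python
def deficit_score(t):
    """Convert a demographically corrected t-score to a deficit score
    using the cutoffs stated above (T = 40 treated as unimpaired here)."""
    if t >= 40: return 0
    if t >= 35: return 1
    if t >= 30: return 2
    if t >= 25: return 3
    if t >= 20: return 4
    return 5

def global_deficit_scale(t_scores):
    """GDS = mean of the per-measure deficit scores."""
    return sum(deficit_score(t) for t in t_scores) / len(t_scores)

# Hypothetical t-scores for the component measures (NOT patient data):
print(global_deficit_scale([45, 38, 33, 28, 50, 41]))  # prints 1.0
```

Because intact scores contribute 0, the GDS emphasizes the number and severity of impaired scores rather than averaging away deficits, which is why it is used as a summary measure of cognitive deficiency.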
Results:
Average number of treatment hours was significantly lower in the SMART condition (SMART: M = 18.47 hours, SD = 2.17; TCR: M = 42.42 hours, SD = 3.79; p < .001). A repeated measures ANOVA showed a significant change in GDS post-treatment (F = 30.25, p < .001) with a large effect size (η² = .402); however, the interventions did not differ in GDS change. Impact on cognitive domains was relatively equivalent for processing speed (SMART g = 0.67 vs. TCR g = -.54) and executive function (SMART g = -0.92 vs. TCR g = -.85); however, SMART had a larger impact on memory (SMART g = -0.81 vs. TCR g = -.39). SMART resulted in large improvements in retention and recognition memory, which were minimally impacted by TCR.
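Assuming the effect size intended above is Hedges' g (the bias-corrected standardized mean difference), its computation can be sketched as follows; this illustrates the formula only, not the study's data:

```python
from math import sqrt

def hedges_g(sample1, sample2):
    """Hedges' g: pooled-SD standardized mean difference with the
    small-sample bias correction factor 1 - 3/(4N - 9)."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    pooled_sd = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd          # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction
```

The correction factor matters here because the treatment groups are small (n = 28 and n = 19), where uncorrected Cohen's d overstates the effect.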
Conclusions:
Both TCR and SMART had comparable effectiveness in improving cognitive impairment, though SMART was completed in less than half of the treatment time. Both interventions had large effect sizes on processing speed and executive functioning; however, SMART was more effective in improving long-term memory. Memory is an integral part of military readiness. Further investigation is required to determine the relative effectiveness of these two approaches to improving cognitive readiness of the warfighter.
The strong coupling among non-equilibrium flow, microscopic particle collisions, and radiative transitions within the shock layer of hypersonic atmospheric re-entry vehicles makes accurate prediction of the aerothermodynamics challenging. Therefore, in this study a self-consistent, fully coupled model of non-equilibrium flow, collisional–radiative reactions, and radiative transfer is established to study the non-equilibrium characteristics of the flow field and radiation during vehicle atmospheric re-entry. Comparison of the present calculation results with FIRE II flight data and previous results in the literature shows reasonable agreement. The thermal, chemical, and excited-energy-level non-equilibrium phenomena are obtained and analysed for different FIRE II trajectory points, forming the critical basis for studying heat transfer and radiation. Non-equilibrium distribution of excited energy levels is significant in the post-shock and near-wall regions owing to rapid vibrational dissociation and electronic under-excitation, as well as wall catalytic reactions. The analysis of stagnation-point heating of FIRE II illustrates that translational–rotational convection and dissociated-component diffusion play key roles in aerodynamic heating of the wall region. The spectrally resolved radiative intensity in the entire flow field indicates that vacuum ultraviolet radiation from high-energy nitrogen atomic spectral lines makes the main contribution to the radiative transfer. Finally, it is found that the non-equilibrium flow–radiation coupling effect can exacerbate the excited-energy-level non-equilibrium and further affect the gas radiative properties and radiative transfer. This fully coupled study provides an effective method for reasonable prediction of atmospheric re-entry flow and radiation fields.