Medical providers often express difficulty in detecting dementia (Bradford et al., 2009), feel ill-equipped to address issues related to dementia care, or neglect to communicate dementia diagnoses and treatment recommendations (Alzheimer’s Association, 2015). Despite this, neuropsychologists, who are specifically trained in dementia diagnosis and treatment planning, are not always utilized in the dementia care process. The objective of this study was to investigate family members’ perceptions regarding the incremental benefit of the neuropsychological evaluation, relative to previously rendered services, in addressing patient diagnosis/symptoms and in discussing future care plans.
Participants and Methods:
A survey questionnaire was distributed to family members of patients who had undergone a neuropsychological dementia workup at a university medical center in the Midwest by one of five neuropsychology providers. Immediately following the neuropsychological feedback session, family members were provided a $10 gift card and a stamped, pre-addressed envelope to return the survey anonymously by mail. Respondents were typically spouses (60.6%) or adult children (29.1%), with 82.4% identifying themselves as the primary caregiver. Patient age ranged from 52 to 92 years (M=73.4). Sixty-seven percent of patients were diagnosed with dementia and 24% with mild cognitive impairment; 9% were not diagnosed with a cognitive disorder. The most commonly suspected etiology for cognitive impairment was Alzheimer’s disease or mixed Alzheimer’s and vascular disease (46%). Providers previously involved in the care of the patients’ cognitive symptoms included primary care providers (88%), neurologists (60%), psychiatrists (13%), and psychologists (9%).
Results:
Two-hundred surveys were disseminated with a response rate of 64% (n=127). Family members were asked to compare the benefit of the neuropsychological evaluation in addressing the patients’ symptoms as compared with services rendered by previous providers using a Likert scale ranging from 1 (not beneficial) to 5 (extremely beneficial). The average benefit rating was 4.6/5.0 (SD=0.7) for the neuropsychological evaluation as compared with 3.0/5.0 (SD=1.1) for previous services, a statistically significant difference (p <.001). Family members were also asked to rate the helpfulness of both the neuropsychologist and previous providers in discussing aspects of the patient’s diagnosis and care plan using a Likert scale ranging from 1 (not helpful) to 5 (extremely helpful). Comparison using Wilcoxon signed-rank tests indicated neuropsychologists were rated as significantly more helpful than previous providers (p < .001) in discussing the cause or diagnosis for their family member’s symptoms (M=4.6/5.0 vs. M=3.0/5.0), strategies for providing care to their family members (M=4.5/5.0 vs. M=2.8/5.0), a comprehensive treatment and care plan (M=4.3/5.0 vs. M=2.6/5.0), symptom impact on activities of daily living (M=4.4/5.0 vs. M=2.9/5.0), and symptom impact on current and future functioning (M=4.4/5.0 vs. M=2.8/5.0).
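For readers who want to reproduce this kind of paired comparison, the sketch below runs a Wilcoxon signed-rank test on paired Likert ratings in Python. The numbers are illustrative only, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired Likert ratings (1-5) from the same respondents:
# helpfulness of the neuropsychologist vs. previously seen providers.
neuropsych = np.array([5, 4, 5, 5, 4, 3, 5, 4, 5, 5])
previous = np.array([3, 2, 4, 3, 3, 2, 2, 4, 3, 2])

# Wilcoxon signed-rank test for paired ordinal ratings
stat, p = stats.wilcoxon(neuropsych, previous)
print(f"W = {stat:.1f}, p = {p:.4f}")
print(f"mean ratings: {neuropsych.mean():.1f} vs {previous.mean():.1f}")
```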
Conclusions:
Overall, family members reported the neuropsychological evaluation and feedback session to be significantly more helpful in addressing patient cognitive diagnoses, symptoms, and care plan as compared to previously rendered services by non-neuropsychologists. The results underscore the unique and incremental benefit of the neuropsychological evaluation, not only in diagnosis, but also in assisting family in understanding symptom nature, functional impact, and resultant care needs.
Women are at greater risk of developing Alzheimer’s disease (AD) than men. The menopausal transition, which involves a neuroendocrine shift, is a potential contributor to this sex difference. Multiple estrogen-regulated systems (e.g., circadian rhythms) are disrupted during this transition, which affects cognitive functioning (Barha & Liu-Ambrose, 2020), most notably verbal learning and memory. Little is known about how lifestyle factors (e.g., sleep, physical activity (PA), stress) may promote neurocognitive functioning across this transition (Maki & Weber, 2021). Utilizing data from the Human Connectome Project Aging (HCP-A), the current study examined whether distinct lifestyle profiles based on sleep, PA, and stress relate to multiple domains of cognitive performance among a sample of perimenopausal/menopausal women.
Participants and Methods:
Perimenopausal/menopausal women (ages 45 to 65) from the HCP-A were included (n=150, M age = 54.6, SD = 5.5). Demographic information, menopausal status, sleep problems (Pittsburgh Sleep Quality Index), PA (International Physical Activity Questionnaire), and stress (Distress subscale of the Perceived Stress Scale) were assessed with surveys, and participants completed several lab-based tasks, including the dimensional change card sort (DCCS), flanker, pattern recognition processing speed (PS), working memory (WM), picture sequencing, and oral reading tasks, the Trail Making Test A and B (TMT), and the Rey Auditory Verbal Learning Test (RAVLT). Using latent profile analysis (LPA), lifestyle profiles were identified via sleep problems, PA, and stress levels. A MANOVA compared cognitive performance between these lifestyle profiles, controlling for age and education.
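The profile-then-compare workflow described above can be approximated in Python: a Gaussian mixture model stands in for the latent profile analysis, and statsmodels' MANOVA compares cognitive scores across profiles with age and education in the model. File and column names are placeholders, not HCP-A variables.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("hcpa_menopause.csv")  # hypothetical analysis file

# Approximate latent profiles with a 3-component Gaussian mixture over
# standardized sleep problems, physical activity, and stress scores.
X = StandardScaler().fit_transform(df[["psqi", "ipaq", "stress"]])
df["profile"] = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

# MANOVA: multiple cognitive outcomes by profile, with age and education
# included in the model (outcome column names are placeholders).
mv = MANOVA.from_formula(
    "dccs + flanker + ps + wm + ravlt ~ C(profile) + age + education", data=df
)
print(mv.mv_test())
```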
Results:
Fit indices indicated that a three-class solution fit the sample best: high PA with low stress and sleep problems (Class 1, n=38); high PA, stress, and sleep problems (Class 2, n=17); and low PA with high stress and sleep problems (Class 3, n=95). Classes did not differ significantly by age or menopausal status (p>0.05). A significant multivariate effect of age and education on cognitive performance (p<.001) emerged. There was a significant multivariate effect of lifestyle profile on cognitive performance, F(18, 260) = 1.73, p=.034, eta squared = .11, after controlling for age and education. Univariate analyses determined that certain lifestyle profiles were associated with better performance on all cognitive tasks except verbal memory. Contrary to expectation, Class 3 performed better on TMT A & B, DCCS, flanker, WM, and PS tasks as compared to Class 1. Class 3 performed better on reading and picture sequencing tasks than Class 2. There was no difference in performance between Class 1 and 2.
Conclusions:
Results suggest three distinct lifestyle profiles exist in this analytic sample. After controlling for age and education, cognitive performance on all tasks except verbal memory significantly differed between lifestyle profiles. The profile characterized by low PA and high stress and sleep problems demonstrated superior performance as compared to other classes. These findings provide preliminary evidence that women who have high levels of stress and sleep problems with low PA performed better on cognitive tasks, but replication of these findings utilizing longitudinal designs is needed.
An individual with dementia suffers from cognitive decline affecting not only memory but also at least one other domain, such as personality, praxis, abstract thought, language, executive functioning, attention, or social skills. Further, the severity of the decline must be significant enough to interfere with daily functions. There is currently no cure for most causes of dementia. Many challenges confront patients and their families, including a lack of knowledge about dementia and associated treatments; therefore, it is essential to study illness perception regarding dementia-related symptoms in order to improve psychoeducation and lower barriers to seeking assistance. How individuals perceive and make sense of early dementia symptoms can significantly impact their help-seeking behaviors (HS). Exploring illness perception regarding dementia-related symptoms may contribute to the development of strategies for increasing HS, early diagnosis, and intervention. The objective of this study was to describe aspects of illness perception in cognitively healthy older adults and examine potential correlations with demographic variables, including age, gender, and education.
Participants and Methods:
The cohort comprised 55 cognitively healthy older adults enrolled in a study examining Subjective Cognitive Decline. All participants performed above -1.5 SD on clinical neuropsychological testing. Participants were 70% female and 30% male and self-identified as White (78%), Black (16%), Asian (2%), or Other (4%); 98% identified as non-Hispanic. Participants read a short vignette describing a person experiencing significant memory issues representative of an individual with mild dementia and answered seven follow-up questions regarding the cause of memory problems, the likely course of memory problems, and potential treatments for memory problems. Chi-square analyses examined the endorsement of items in relation to age, gender, and education.
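A minimal sketch of the chi-square analyses described above, assuming endorsement responses and a grouping variable are available in a data frame; the file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("illness_perception.csv")  # hypothetical survey data

# Cross-tabulate endorsement of one cause (e.g., neurologic disease) against a
# dichotomized education variable and test for association.
table = pd.crosstab(df["endorsed_neurologic"], df["education_group"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```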
Results:
When asked about the likely cause of memory problems, 65% of participants endorsed neurologic disease, 53% endorsed normal aging, 26% endorsed stress, 25% endorsed genes, 4% endorsed fate/luck, and 16% endorsed "Don't know." Regarding the progression of memory problems over time, 64% of participants responded "will get worse", 18% "will go up and down", 16% "Don't know", and 2% "Other". For "Can he do anything to help [memory problems]?", only 2% responded "No" while 76% responded "Yes" and 22% endorsed "Don't know". On a follow-up question regarding ways an individual could improve his cognitive difficulties, 78% endorsed "Social Engagement", 73% "Exercise", 64% "Medication", 48% "Diet", 42% "Psychological Treatment", 29% "Rehabilitation", 9% "Don't know", and 15% "Other." Lastly, for things he may risk losing due to memory problems, 58% of participants reported "Independence", 33% "Identity," 4% "Friends," 4% "Respect," and 1% "Don't know." Age, gender, and education were not associated with any of the above responses (p > .05).
Conclusions:
Older adults demonstrate a range of ideas about the cause, course, and potential treatment of memory disorders. Understanding which factors impact illness perception, and how, is a pivotal step in improving illness perception and ultimately narrowing gaps in health disparities and HS. Further work in a large, demographically representative sample is needed to examine illness perception and how socioeconomic factors, ethnicity, and other mediators interact with its impact on HS for dementia-related symptoms.
The International Classification of Cognitive Disorder in Epilepsy (IC-CoDE) is a new consensus-based taxonomy that classifies patients into one of four cognitive phenotypes (i.e., cognitively intact, single-domain impairment, bi-domain impairment, generalized impairment). The IC-CoDE has been effectively applied to patients with temporal lobe epilepsy (TLE), but little is known about the relationship between pre-operative cognitive phenotype and post-operative cognitive outcome following epilepsy surgery. The purpose of this study was to examine whether the IC-CoDE classifications are related to memory decline following surgery for TLE.
Participants and Methods:
347 patients (ages 16-66; 57% female) with pharmacoresistant TLE completed comprehensive pre- and post-surgical neuropsychological assessments. Patients were classified into IC-CoDE phenotypes based on pre-surgical pattern of cognitive impairment using a threshold of >1.5 standard deviations (SD) below the normative mean. Change scores were calculated from delay trial scores of the following memory tests: Rey Auditory Verbal Learning Test (RAVLT), and Logical Memory (LM) and Verbal Paired Associates (VPA) subtests from the Wechsler Memory Scale - Third Edition (WMS-III). Cutoffs were applied using epilepsy-specific reliable change indices and patients were classified within the ‘decline’ group if they experienced significant decline on any of the three memory measures.
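The decline classification rests on reliable change indices; the sketch below illustrates the general RCI logic in Python with placeholder constants rather than the epilepsy-specific values actually used in the study.

```python
import pandas as pd

df = pd.read_csv("tle_pre_post_memory.csv")  # hypothetical pre/post scores


def rci(pre, post, practice_effect, se_diff):
    """Reliable change index: change adjusted for the expected practice
    effect, divided by the standard error of the difference."""
    return (post - pre - practice_effect) / se_diff


# Placeholder practice effects and SEs -- NOT the epilepsy-specific values.
for test, pe, se in [("ravlt", 1.0, 2.5), ("lm", 0.5, 2.0), ("vpa", 0.5, 2.0)]:
    df[f"{test}_rci"] = rci(df[f"{test}_pre"], df[f"{test}_post"], pe, se)

# 'Decline' if any memory measure falls below a reliable-change cutoff
# (here -1.645, a one-tailed 90% cutoff used purely for illustration).
df["decline"] = (df[["ravlt_rci", "lm_rci", "vpa_rci"]] < -1.645).any(axis=1)
print(df["decline"].value_counts(normalize=True))
```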
Results:
The distribution of IC-CoDE phenotypes in our sample was as follows: 57% intact, 29% single-domain, 10% bi-domain, and 5% generalized impairment. 108 patients (31%) demonstrated post-surgical memory decline. Patients who underwent dominant temporal lobectomy were more likely to show post-surgical memory decline than those who underwent non-dominant temporal lobectomy. However, there was no significant difference in phenotype distribution between patients who underwent left versus right-sided resections; thus, analyses were conducted on the entire sample to increase power. Chi-square analyses revealed unique patterns of post-surgical memory decline across phenotypes, χ2 = 8.79, p = .032. The proportion of patients with memory decline was highest in the single-domain phenotype (39%), followed by the bi-domain phenotype (33%) and the intact phenotype (29%). In contrast, patients with generalized impairment were unlikely to show memory decline (.06%). Within the single-domain impaired phenotype, there were no differences between the specific domains impaired and memory decline. A logistic regression model was also significant: after controlling for surgery side, the IC-CoDE phenotypes significantly predicted the likelihood of a patient experiencing post-surgical memory decline, χ2 = 8.18, p = .043.
Conclusions:
In addition to the IC-CoDE providing a useful cognitive classification scheme in epilepsy, the IC-CoDE phenotypes appear helpful in identifying those at risk for post-operative memory decline. Previous literature has suggested that those with better pre-surgical cognition are generally at highest risk for cognitive decline. Our results generally follow this trend, but interestingly, patients with single domain impairment were at the highest risk of memory decline, even above those in the cognitively intact group. Future studies are important to confirm this pattern in other samples and examine additional contributing factors and underlying mechanisms that may influence risk of memory decline across these cognitive phenotypes.
White matter hyperintensities (WMH) are a radiological marker of small vessel cerebrovascular disease that are related to cognition and memory decline in aging and Alzheimer’s disease (AD). However, the mechanisms that link WMH to memory impairment and whether they interact with or act independently of AD pathophysiology are unclear. The transentorhinal cortex (BA35) is among the earliest anatomical regions to show tau deposition and subsequent atrophy, and baseline posterior WMH is related to longitudinal cortical thinning of the entorhinal cortex. However, it is unclear whether regional WMH are related to BA35 volume specifically, and whether this relationship is influenced by amyloid-β (Aβ) burden. We hypothesized that WMH in the vascular territory of the posterior cerebral artery (PCA), which perfuses both posterior and medial temporal lobe regions, would be associated with reduced BA35 volume and with lower memory in older adults independently of Aβ.
Participants and Methods:
114 older adults without dementia, aged 60 to 98 years (mean (SD) = 78.31 (11.02), 71 (62.8%) women), were included. Regional WMH volumes were derived from T2-FLAIR images using ANTs, a vascular territory atlas, and manual editing. Global Aβ was assessed with 18F-florbetapir PET, using SUVR of a cortical composite region (FBP mean SUVR) with a cerebellar reference region. Total transentorhinal (BA35) volume was derived from T1- and T2-weighted images using ASHS. To assess hippocampal pattern separation ability, an index of episodic memory, participants completed both object (MDT-O) and spatial (MDT-S) versions of a mnemonic discrimination task, with the lure discrimination index as the outcome. Using linear regressions, we first tested for associations among PCA-defined WMH, Aβ, BA35 volume, and MDT-S and MDT-O scores. We then tested whether the relationship between PCA-defined WMH and MDT-O performance was mediated by BA35 volume and whether this mediation was moderated by Aβ. All models adjusted for age, sex, and education.
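The mediation analysis could be implemented in several ways; one simple regression-based approach is a percentile bootstrap of the indirect effect (the a path times the b path), sketched below with hypothetical file and variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wmh_ba35_memory.csv")  # hypothetical analysis file
covars = "age + C(sex) + education"
rng = np.random.default_rng(0)

# Percentile-bootstrap estimate of the indirect effect of PCA-territory WMH
# on object mnemonic discrimination (MDT-O) through BA35 volume (a*b).
indirect = []
for _ in range(2000):
    boot = df.iloc[rng.integers(0, len(df), len(df))]
    a = smf.ols(f"ba35 ~ wmh_pca + {covars}", boot).fit().params["wmh_pca"]
    b = smf.ols(f"mdt_o ~ ba35 + wmh_pca + {covars}", boot).fit().params["ba35"]
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect = {np.mean(indirect):.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```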
Results:
PCA-defined WMH were related to higher FBP mean SUVR (b=0.287, p=0.042) and lower BA35 volume (b=-0.222, p=0.038). PCA-defined WMH were also negatively related to MDT-O performance (b=-0.229, p=0.044), but not to MDT-S (b=-0.171, p=0.118). FBP mean SUVR was not related to BA35 volume (b=-0.131, p=0.344) or MDT performance (MDT-S: b=-0.138, p=0.348; MDT-O: b=0.059, p=0.690). Furthermore, FBP mean SUVR did not interact with PCA-defined WMH to predict memory performance (interaction b=-0.039, p=0.973), nor BA35 volume (interaction b=-0.140, p=0.894). The association of PCA-defined WMH to MDT-O was fully mediated by BA35 volume (indirect effect b=-0.0005, 95% CI (-0.0014, -0.0003)). This mediation was not moderated by FBP mean SUVR (indirect effect b=-0.00001, 95% CI (-0.001, 0.001)).
Conclusions:
We found that PCA-defined WMH were related to memory performance in older adults, and this association was fully mediated by transentorhinal volume. While PCA-defined WMH were related to higher global Aβ burden, there was no interaction between PCA-defined WMH and Aβ on BA35 volume. These findings point to an amyloid-independent vascular pathway towards memory decline in aging and AD. Future work should examine whether the pathway linking PCA-defined WMH to transentorhinal cortex atrophy and subsequent memory decline is mediated by regional tau pathology.
Prior studies have determined that the Apolipoprotein-E (ApoE) ε4 allele confers a greater risk of developing Alzheimer's disease and of earlier onset of cognitive decline compared to individuals without the allele. Research has also recognized that traumatic brain injuries (TBIs) with loss of consciousness increase the risk for earlier development of the disease. This study sought to determine the moderating effect of TBI history on the ApoE-ε4 risk associated with earlier Alzheimer's disease onset.
Participants and Methods:
Participants included 9,585 individuals with autopsy-confirmed Alzheimer's disease pathology who had available ApoE genotype data, TBI data, and a clinician-determined age of cognitive decline, representing disease onset. A 2x3 factorial ANOVA was conducted to compare the main effects of ApoE-ε4 status and TBI history and the interaction effect between the two on disease onset. The analyses used three ApoE-ε4 groups and two TBI groups. The ApoE-ε4 groups were (1) no ApoE-ε4 allele, (2) one ApoE-ε4 allele, and (3) two ApoE-ε4 alleles; the TBI groups were (1) no TBI history and (2) positive TBI history.
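A minimal sketch of the 2x3 factorial ANOVA described above, using statsmodels; the dataset and column names are illustrative, not the study's actual file.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("autopsy_cohort.csv")  # hypothetical dataset

# 2 (TBI history) x 3 (number of ApoE-e4 alleles) factorial ANOVA on the
# clinician-determined age of cognitive decline onset.
model = smf.ols("onset_age ~ C(tbi_history) * C(apoe_e4_count)", data=df).fit()
print(anova_lm(model, typ=2))
```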
Results:
Results indicated a significant interaction effect between ApoE-ε4 status and TBI history. Secondary analyses determined the driving force behind the interaction was the effect of ApoE-ε4, which had a significant impact on the age of onset in both TBI groups, while TBI history only significantly impacted onset in individuals without an ApoE-ε4 allele.
Conclusions:
Contrary to prior research, these findings did not indicate TBI was significant in determining earlier onset. However, it is important to consider the large variability within the TBI group from the lack of differentiation between mild, moderate, and severe TBIs. Overall, these findings underline the greater risk and stronger impact that ApoE-ε4 poses for Alzheimer's disease onset compared to TBI. The results of this study emphasize the importance of evaluating ApoE-ε4 status for determining risk of earlier onset AD. Clinicians can better determine risk by considering patients' ApoE-ε4 status alongside TBI history.
Socioeconomic status (SES) has been recognized as an important factor in psychological research within the last few decades. Past literature recognizes that having lower SES can have a negative impact on many aspects of one’s health, especially in diseases related to brain aging. A recent avenue of research on SES and the brain-behavior relationship suggests that socioeconomic status can act as a moderator of brain activation during task performance. The hypothesis for this project was that there would be a negative correlation between brain activity and SES when controlling for participants’ age, sex, and performance on the episodic memory tasks, and a positive correlation between task performance and SES.
Participants and Methods:
One hundred healthy middle-aged adults from the Reference Ability Neural Network (RANN) study (53 male, 47 female; age M=48.0 +/- 7.55 years) performed three episodic memory fMRI tasks, which were studied in relation to SES and age. The tasks were the Logical Memory, Word Order, and Pair Associates tasks, which involved episodic memory for story details, the order of words presented, and word pairings, respectively. We quantified memory performance as average accuracy across the three tasks. We used FSL to preprocess the data and perform voxel-wise group analyses. All brain activation analyses were corrected for multiple comparisons using cluster thresholds in FSL.
Results:
The correlation between SES and memory performance was marginally significant (r=.188, p=.061). All tasks had areas of positively correlated activation for age. The Logical Memory task had multiple areas of brain activation that were positively correlated with age, particularly the lateral occipital cortex, lingual gyrus, and occipital fusiform gyrus, all areas that underlie visual processing. There were no areas of correlated brain activation for SES, sex, or task performance for the Pair Associates and Logical Memory tasks. Brain activation for the Word Order task in the left precuneus cortex, right middle frontal gyrus, left lateral occipital cortex, left occipital fusiform gyrus, and parts of the lingual gyrus was positively correlated with memory performance when controlling for age, sex, and SES.
Conclusions:
The hypothesis was not entirely supported by the results of this study, but the marginal association between SES and memory performance suggests that SES may affect memory performance in middle-aged adults. While we did not find a brain association with SES in this age group, we observed regions that underlie task performance. Further research could examine possible moderating effects of socioeconomic status on memory and executive function, using structural neuroimaging to further investigate the effects of SES on cognition.
Over 80% of hospitalized COVID-19 patients have neurological symptoms, including memory loss, attention difficulties, and trouble thinking clearly, that can last for months. The long-term neurological impact of the SARS-CoV-2 virus is unknown, and it remains to be seen whether it will create a surge in cases of dementia and cognitive decline years later, which is already a global public health challenge. Examining the cognitive effects of the virus will help with understanding its impact on the brain and inform treatment options. The goal of the present study was to examine cognitive performance among those who have had COVID-19 using smartphone-based cognitive assessments. Participants with a previous COVID-19 diagnosis (COVID+) were expected to have worse cognitive performance at baseline than those without COVID-19 (COVID-).
Participants and Methods:
Participants (n=23) with self-reported positive or negative COVID-19 status based on polymerase chain reaction or antigen testing were recruited from the Boston area. Inclusion criteria included access to a smartphone with an Android or iOS operating system, internet connectivity, and proficiency in English. Cognitive performance was measured using the Defense Automated Neurobehavioral Assessment (DANA) from AnthroTronix. Welch’s two-sample t-test was used to compare cognitive performance between those with and without COVID-19.
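The group comparison is a Welch's t-test; a minimal sketch, with a hypothetical data file and one illustrative DANA outcome column, is below.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("dana_baseline.csv")  # hypothetical export of DANA scores

# Welch's two-sample t-test (unequal variances) for one DANA outcome;
# the same call would be repeated for each subtest.
pos = df.loc[df["covid"] == 1, "procedural_rt"]
neg = df.loc[df["covid"] == 0, "procedural_rt"]
t, p = stats.ttest_ind(pos, neg, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}, mean difference = {pos.mean() - neg.mean():.2f}")
```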
Results:
The sample was composed primarily of COVID+ (59%), female (59%), and Caucasian (50%) participants who were generally well educated (77% with a bachelor’s degree) and had >1 COVID vaccination (95%). About 50% of the sample reported symptoms of depression and mild anxiety. Results did not indicate significant differences between COVID+ and COVID- groups at baseline: Simple Reaction Time (Immediate; M = 5.62; p = 0.81), Code Substitution (M = 1.25; p = 0.77), Procedural Reaction Time (M = -7.26; p = 0.49), Spatial Processing (M = -3.14; p = 0.50), Go No Go (M = -1.37; p = 0.89), Match to Sample (M = 2.00; p = 0.57), Memory Search (M = -2.62; p = 0.75), and Simple Reaction Time (Delayed; M = 2.99; p = 0.81).
Conclusions:
Results indicate that cognitive performance at baseline does not differ based on COVID status, emphasizing the need for examination of longitudinal cognitive performance. Future directions include examining the impact of COVID disease severity and reinfection on cognition.
Individuals with chronic pain frequently report diminished cognitive functioning. Prior cross-sectional studies have demonstrated strong associations between chronic pain and neurocognitive impairment, most notably in memory, attention, processing speed, and executive functioning. However, there is a paucity of research evaluating visual learning and memory abilities in this population. Further, while current practice standards advocate for the use of performance validity tests (PVTs) to assess the credibility of neuropsychological test performance, they have infrequently been incorporated into studies examining chronic pain samples, despite a higher observed rate of noncredible performance in the literature. This study aimed to compare visual learning and memory performance between a mixed neuropsychiatric (MNP) group and a chronic pain group in a validity-controlled sample.
Participants and Methods:
The study consisted of 371 adults referred for outpatient neuropsychological evaluation. Each patient was administered multiple PVTs, including, at minimum, one freestanding and four embedded PVTs. All patients were administered the Brief Visuospatial Memory Test-Revised (BVMT-R) as part of a comprehensive neuropsychological evaluation. Only patients classified as valid performers (<1 PVT failure; n=295) were included in the analyses (Pain: n=109; MNP: n=186). The overall sample was 69% female and racially diverse (22% non-Hispanic Black, 43% non-Hispanic White, 30% Hispanic, 3% Asian/Pacific Islander, and 2% other race/ethnicities), with a mean age of 46.8 years (SD=14.8) and mean education of 13.7 years (SD=2.7). Independent samples t-tests were performed to investigate the differences in visual learning and memory abilities between the chronic pain and MNP groups.
Results:
Chi-square analyses revealed significant differences between the pain and MNP groups on race, with more non-Hispanic White and Hispanic patients represented in the MNP group. There were also modest group differences in age and education. Patients in the chronic pain group scored lower on both BVMT-R Total T-Score (mean difference = 9.65T, p<.001) and BVMT-R Delayed Recall T-Score (mean difference = 8.97T, p<.001). Effect sizes were robust for both Total T-Score (d = 0.682) and Delayed Recall T-Score (d = 0.632). In contrast, the difference in BVMT-R Recognition Discriminability was not statistically significant.
Conclusions:
This study demonstrated significant differences in performance between mixed neuropsychiatric and chronic pain patients. Preliminary evidence indicated that chronic pain patients displayed lower visually mediated encoding and retrieval performance, although their recognition was comparable. Although this study targeted visual learning and retrieval, it is likely that the known impact of chronic pain on attention, working memory, and processing speed accounts for this relationship. Future studies should further elucidate these potential mechanisms to better inform clinical decision-making and the interpretation of neuropsychological test performance in patients with chronic pain.
Growing evidence indicates that COVID-19 infection adversely impacts cognitive functioning, with COVID-19 patients demonstrating high rates of objective and subjective cognitive impairments (Daroische et al., 2020; Miskowiak et al., 2021). Given the prevalence and potentially debilitating nature of post-COVID-19 cognitive symptoms, understanding factors that mitigate the impact of COVID-19 infection on cognitive functioning is paramount to developing interventions that facilitate recovery. Resilience, the ability to cope with and grow from challenges, has been associated with improved cognitive performance in healthy adults and linked to decreased perceived cognitive difficulties in post-COVID-19 patients (Connor & Davidson, 2003; Deng et al., 2018; Jung et al., 2021). However, resilience has not yet been examined as a potential attenuator of the relationship between COVID-19 and either perceived or objective cognitive function. This study aims to investigate the role of resilience as a protective factor against experience of cognitive function difficulties in COVID-19 patients by probing the role of resilience as a moderator of the relationship between COVID-19 diagnosis and cognitive functioning (both perceived and objective).
Participants and Methods:
Participants (mean age=36.93, 30.10% male) were recruited from British Columbia and Ontario. The sample included 53 adults who had never been diagnosed with COVID-19 and 50 adults diagnosed with symptomatic COVID-19 at least three months prior and not ventilated. Participants completed online questionnaires (n=103) to assess depression (the Center for Epidemiological Studies Depression Scale), anxiety (7-item Generalized Anxiety Disorder Scale), subjective cognitive functioning (The Subjective Cognitive Decline Questionnaire), and resilience (2-item Connor-Davidson Resilience Scale). Participants then completed neuropsychological tests (n=82) measuring attention, processing speed, memory, language, visuospatial skills, and executive function via teleconference, with scores averaged to create a global objective cognition score. Moderated multiple regression was employed to assess the impact of resilience on the relationship between COVID-19 diagnosis and both objective and perceived cognition, controlling for gender, ethnicity, income, age, anxiety, and depression.
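The moderated multiple regression described above amounts to an interaction term in an ordinary least squares model; the sketch below assumes hypothetical column names for the questionnaire and cognition variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("covid_resilience.csv")  # hypothetical questionnaire/test data

# Moderated regression: COVID-19 diagnosis x resilience interaction predicting
# global objective cognition, with the covariates named in the abstract.
# Centering resilience makes the interaction coefficient easier to interpret.
df["resilience_c"] = df["resilience"] - df["resilience"].mean()
model = smf.ols(
    "global_cognition ~ covid * resilience_c + C(gender) + C(ethnicity)"
    " + income + age + anxiety + depression",
    data=df,
).fit()
print(model.summary())
```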
Results:
Average scores in the COVID-19 group exceeded diagnostic cut-offs for clinical depression (M=16.67, SD=10.77) and mild anxiety (M=5.27, SD=4.99), while the control group scored below diagnostic thresholds for depression (M=11.96, SD=9.76) and mild anxiety (M=4.48, SD=5.07). Controlling for sociodemographic and mental health characteristics, COVID-19 diagnosis was not associated with objective global cognitive functioning (b=-.07, se=1.71, p=.624) or subjective cognitive functioning (b=.16, se=1.32, p=.12), nor was resilience associated with objective global cognitive functioning (b=.19, se=1.50, p=.44) or subjective cognitive functioning (b=-.02, se=1.09, p=.89).
Conclusions:
Findings indicate that COVID-19 patients may be at risk for depression and anxiety. Results of this study fail to support a relationship between COVID-19 and cognitive functioning beyond the impact of sociodemographic and mental health variables. Thus, the role of resilience as a protective factor against COVID-19 related cognitive difficulties could not be fully explored. However, findings should be considered in the context of study limitations, including a small sample size. Future research should employ larger samples to further examine the relationship between COVID-19 infection and cognition, focusing on mental health characteristics and resilience as potential risk and protective factors.
The goal-control model of functional impairment in dementia posits two distinct underlying mechanisms: decay of task goals (reduced task accomplishment) and poor control over goal execution (high error rates). Here we present a case series in which we explore the effects of a performance-based, functional intervention on two participants. Outcomes were evaluated using the goal-control framework.
Participants and Methods:
Two participants with dementia (CS: age 70, 14 years of education; EM: age 93, 18 years of education) completed neuropsychological tests (scored using age, education, and IQ-adjusted norms) and baseline testing with the Naturalistic Action Task (NAT; a validated performance-based task of everyday function including a Breakfast and Lunch task). The Virtual Kitchen (VK) was used to train, through repeated performance, either the Breakfast (CS) or Lunch (EM) tasks for 30 minutes (or 10 total repetitions) per day over 5 days. After VK training, participants performed the NAT Breakfast and Lunch tasks again to evaluate improvement on the trained and untrained tasks. Baseline and post-training NATs were scored for task accomplishment and errors by two coders observing video recordings. Z scores were derived by calculating accomplishment and error change scores for each participant relative to the mean and standard deviations of change scores from a cohort of 36 healthy controls (mean age: 73.3, SD: 6.44; mean education: 17.42, SD: 2.17).
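The change z-scores were obtained by standardizing each participant's pre-to-post change against the distribution of change scores in healthy controls; a small illustration with made-up numbers follows.

```python
import numpy as np


def change_z(pre, post, control_changes):
    """Pre-to-post change standardized against the mean and SD of change
    scores observed in healthy controls."""
    return ((post - pre) - np.mean(control_changes)) / np.std(control_changes, ddof=1)


# Illustrative numbers only: a participant whose accomplishment score rose
# 12 points, referenced to a small set of control change scores.
controls = np.array([0, 2, -1, 3, 1, 2, 0, 1])
print(round(change_z(pre=40, post=52, control_changes=controls), 2))
```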
Results:
Both participants exhibited similar cognitive profiles: high estimated IQ; low MMSE (total = 19 for both CS and EM; 1st percentile); anterograde amnesia, slowed processing speed, and impaired executive function; average scores on tests of attention, language, and self-reported depression. Informant report of daily functioning (FAQ) suggested that EM (FAQ=28) exhibited greater functional impairment than CS (FAQ=9). Both participants completed all VK training sessions. Z scores of the change from pre- to post-training showed significant increases in task accomplishment on the trained task (trained condition change z scores: EM = +27.69; CS = +6.06), but significantly less improvement or worse task accomplishment on the untrained task (untrained condition change z scores: EM = +4.06; CS = -13.69). The training did not reduce errors, as error rates increased for both participants on the trained task.
Conclusions:
The participants presented in this case study exhibited comparable cognitive profiles, including marked anterograde amnesia. Our results suggest that repeated training in a virtual context can improve specific aspects of functioning on real-life everyday tasks. Further, according to the goal-control framework, repeated practice reduces the decay of the task goal, as represented by greater task accomplishment, but does not improve executive control over task execution. Important future directions are to determine if people with different cognitive profiles will demonstrate different benefits from VK training and to examine if virtual training of personally relevant, everyday tasks can promote independent living and improve quality of life.
Improvements in treatment for non-CNS cancer have greatly improved survivorship, allowing increased attention to cancer- and treatment-related sequelae. Cognitive symptoms (cancer-related cognitive impairment, or CRCI) are reported by a large percentage of cancer survivors, and can have a clinically meaningful impact on educational, vocational, and social functioning, and thus overall quality of life. Better understanding of these concerns is therefore of critical importance, and is needed to guide treatment and potential prevention strategies. Neuropsychological studies over the past 40 years have demonstrated cognitive domains commonly affected in cancer patients treated with chemotherapy, but have also shown cognitive differences in patients not treated with systemic therapy and those receiving other types of treatment (e.g., hormonal therapies) relative to non-cancer control groups. More recently, structural and functional neuroimaging research has added to our understanding of the neural substrate of these cognitive symptoms. This course will describe various neuroimaging modalities used to investigate CRCI, including examination of grey and white matter volume and structural integrity, blood flow, brain activation during cognitive processing and at rest, and structural and functional connectivity. The presentation will also review how neuroimaging findings relate to objective and self-reported cognition and clinical and treatment factors, and discuss potential approaches currently being investigated to treat CRCI. Upon conclusion of this course, learners will be able to:
1. Explain commonly affected cognitive domains after non-CNS cancer and treatment
2. Discuss structural and functional brain changes related to cancer, chemotherapy, and other treatments
3. Describe treatment interventions being investigated to treat cancer- and treatment-related cognitive symptoms.
There is a wide variability in the neuropsychiatric presentation of mild traumatic brain injury (mTBI), and accurate diagnosis and treatment is complicated by within-condition heterogeneity and overlapping symptoms of common comorbidities (e.g., PTSD). Such diagnostic complexities can obfuscate clinical decision-making and lead to suboptimal treatment response. In contrast to traditional diagnostic categories, person-centered analysis methods create data-derived groupings wherein individuals within a cluster are similar and individuals across clusters are different. The current study sought to apply clustering to dimensional emotional and neuropsychological features in treatment-seeking Veterans with mTBI, with the goal of identifying more precise, homogeneous clinical profiles.
Participants and Methods:
Study participants were 190 Veterans with mTBI history participating in a clinical neuropsychological assessment of cognitive complaints (Mean age: 34.38, 89.6% male, average years of education: 13.14). Participants completed a diagnostic interview, neuropsychological tests, and symptom questionnaires (NBSI, PCL, BDI, BAI, AUDIT, PSQI). To identify clusters of similar neuropsychiatric presentations, we first conducted dimension reduction on data from the cognitive tests and self-report measures using principal components analysis. Second, cluster analysis and cluster validation were performed on the resultant principal components (R: kmeans, clusterboot, clusterValid) to find homogeneous subgroups of participants.
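The abstract names R tools (kmeans, clusterboot, clusterValid); a rough Python analogue of the same PCA-then-cluster workflow, using silhouette width to guide the choice of the number of clusters, is sketched below with hypothetical inputs.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

df = pd.read_csv("veteran_mtbi_measures.csv")  # hypothetical test/questionnaire matrix

# 1) Dimension reduction over standardized cognitive and symptom measures.
X = StandardScaler().fit_transform(df.drop(columns=["id"]))
components = PCA(n_components=5).fit_transform(X)

# 2) K-means on the component scores; silhouette width guides the choice of k
#    (the study also used bootstrapped cluster stability and the Dunn index).
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(components)
    print(k, round(silhouette_score(components, labels), 3))
```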
Results:
The clinical data were best represented by principal components reflecting anxious arousal, depressive cognitions, somatic post-concussive symptoms, reexperiencing and avoidance symptoms, and objective cognitive deficits. Cluster analysis using bootstrapping and cluster validity indices (e.g., silhouette width, Dunn index) indicated that a 6-subgroup solution was optimal (subgroups were labeled Group A-Group F). Group A was characterized by moderate levels across all dimension scores. Group B was characterized by elevated somatic post-concussive symptoms and cognitive deficits. Group C was characterized by intact cognitive performance and low somatic post-concussive symptoms. Group D was characterized by elevated depressive cognitions. Group E was characterized by high anxious arousal but low depressive cognitions and reexperiencing and avoidance. Group F was characterized by elevated reexperiencing and avoidance. The subgroups did not differ statistically on any demographic items, such as years of education, age, or gender. However, there were statistically significant differences across groups in performance validity failure (χ2(10) = 27.17, p=.002); Group B showed the highest rate of failure.
Conclusions:
Results demonstrate that phenotypically similar subgroups of individuals can be identified within treatment-seeking Veterans with mTBI. Data suggest that somatic post-concussive symptoms may be linked to cognitive deficits; however, the rate of validity failure indicates that neuropsychological test scores may not reflect true cognitive ability. In contrast to prior studies that treat mTBI as a unitary construct that accounts for symptoms, our data suggest that a nuanced evaluation yields highly diverse clinical presentations. Cluster analytic frameworks hold promise for better assessment and treatment planning for Veterans, as both patients and their treating clinicians would be greatly served by the ability to use common clinical assessment tools to better identify a given individual’s clinical needs. A critical next step is to validate subgroups using novel samples and data sources (e.g., neurobiology, genetics) and to determine if these subgroupings can be effectively utilized to personalize treatment assignment.
Individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in reward-based learning, which have important implications for behavioral regulation. Prior research has shown that these individuals show altered patterns of risky decision-making, which may be partially explained as a function of dysfunctional reactivity to rewards and punishments. However, research findings on the relationships between ADHD and punishment sensitivity have been mixed. The current study used the Balloon Analog Risk Task (BART) to examine risky decision-making in adults with and without ADHD, with a particular interest in characterizing the manner in which participants react to loss.
Participants and Methods:
612 individuals (Mage = 31.04, SDage = 78.77; 329 females, 283 males) were recruited through the UCLA Consortium for Neuropsychiatric Phenomics (CNP). All participants were administered the Structured Clinical Interview for DSM-IV-TR (SCID-IV), which provided diagnoses used for group comparisons between adults with ADHD (n = 35) and healthy controls (n = 577). A computerized BART paradigm was used to examine impulsivity and risky decision-making; participants also completed the Barratt Impulsiveness Scale (BIS-11), and ADHD participants completed the Adult Self-Report Scale-V1.1 (ASRS-V1.1). The BART presented two colors of balloons with differing probabilities of exploding, and participants were incentivized to pump the balloons as many times as possible without causing them to explode. The primary endpoint was "mean adjusted pumps", defined as the mean number of pumps across trials that did not end in explosion. An index of reactivity to loss was calculated as the difference between the mean adjusted pumps following an explosion and the mean adjusted pumps following trials in which the balloon did not explode.
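The two BART indices can be computed directly from trial-level logs; the sketch below assumes a hypothetical file with subject, trial, pumps, and explosion columns.

```python
import pandas as pd

trials = pd.read_csv("bart_trials.csv")  # hypothetical trial-level log with
# columns: subject, trial, pumps, exploded (0/1)

# Mean adjusted pumps: average pumps on trials that did NOT explode.
adjusted = trials[trials["exploded"] == 0].groupby("subject")["pumps"].mean()

# Loss reactivity: mean adjusted pumps after an explosion minus mean adjusted
# pumps after a non-exploded trial (more negative = larger pull-back after loss).
trials = trials.sort_values(["subject", "trial"])
trials["prev_exploded"] = trials.groupby("subject")["exploded"].shift(1)
post = trials[(trials["exploded"] == 0) & trials["prev_exploded"].notna()]
by_prev = post.groupby(["subject", "prev_exploded"])["pumps"].mean().unstack()
loss_reactivity = by_prev[1.0] - by_prev[0.0]
print(adjusted.describe(), loss_reactivity.describe())
```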
Results:
The ADHD and control groups did not differ on mean adjusted pumps across trials, but they did differ in their reactivity to explosion of balloons that followed the most pumps, incurring the greatest level of loss (F(1, 551) = 7.1, p < 0.01). Interestingly, ADHD participants showed a greater reactivity to loss on these balloons than controls (p < 0.05), indicating that they reduced their number of pumps following balloon explosions more than controls. For participants as a whole, there were small correlations between loss reactivity and scales of everyday impulsivity on the BIS-11 (ps < 0.05). For ADHD participants, loss reactivity was unrelated to symptoms of inattention but was significantly correlated with symptoms of hyperactivity/impulsivity (p = 0.01) and total ADHD symptoms (p < 0.05) on the ASRS-V1.1.
Conclusions:
In the context of a risky decision-making task, adults with ADHD showed greater reactivity to loss than controls, despite showing comparable patterns of overall performance during the BART. The magnitude of behavioral adjustment following loss was correlated with symptoms of hyperactivity/impulsivity in adults with ADHD, suggesting that loss sensitivity is clinically related to impulsive behavior in everyday life. These findings help to expand our understanding of motivational processing in ADHD and suggest new insight into the ways in which everyday symptoms of ADHD are related to sensitivity to losses and punishments.
Recent work has shown that dysfunctional brain EEG responses to anesthetic drugs can be an indicator of both preoperative cognitive impairment and postoperative delirium risk. However, since excessive anesthetic dosage can also cause abnormal EEG brain responses, it is unclear to what extent such abnormal brain EEG responses reflect latent neurocognitive impairment versus excessive anesthetic dosage. Further, it is unclear what mechanisms might underlie the link between phenotypes (such as delirium and cognitive impairment) and these abnormal neurophysiologic responses to anesthetic drugs.
Participants and Methods:
Dual-center prospective cohort design. A total of 139 older surgical patients from two academic centers underwent intraoperative EEG monitoring with the bispectral index (BIS) EEG monitor during anesthesia and surgery, and postoperative delirium screening by geriatrician interview (Duke cohort) or by trained research staff (Mt Sinai cohort). We developed the Duke Anesthesia Resistance Scale (DARS), defined as the average BIS EEG value divided by (2.5 minus the age-adjusted end-tidal anesthetic gas concentration). We then examined the relationship between the DARS and postoperative delirium risk using the Youden index to identify an optimal low DARS threshold for delirium risk, and we used multivariable logistic regression to control for potential confounders.
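The DARS definition and the Youden-based cutoff search can be expressed compactly; the sketch below uses hypothetical column names and scikit-learn's ROC utilities in place of whatever software the authors used.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve

df = pd.read_csv("eeg_delirium.csv")  # hypothetical per-patient summary

# DARS as described above: average BIS divided by
# (2.5 - age-adjusted end-tidal anesthetic gas concentration).
df["dars"] = df["mean_bis"] / (2.5 - df["age_adjusted_etac"])

# Youden-optimal cutoff: low DARS should flag delirium, so score by -DARS.
fpr, tpr, thresholds = roc_curve(df["delirium"], -df["dars"])
youden = tpr - fpr
cutoff = -thresholds[np.argmax(youden)]
print(f"Youden-optimal DARS cutoff ~ {cutoff:.1f}")
```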
Results:
Neither BIS scores nor inhaled anesthetic dosage differed significantly between patients with vs without postoperative delirium. Yet, patients with delirium had lower DARS scores than those who did not develop delirium (27.92 vs 32.88, p=0.015). A DARS threshold of 28.7 maximized the Youden index for the association between the DARS and delirium. In multivariable models adjusting for site (Duke vs Mt Sinai) and individual patient risk factors, DARS values <28.7 were associated with 3.79-fold increased odds (95% CI 1.63-9.10; p=0.03) of postoperative delirium. These results remained unchanged after adjusting for intraoperative medications including opioids, benzodiazepines, propofol, phenylephrine, and ketamine. Patients with structural/functional MRI or CSF biomarker evidence of preclinical/prodromal Alzheimer's disease and/or neurovascular pathology were more likely to show altered anesthetic-induced EEG activity patterns.
Conclusions:
Lower scores on a processed EEG-based scale of neurophysiologic resistance to anesthetic-induced brain activity changes were independently associated with nearly fourfold increased delirium risk. The altered anesthetic-induced brain EEG patterns in patients who go on to develop postoperative delirium may reflect latent preclinical/prodromal Alzheimer's disease and/or neurovascular pathology.
A recent review called for a more robust assessment of cannabis use (CU), including the amount and timing of recent use, to assess the neurocognitive effects of CU among people living with HIV (PWH) (Ellis et al., 2021). The current study addresses some of these issues by investigating between-group neurocognitive differences among healthy controls and PWH who differ in their cannabis use histories, using strict inclusion criteria, robust classification of CU, and an established neurocognitive test battery.
Participants and Methods:
Among this community sample of adults (N=309), 58 were classified as CU+/HIV+ (84.5% male), 76 as CU-/HIV+ (57.9% male), 86 as CU+/HIV- (58.1% male), and 89 as CU-/HIV- (53.9% male). Exclusion criteria included a history of past 12-month dependence, extensive lifetime dependence or significant use of illicit substances other than cannabis, severe or current mood or thought disorder, and other medical conditions that adversely impact neurocognitive functioning. Inclusion criteria for the CU+ groups included <30 days since last CU, >10 instances of CU in the last month, CU 3 times per month over the last 12 months, >1 year of CU, and >500 lifetime uses. CU parameters did not statistically differ between HIV+/CU+ and HIV-/CU+. The CU- groups’ inclusion criteria required no CU in the last 6 months, 196 lifetime uses, and no history of CU dependence. Lifetime CU did not statistically differ between CU-/HIV+ and CU-/HIV- groups. HIV+ groups did not differ significantly on HIV viral load in plasma or nadir CD4+ counts. Significant between-group differences included age, sex, years of education, and amount of alcohol and nicotine use within 12 months. The aforementioned sociodemographic and substance use variables that differed between groups were covariates in analyses. A battery of 10 neurocognitive measures was administered, with two measures per domain: learning, memory, motor, executive functioning, and processing speed. Global composite summary scores for overall neurocognitive performance were calculated by averaging mean T-scores for each neurocognitive domain. Data transformations were used to address any violations of statistical assumptions.
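The reference-group T-score standardization and domain-composite averaging described here (and in the Results) follow a standard recipe; a sketch with hypothetical task and domain names is below.

```python
import pandas as pd

df = pd.read_csv("hiv_cu_battery.csv")  # hypothetical raw task scores
tasks = ["hvlt", "bvmt", "pegboard", "stroop", "sdmt"]  # illustrative tasks only

# T-scores (M=50, SD=10) referenced to the CU-/HIV- group, as in the Results.
ref = df[(df["cu"] == 0) & (df["hiv"] == 0)]
for col in tasks:
    df[f"{col}_t"] = 50 + 10 * (df[col] - ref[col].mean()) / ref[col].std(ddof=1)

# Global composite: average of domain mean T-scores (domain membership illustrative).
domains = {
    "learning": ["hvlt_t", "bvmt_t"],
    "motor": ["pegboard_t"],
    "speed": ["sdmt_t"],
    "executive": ["stroop_t"],
}
domain_means = pd.DataFrame({d: df[cols].mean(axis=1) for d, cols in domains.items()})
df["global_t"] = domain_means.mean(axis=1)
print(df["global_t"].describe())
```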
Results:
To facilitate data reduction, neurocognitive task scores were standardized to T-scores using the mean and SD of the CU-/HIV- group. An omnibus model of between-group comparisons on global neurocognitive task performance revealed no significant differences, F(3) = .16, p = .923. Subsequent Tukey’s post hoc tests revealed no significant differences among the four groups. Results also revealed nonsignificant differences between groups in neurocognitive performance within each domain. However, the CU-/HIV- group performed significantly worse than the CU-/HIV+ group on the Executive Functioning domain, based on Tukey’s post hoc test.
Conclusions:
We found no significant global neurocognitive differences among groups; however, there was some evidence for domain-specific neurocognitive differences in executive functioning. This contrasts somewhat with existing literature on HIV and cannabis-associated neurocognitive deficits. Several factors may have contributed to this, including our relatively healthy PWH sample. Future analyses will examine interactive effects of HIV severity and severity of CU on neurocognition. This analysis will better determine who, among PWH, are most at-risk for cannabis-associated neurocognitive effects and what factors may exacerbate them.
Long-term forgetting rates may be more sensitive for detecting memory decrements compared to short-delay memory assessments (e.g., after 20-30 minutes). To date, much research has been performed on accelerated long-term forgetting (ALF) in epilepsy patients, but research in other patient groups is lacking. ALF may be promising in the field of cerebrovascular disease, as many of these patients experience cognitive complaints, yet do not show impaired performances on neuropsychological assessments.
Participants and Methods:
Here, I will present empirical findings on ALF in individuals after a TIA/minor stroke (n=30) and after stroke (n=91) using short- (20-30 min) and long-delay (1-week) memory testing.
Results:
After TIA/minor stroke, short-delay (20-30 min) memory testing was unimpaired, but 1-week delayed testing showed impaired performance compared to stroke-free controls. In the stroke group, ALF was present in 17% of patients relative to stroke-free controls and was more prevalent than rapid forgetting on short-delay memory testing.
Conclusions:
ALF is present in patients with cerebrovascular disease, despite normal acquisition rates. The relation with neuroimaging findings and the clinical relevance of these results will be discussed.
Obesity is associated with adverse effects on brain health, including increased risk for neurodegenerative diseases. Changes in cerebral metabolism may underlie or precede structural and functional brain changes. While bariatric surgery is known to be effective in inducing weight loss and improving obesity-related medical comorbidities, few studies have examined whether it may be able to improve brain metabolism. In the present study, we examined change in cerebral metabolite concentrations in participants with obesity who underwent bariatric surgery.
Participants and Methods:
35 patients with obesity (BMI > 35 kg/m2) were recruited from a bariatric surgery candidate nutrition class. They completed single-voxel proton (1H) magnetic resonance spectroscopy at baseline (pre-surgery) and within one year post-surgery. Spectra were obtained from a large medial frontal brain region. Tissue-corrected absolute concentrations for metabolites including choline-containing compounds (Cho), myo-inositol (mI), N-acetylaspartate (NAA), creatine (Cr), and glutamate and glutamine (Glx) were determined using Osprey. Paired t-tests were used to examine within-subject change in metabolite concentrations, and correlations were used to relate these changes to other health-related outcomes, including weight loss and glycemic control.
Results:
Bariatric surgery was associated with a reduction in cerebral Cho (t(34) = -3.79, p < 0.001, d = -0.64) and mI (t(34) = -2.81, p < 0.01, d = -0.47) concentrations. There were no significant changes in NAA, Glx, or Cr concentrations. Reductions in Cho were associated with greater weight loss (r = 0.40, p < 0.05), and reductions in mI were associated with greater reductions in HbA1c (r = 0.44, p < 0.05).
Conclusions:
Participants who underwent bariatric surgery exhibited reductions in cerebral Cho and mI concentrations, which were associated with improvements in weight loss and glycemic control. Given that elevated levels of Cho and mI have been implicated in neuroinflammation, reduction in these metabolites after bariatric surgery may reflect amelioration of obesity-related neuroinflammatory processes. As such, our results provide evidence that bariatric surgery may improve brain health and metabolism in individuals with obesity.
This study evaluated the relation between five-factor model (FFM) personality traits and intra-individual variability (IIV) in executive functioning (EF) using both subjective self-report and objective measures of EF.
Participants and Methods:
165 university participants (M=19 years old, SD=1.3; 55.2% White, 35.2% African American, 72.7% female) completed the Barkley Deficits in Executive Functioning Scale-Long Form (BDEFS), IPIP-NEO Personality Inventory, Trail-Making Test (TMT) Parts A and B, and the Neuropsychological Assessment Battery (NAB) EF module. A participant’s IIV was calculated as the standard deviation around their own mean performance. Objective EF IIV was computed from T-scores for performance on Trails A, Trails B, and the NAB EF module. Subjective EF IIV was computed from T-scores for performance across BDEFS domains.
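The IIV definition is simply the standard deviation of each participant's own T-scores across measures; a minimal sketch with hypothetical column names follows.

```python
import pandas as pd

df = pd.read_csv("iiv_ef.csv")  # hypothetical T-score table, one row per participant

# Intra-individual variability: the SD of each participant's own T-scores
# across measures (objective EF shown; the same logic applies to the BDEFS
# domain T-scores for subjective EF).
objective_cols = ["trails_a_t", "trails_b_t", "nab_ef_t"]
df["objective_iiv"] = df[objective_cols].std(axis=1, ddof=1)
print(df["objective_iiv"].describe())
```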
Results:
Pearson r correlations were used to evaluate the relation between subjective and objective IIV and FFM traits of personality. Subjective EF IIV was positively correlated with FFM neuroticism [r=.48; p<.001] and negatively correlated with FFM conscientiousness [r=-.43; p<.001], extraversion [r=-.18; p=.023] and agreeableness [r=-.22; p=.004]. There were no significant associations between FFM traits and objective EF IIV performance. There was additionally no significant relation between subjective EF IIV performance and objective EF IIV.
Conclusions:
Personality traits were associated with individual variability on a self-reported measure of EF but not on performance-based EF measures. These results suggest that IIV for the BDEFS was influenced by personality traits, particularly neuroticism and conscientiousness, and may reflect method variance. It was notable that IIV was not correlated between subjective and objective EF measures.
While attention-deficit/hyperactivity disorder (ADHD) symptoms, including inattention, hyperactivity, and impulsivity, are normally distributed within the population, features of ADHD have been associated with poor functional outcomes across various domains of life, such as academic achievement and occupational status. However, some individuals with even strong ADHD features show normal or above-average success within these functional domains. Executive dysfunction and emotion regulation abilities are associated with educational attainment and occupational status and may therefore explain some of the heterogeneity in functional outcomes in individuals with mild, moderate, and high levels of ADHD symptoms. In this study, we investigated whether emotion regulation strategy use (i.e., emotion suppression or cognitive reappraisal) and executive function abilities moderate the relationship between ADHD symptoms and occupational status and education attainment in adults.
Participants and Methods:
Data were collected from 109 adults aged 18 - 85 (M = 38.08, SD = 15.54; 70.6% female) from the Nathan Kline Institute Rockland Sample. All participants completed measures of ADHD symptoms (Conners Adult ADHD Rating Scale), emotion regulation strategy use (Emotion Regulation Questionnaire), and executive functioning (composite scores of inhibition, shifting and fluency from the standardized Delis-Kaplan Executive Function System). In this study, executive function abilities and emotion regulation strategy use were tested as potential moderators of the relationship between ADHD symptoms and functional outcomes using hierarchical regression models.
Results:
Several two- and three-way interactions predicting occupational status and educational attainment were observed. Educational attainment was predicted by hyperactivity and reappraisal (β = -0.26, p = .006); inattention, shifting, and reappraisal (β = -0.52, p = .029); inattention, shifting, and suppression (β = -0.40, p = .049); inattention, fluency, and reappraisal (β = 0.24, p = .038); hyperactivity, fluency, and reappraisal (β = 0.27, p = .034); and impulsivity, fluency, and reappraisal (β = 0.44, p = .004). Occupational status was predicted by inattention and reappraisal (β = -0.27, p = .032); hyperactivity and reappraisal (β = -0.26, p = .004); and impulsivity, fluency, and reappraisal (β = 0.35, p = .031). Fluency was positively associated with educational attainment when controlling for inattention and impulsivity.
Conclusions:
Consistent with the hypothesis, the associations between ADHD symptoms and both occupational status and educational attainment were moderated by the interaction between emotion regulation strategy use and executive function abilities. The observed interactions suggest that both occupational status and educational attainment may depend heavily on one’s intrinsic abilities and traits. Contrary to previous literature, we found no evidence that ADHD symptoms or emotion regulation strategies were independently associated with either educational attainment or occupational status, but this should be validated in a sample with greater representation of adults with clinically significant ADHD.