Emerging evidence suggests that individuals recovering from COVID-19 perceive changes to their cognitive function and psychological health that persist for weeks to months following acute infection. Although there is a strong relationship between initial COVID-19 infection severity and development of prolonged symptoms, there is only a modest relationship between initial COVID-19 severity and self-reported severity of prolonged symptoms. While much of the research has focused on more severe COVID-19 cases, over 90% of COVID-19 infections are classified as mild or moderate. Previous work has found evidence that non-severe COVID-19 infection is associated with cognitive deficits with small-to-medium effect sizes, though patients who were not hospitalized generally performed better on cognitive measures than did those who were hospitalized for COVID-19 infection. As such, it is important to also quantify subjective cognitive functioning in non-severe (mild or moderate) COVID-19 cases. Our meta-analysis examines self-reported cognition in samples that also measured objective neuropsychological performance in individuals with non-severe COVID-19 infections in the post-acute (>28 days) period.
Participants and Methods:
This study’s design was preregistered with PROSPERO (CRD42021293124) and used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist for reporting guidelines. Inclusion criteria were established prior to article searching and required peer-reviewed studies to have (1) used adult participants with a probable or documented diagnosis of non-severe (asymptomatic, mild, or moderate) COVID-19 who were in the post-acute stage (>28 days after initial infection); (2) used objective neuropsychological testing to document cognitive functioning; and (3) included a self-report measure of subjective cognition. At least two independent reviewers conducted all aspects of the screening, review, and extraction process. Twelve studies with three types of study design met full criteria and were included (total n=2,744).
Results:
Healthy comparison group comparison: Compared with healthy comparison participants, the post-COVID-19 group reported moderately worse subjective cognition (d=0.546 [95% CI (0.054, 1.038)], p=0.030). Severity comparison: When comparing hospitalized and not hospitalized groups, patients who were hospitalized reported modestly worse subjective cognition (d=-0.241, [95% CI (-0.703, 0.221)], p=0.30), though the difference was not statistically significant. Normative data comparison: When all non-severe groups (mild and moderate; k=12) were compared to the normative comparison groups, there was a large, statistically significant effect (d=-1.06, [95% CI (-1.58, -0.53)], p=0.001) for self-report of worse subjective cognitive functioning.
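The group contrasts above are standardized mean differences (Cohen's d) with 95% confidence intervals. As a minimal illustration of how such values are computed (this is a generic sketch with made-up inputs, not the authors' meta-analytic code, which would additionally involve study weighting and pooling):

```python
import math

def cohens_d_with_ci(m1, m2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups with an approximate 95% CI.

    Illustrative sketch: uses a pooled SD and a common normal-approximation
    standard error for d; meta-analytic software may differ in detail.
    """
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Approximate standard error of d for two independent groups
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical group means/SDs/sizes, purely for illustration
d, ci = cohens_d_with_ci(55.0, 50.0, 10.0, 10.0, 40, 40)
```

With equal SDs of 10 and a 5-point mean difference, this yields d = 0.5, a "medium" effect comparable in magnitude to the healthy-comparison contrast reported above.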
Conclusions:
There was evidence of subjective report of worse cognitive functioning following non-severe COVID-19 infection. Future work should explore relationships between objective neuropsychological functioning and subjective cognitive difficulties following COVID-19.
Dysnomia may be one of the earlier neuropsychological signs of Alzheimer’s disease (Cullum & Liff, 2014), making it an essential part of dementia evaluations. The Verbal Naming Test (VNT) is a verbal naming-to-definition task designed to assess possible dysnomia in older adults (Yochim et al., 2015) and has been used as an alternative to tasks that predominantly rely on picture-naming paradigms. The present study investigated the influences of age, educational level, cognitive diagnosis, educational quality, and race to examine whether race would remain a significant factor in VNT performance.
Participants and Methods:
Black (n=57) and White (n=127) participant data were collected during clinical neuropsychological evaluations, which included the VNT alongside other cognitive measures. A multiple regression including age, educational level, cognitive diagnosis, educational quality (via reading level), and race was used to investigate whether race would remain a significant predictor of test performance.
Results:
Results suggested that race was still a significant predictor (p = .003) of VNT scores despite efforts to control other sources of variance. Additionally, other cognitive measures such as WAIS-IV Block Design (p = .004) and D-KEFS Tower Test (p = .004) also showed statistically significant relationships with race in the same model, whereas verbal memory (CVLT) and verbal fluency (D-KEFS) did not. The NAB Naming analysis violated the assumption of homoscedasticity; therefore, results with the NAB Naming test were not further interpreted.
Conclusions:
These results suggest that race is a significant predictor of performance on some cognitive measures, including the VNT. However, it did not predict performance on verbal memory or verbal fluency. Future investigations of racial differences on neuropsychological test performance would benefit from consideration of variables that may account for discrepancies between White and Black examinees. Several proxy variables could include educational quality, acculturation, and economic status.
Executive functions (EFs) are considered to be both unitary and diverse functions with common conceptualizations consisting of inhibitory control, working memory, and cognitive flexibility. Current research indicates that these abilities develop along different timelines and that working memory and inhibitory control may be foundational for cognitive flexibility, or the ability to shift attention between tasks or operations. Very few interventions target cognitive flexibility despite its importance for academic or occupational tasks, social skills, problem-solving, and goal-directed behavior in general, and the ability is commonly impaired in individuals with neurodevelopmental disorders (NDDs) such as autism spectrum disorder, attention deficit hyperactivity disorder, and learning disorders. The current study investigated a tablet-based cognitive flexibility intervention, Dino Island (DI), that combines a game-based, process-specific intervention with compensatory metacognitive strategies as delivered by classroom aides within a school setting.
Participants and Methods:
Twenty children aged 6-12 years (M = 10.83 years) with NDDs and identified executive function deficits, and their assigned classroom aides (i.e., “interventionists”), were randomly assigned to either DI or an educational game control condition. Interventionists completed a 2-4 hour online training course and a brief, remote Q&A session with the research team, which provided key information for delivering the intervention such as game-play and metacognitive/behavioral strategy instruction. Fidelity checks were conducted weekly. Interventionists were instructed to deliver 14-16 hours of intervention during the school day over 6-8 weeks, divided into 3-4 weekly sessions of 30-60 minutes each. Baseline and post-intervention assessments consisted of cognitive measures of cognitive flexibility (Minnesota Executive Function Scale), working memory (Wechsler Intelligence Scales for Children, 4th Edn. Integrated Spatial Span), and parent-completed EF rating scales (Behavior Rating Inventory of Executive Function).
Results:
Sample sizes were smaller than expected due to COVID-19 related disruptions within schools, so nonparametric analyses were conducted to explore trends in the data. Results of the Mann-Whitney U test indicated that participants in the DI condition made greater gains in cognitive flexibility, with a trend towards significance (p = 0.115). After dummy coding for positive change, results also indicated that gains in spatial working memory differed by condition (p = 0.127). Similarly, gains in task monitoring trended towards a significant difference by condition.
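The nonparametric group comparison used above can be sketched in pure Python. This is a minimal, illustrative implementation with hypothetical gain scores; a real analysis would use a statistical package such as scipy.stats.mannwhitneyu, which also provides the p-value and tie corrections:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Minimal sketch: counts pairs (xi, yj) with xi > yj (ties count 0.5)
    and returns the smaller of the two U values. No p-value is computed;
    real analyses should use scipy.stats.mannwhitneyu.
    """
    u_x = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
              for xi in x for yj in y)
    u_y = len(x) * len(y) - u_x
    return min(u_x, u_y)

# Hypothetical cognitive-flexibility gain scores for the two conditions
di_gains = [3, 2, 4, 1, 3]
control_gains = [1, 0, 2, 1, 0]
u = mann_whitney_u(di_gains, control_gains)
```

Smaller U values indicate more separation between the groups' rank distributions; the statistic is then referred to its sampling distribution for a p-value.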
Conclusions:
DI, a novel EF intervention, may benefit cognitive flexibility, working memory, and monitoring skills in youth with EF deficits. Though there were many absences and upheavals within the participating schools related to COVID-19, it is promising to see differences in outcomes with such a small sample. This poster will expand upon the current results as well as future directions for the DI intervention.
Fatigue, which can be classified into physical and cognitive subtypes (Schiehser et al., 2012), is a common non-motor symptom in persons with Parkinson’s disease (PD) that has no clear treatment. Cognitive changes, also common in PD (Litvan et al., 2012), may impact how patients perceive fatigue (Kukla et al., 2021). Grit is a personality trait defined as perseverance and passion towards a long-term goal, and is associated with multiple positive outcomes such as lower fatigue levels in healthy individuals (Martinez-Moreno et al., 2021). However, scarce research has examined the relationship between grit and fatigue in persons with PD. Therefore, we aimed to investigate the relationship between fatigue (cognitive and physical) and grit, as well as the impact of cognitive status (i.e., cognitively normal vs. mild cognitive impairment [MCI]) on this relationship in non-demented individuals with PD.
Participants and Methods:
Participants were 70 non-demented individuals with PD who were diagnosed as either cognitively normal (n=20) or MCI (n=50) based on Level II of the Movement Disorder Society PD-MCI criteria. Participants completed the Modified Fatigue Impact Scale (MFIS), which consists of two subscales (cognitive and physical fatigue) that are combined for a total overall fatigue score. Participants also completed the Grit Scale, which consists of items such as ambition, perseverance, and consistency. ANOVAs were conducted to determine differences in grit between PD-cognitively normal vs PD-MCI groups. Correlations and multiple hierarchical regressions controlling for significant demographics (i.e., age, education, sex), mood (i.e., depression, anxiety) and disease variables (i.e., disease duration, Levodopa equivalent dosage) with backwards elimination were conducted to evaluate the relationship between grit and fatigue (MFIS total score and MFIS cognitive and physical fatigue subscales).
Results:
There was no significant difference in grit total scores between PD patients who were cognitively normal or had MCI (p = .336). Higher grit total scores predicted lower MFIS total (β = -.290, p = .005) and lower cognitive fatigue (β = -.336, p < .001) scores in the total sample, above and beyond relevant covariates as well as cognitive status. Grit scores were not significantly associated with physical fatigue (β = -.206, p = .066). Furthermore, cognitive status was not a significant predictor of fatigue scores in any of the models (all p’s > .28).
Conclusions:
Findings indicate that higher levels of grit are associated with lower levels of fatigue, specifically cognitive fatigue, in individuals with PD. These results held true for those who were cognitively normal or with MCI, suggesting that grit may impact fatigue in non-demented PD patients regardless of cognitive status. These findings underscore the importance of considering grit when assessing or treating fatigue, particularly cognitive fatigue, in persons with PD.
Dementia prevalence and its costs to the health system continue to rise, highlighting the need for comprehensive care programs. This study evaluates the Care Ecosystem Program (CE) for dementia (memory.ucsf.edu/Care-Ecosystem) in New Orleans, LA and surrounding areas.
Participants and Methods:
The sample consisted of persons with dementia (PWD) and caregiver (CG) dyads enrolled in the CE from February 2019 to June 2022. Participants had a dementia diagnosis, lived in the community, and had at least one emergency department (ED) visit or hospitalization in the year prior. Healthcare utilization data were collected through self-report and electronic medical records. Dementia rating scales (QDRS, NPI-Q) and caregiver wellbeing questionnaires (ZBI-12; PHQ-9; Self-Efficacy) were collected at baseline, 6 months, and 12 months. Dyads received monthly calls providing individualized care management. One-way repeated measures ANOVAs were performed to identify changes in utilization and caregiver wellbeing at 6 months and 12 months compared to baseline. Partial η2 effect sizes and post-hoc Bonferroni comparisons were calculated. Extreme healthcare utilization outliers were winsorized to the 95th percentile, and the significance level was set at p < .05.
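The outlier handling described above (capping extreme values at the 95th percentile) can be sketched in pure Python. This is an illustrative nearest-rank implementation, not the authors' code; statistical software typically interpolates percentiles and may also cap the lower tail:

```python
def winsorize_upper(values, pct=0.95):
    """Cap values above the given percentile at that percentile.

    Illustrative sketch using a nearest-rank percentile; packages such
    as scipy.stats.mstats.winsorize use different conventions.
    """
    ordered = sorted(values)
    # Nearest-rank index for the requested percentile
    idx = min(len(ordered) - 1, max(0, int(round(pct * len(ordered))) - 1))
    cap = ordered[idx]
    return [min(v, cap) for v in values]

# Hypothetical annual ED-visit counts with one extreme value
visits = [0, 1, 1, 2, 0, 3, 1, 2, 1, 14]
capped = winsorize_upper(visits)
```

Winsorizing keeps every dyad in the analysis while limiting the leverage of a few extreme utilizers on the ANOVA results.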
Results:
A total of 150 dyads completed the program. PWD's age averaged 81 years (SD=8); they were mostly female (65%), White (63%), and had at least a High School education or higher (88%). CG's age averaged 65 years (SD=11.5); they were predominantly female (77%), White (63%), and had more than 12-years of education (70%). Half of the CGs were adult children (50%), followed by spouse/partners (41%). The QDRS indicated mild-moderate dementia severity, PWD had on average five neuropsychiatric symptoms, and Alzheimer's Disease was the most frequent diagnosis (35%).
A statistically significant decrease occurred in ED visits [F(1, 115)=14.970, p<.001, η2=.115] from baseline to 6 months (MD=1.043, p<.001) and 12 months (MD=.621, p<.001), while an increase was noted when comparing 12-month to 6-month data (MD=.422, p<.001). A similar pattern was observed for hospitalizations [F(1,115)=19.021, p<.001, η2=.142], where admissions were reduced significantly compared to baseline (6-month MD=.483, p<.001; 12-month MD=.388, p<.001) and an increase was seen after the 6-month mark (MD=.095, p<.001). Caregiver self-efficacy significantly improved [F(1,115)=15.478, p<.001, η2=.119] from baseline to 6 months into the CE (MD=-1.457, p<.001) and was maintained a year after enrollment (MD=-1.474, p<.001). There were no differences in self-efficacy when comparing 6-month and 12-month data. Robust effect sizes were noted for all results reported above. No other caregiver wellbeing measures showed significant changes over the three time points.
Conclusions:
CE successfully reduces healthcare utilization and improves caregiver self-efficacy for PWD-CG dyads 6-months and 12-months after enrollment. The utilization increase noted from the 6-month to the 12-month mark does not surpass baseline rates. This pattern is also consistent with literature reporting that healthcare utilization rises with the progression of dementia. More research is needed to identify potential moderating factors in the relationship between dementia progression and utilization. Future research will also benefit from including control groups to further understand the impact of comprehensive care programs for dementia.
Choice response time (RT) increases linearly with increasing information uncertainty, which can be represented externally or internally. Using a card-sorting task, we previously showed that Alzheimer’s disease (AD) dementia patients were more impaired relative to cognitively normal older adults (CN) under conditions that manipulated internally cued rather than externally driven uncertainty, but that study was limited by a between-subjects design that prevented us from directly comparing the two uncertainty conditions. The objective of this study was to assess internally cued and externally driven uncertainty representations in CN and mild cognitive impairment (MCI) patients.
Participants and Methods:
Older participants (age > 60 years; N=49 CN, N=33 MCI patients) completed a card-sorting task that separately manipulated externally cued uncertainty (i.e., the number of sorting piles with equal probability of each stimulus type) or internally cued uncertainty (i.e., the probability of each stimulus type with fixed number of sorting piles) at three different uncertainty loads (low, medium, high). Exploratory analyses separated MCI patients by etiology into possible/probable cortical neurodegenerative process (i.e., AD, frontotemporal dementia; N=13) or non-neurodegenerative process (i.e., vascular, psychiatric, sleep, medication effect; N=20).
Results:
CN and MCI patients maintained a high level of accuracy on both tasks (M accuracy > .94 across conditions). MCI patients performed more slowly than CN on the externally and internally cued tasks, and both groups showed a significant positive association between uncertainty load and RT (p’s < .05). There was a group x load x uncertainty condition interaction (p = .05). For CNs, the slope of the linear association between load and RT was significantly steeper in the externally cued compared to internally cued condition. For MCI patients in contrast, RTs increased with load to a similar degree in both conditions. Exploratory analyses showed the MCI-neurodegenerative patients were significantly slower than MCI-nondegenerative and CN (p < .001). While the group x load x condition interaction was significant when comparing all three groups (p < .05), this was driven by the differences between CN and MCI patients described above; the MCI-neurodegenerative and non-neurodegenerative groups did not significantly differ in the strength of the RT-load association between the externally or internally cued conditions.
Conclusions:
Overall, CN participants showed greater RT slowing with increasing load of externally driven than internally cued uncertainty. Though they were slower than CNs, MCI patients (even those with a possible/probable cortical neurodegenerative condition) were able to accurately perform an internally cued uncertainty task and did not show differential slowing compared to an externally driven task. This provides preliminary evidence that internal representations of probabilistic information are intact in patients with MCI due to a neurodegenerative condition, meaning they may not depend on cortical processes. Future work will increase the sample sizes of the MCI-neurodegenerative and non-degenerative groups.
Drawing on the National Alzheimer’s Coordinating Center (NACC) Uniform Data Set (UDS), this study aimed to investigate the direct and indirect associations between vascular risk factors/cardiovascular disease (CVD), pharmacological treatment (of CVD), and white matter hyperintensity (WMH) burden on overall cognition and decline trajectories in a cognitively diverse sample of older adults.
Participants and Methods:
Participants were 1,049 cognitively diverse older adults drawn from a larger NACC data repository of 22,684 participants whose data was frozen as of December 2019. The subsample included only participants who were aged 60-97 (56.7% women) who completed at least one post-baseline neuropsychological evaluation, had medication data, and both T1 and FLAIR neuroimaging scans. Cognitive composites (Memory, Attention, Executive Function, Language) were derived factor analytically using harmonized data. Baseline WMH volumes were quantified using UBO Detector. Baseline health screening and medication data was used to determine overall CVD burden and total medication. Longitudinal latent growth curve models were estimated adjusting for demographics.
Results:
More CVD medication was associated with greater CVD burden; however, no direct effects of medication were found on any of the cognitive composites or on WMH volume. No direct effects of CVD burden on cognition (overall or rate of decline) were observed; instead, greater CVD burden had small but significant negative indirect effects on Memory, Attention, Executive Functioning, and Language (all p’s < .01) after controlling for CVD medication use. Whole-brain WMH volume mediated this relationship, as it did the indirect effect of baseline CVD on 6-year rate of decline in Memory and Executive Function.
Conclusions:
Findings from this study were generally consistent with previous literature and extend extant knowledge regarding the direct and indirect associations between CVD burden, pharmacological treatment, and neuropathology of presumed vascular origin on cognitive decline trajectories in an older adult sample. Results reveal the subtle importance of CVD risk factors on late-life cognition even after accounting for treatment and WMH volume, and highlight the need for additional research to determine sensitive windows of opportunity for intervention.
Parkinson’s disease (PD) affects patients’ quality of life, but the comorbidity of PD and impulse control disorder (ICD), which has an average prevalence of 23%, can further disrupt quality of life for patients and their caregivers. The effects of ICD in PD on brain morphology and cognition have been little studied. Thus, this study investigated differences in the evolution of cognitive performance and brain structures between PD patients with ICD (PD-ICD) vs. without ICD (PD-no-ICD).
Participants and Methods:
Parkinson’s Progression Markers Initiative (PPMI) data of 58 patients with idiopathic PD, including their MRI data at baseline and three years later, were analyzed. The MRIs were processed with FreeSurfer (7.1.1) to extract cortical volumes, areas, thicknesses, curvatures, and folding indices, as well as volumes of subcortical segmentations. All participants underwent cognitive evaluations. The Questionnaire for Impulsive-Compulsive Disorders in Parkinson’s Disease was used to differentiate those with at least one ICD from those without any ICD. Twelve of the 58 patients had an ICD at their first visit and 19 had an ICD at their visit three years later. There was no significant difference between PD-ICD and PD-no-ICD with respect to sex, use of overall medication, age, age of onset, age at diagnosis, years of education, or Montreal Cognitive Assessment score. Two-way mixed ANOVAs were performed for each neuropsychological test and brain structure extracted from the MRIs, with the time of the visit as the repeated independent variable (within participants) and the presence or absence of an ICD as the other independent variable (between participants).
Results:
The mixed ANOVAs revealed that PD-ICD patients’ performance declined after three years on the Hopkins Verbal Learning Test delayed recall and the Symbol Digit Modalities Test, while PD-no-ICD patients’ performance improved. A whole-brain analysis showed that PD-ICD had a significant decrease in total right cortical area after three years in comparison to PD-no-ICD. Specific brain structures also underwent significant changes over three years. Cortical changes in PD-ICD were: (1) increased surface area in the left temporal parahippocampal region and (2) decreased surface areas of the right insula, right middle and superior temporal regions, left occipital lingual region, and left cingulate isthmus. Furthermore, in the subcortical nuclei, PD-ICD showed (1) increased volumes of the paratenial thalamic nucleus and whole right amygdala and (2) decreased volumes of the right amygdalar basal nucleus and thalamic ventromedial nucleus.
Conclusions:
This study suggests that PD patients who also have ICD might be prone to develop over three years: (1) significant changes in cognitive performance (memory, attention), (2) morphological changes in the amygdala and thalamic nuclei and (3) significant atrophy and area shrinkage in the temporal and insula regions.
Arachnoid cysts are fluid-filled sacs thought to be a developmental abnormality which form as a result of splitting or duplication of the arachnoid membrane. In most cases, arachnoid cysts are congenital and asymptomatic throughout an individual’s life. Rarely, arachnoid cysts develop because of head injury, intraventricular hemorrhage of prematurity, presence of a tumor, infection or surgery on the brain. Intracranial cysts are typically incidental brain imaging findings and most commonly located in the middle fossa, the suprasellar region, and the posterior fossa. In cases where the cyst enlarges significantly individuals may experience symptoms of increased intracranial pressure, mass effects, seizures, nausea and vomiting, focal neurological deficits, or hydrocephalus. This presentation compares the differing symptom presentation of two individuals with medically confirmed arachnoid cysts -- one in the middle cranial fossa region (Patient A) and the other in the posterior cranial fossa region (Patient B).
Participants and Methods:
The two patients were referred to a private practice neuropsychology clinic for neuropsychological assessment. Patient A was a 39-year-old, right-handed, married Syrian male with 12 years of education, unemployed at the time of testing. Changes in cognition, behavior, and personality were reported for Patient A approximately two years after a known cerebrovascular accident. Patient B was a 48-year-old, left-handed, married Caucasian male with 16 years of education, on disability due to his medical condition. Patient B reported severe memory impairment, speech and language deficits, variable attention, executive dysfunction, impaired gait with falls, emotional dysregulation, and sleep difficulties. He was diagnosed with bipolar disorder and alcohol use disorder, in remission for 9 years.
Results:
Neuropsychological testing results for Patient A were not valid due to initiation difficulties, paranoia about the testing, and consequent limited engagement in the process. Predominant symptoms were consistent with negative symptoms of schizophrenia (i.e., avolition, abulia, and diminished emotional expression); no positive symptoms were observed or reported. His speech was limited; he lacked spontaneous speech and only responded to direct questions. His informant completed a measure assessing pre/post changes in frontal systems, which indicated significant increases in apathy and executive dysfunction. Neuropsychological results for Patient B revealed mild to severe impairment in aspects of executive functioning, memory, processing speed, visual attention, expressive language, bilateral manual dexterity, and manual motor strength, more consistent with subcortical neurological disease. Self-report and informant data revealed significant difficulties with functional abilities, pre/post changes in frontal systems (apathy, disinhibition, and executive dysfunction), sleep efficiency and daytime fatigue, and psychological distress (anxiety and depressive symptoms).
Conclusions:
The present case analysis illustrates the importance of neuropsychology in identifying and tracking the nature of symptoms associated with neuroimaging-confirmed arachnoid cysts. This case analysis is unique in that it highlights the complexities of differing symptom phenotypes of the same condition depending on the location of the cyst. Surgical intervention, usually by draining the cyst directly or implanting a shunt, is typically recommended for symptomatic patients, and that course of treatment was suggested to both patients. Treatment recommendations targeting psychosocial and functional difficulties should also be considered.
Most emotion perception assessments were developed in western societies using English terms and Caucasian faces, so the extent to which they are cross-culturally valid is in question. To resolve this, understanding the mechanisms of cultural variation is key. In the past half-century, cross-cultural differences in perceiving facial emotions have been consistently reported and discussed, advancing knowledge of both theoretical and practical interest. However, as these studies are heterogeneous in the questions asked and methods used, without understanding how they relate to one another we cannot provide a clear answer to the simple question: why do people from different cultures perceive facial emotions differently? This limitation represents a bottleneck for adapting western clinical assessments cross-culturally to suit the increasing globalisation of research and testing. To address this issue, we conducted a systematic review aiming to reveal the effect of culture on emotion perception in past cross-cultural studies of healthy people. We expect this review to bridge findings in basic research and clinical application.
Participants and Methods:
The systematic review followed the framework outlined in Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We searched five databases using three groups of keywords. We included all peer-reviewed original studies that 1) conducted cross-cultural comparison in facial emotion perception with healthy adults and 2) used a design that allowed identifying specific mechanisms to explain cultural variations.
The qualitative data synthesis included three steps: 1) categorising eligible studies according to the type of cross-cultural differences they investigated, 2) summarising the findings of each cluster, and 3) summarising the mechanisms revealed by the findings.
Results:
We found the 122 eligible articles clustered into five groups that investigated 1) how race and in-group and out-group status affected facial emotion perception; 2) cultural differences in using context to identify facial expressions; 3) cultural differences in emotion conceptualisation and how they affected facial emotion perception; 4) cultural differences in interpreting facial muscle configurations; 5) how culture interacted with the inference making process.
Seven mechanisms underlying cultural variations in facial emotion perception were revealed. These are facial emotion templates, emotion conceptualisation, in/out-group differentiation, information surveying strategies, belief that expressers are independent agents, reliance on the face and other emotion expressing channels, and stereotypes. The relative importance of these factors may depend on the cultures chosen to compare and the situational settings that affect how they work together in real life.
Conclusions:
This review, for the first time, systematically addresses the mechanisms underlying cross-cultural differences in facial emotion perception. Besides advancing knowledge about this rapidly growing area, it guides what needs to be considered when designing new tests, adapting existing tests, and assessing the risk of bias brought about by cross-cultural issues.
A systematic approach is vital for adapting neuropsychological tests developed and validated in western, monocultural, educated, English-speaking populations. However, rigorous and uniform methods are often not implemented when adapting neuropsychological tests and cognitive screening tools across different languages and cultures. This has serious clinical implications. Our group has adapted the Addenbrooke’s Cognitive Examination (ACE) III for the Bengali-speaking population in India. We have taken a ‘culture-specific’ approach to adaptation and illustrate this by describing the process of adapting the ACE III naming sub-test, with a focus on selecting culturally appropriate and psychometrically reliable items.
Participants and Methods:
Two studies were conducted in seven phases to adapt the ACE III naming test. Twenty-three items from the naming test in the English and the different Indian ACE-R versions were administered to healthy, literate, Bengali-speaking adults to determine image agreement, naming, and familiarity of the items. Eleven items were identified as outliers. We then included 16 culturally appropriate items that were semantically similar to the items in the selected ACE-R versions, of which 3 were identified as outliers. The final corpus of 24 items was administered to 30 patients with mild cognitive impairment, Alzheimer’s disease, or vascular dementia, and to 60 healthy controls matched for age and education, to determine which items best discriminated between patients and controls and to examine their difficulty levels.
Results:
The ACE III Bengali naming test, with an internal consistency of .76, included 12 psychometrically reliable, culturally relevant high naming-high familiarity and high naming-low familiarity living and non-living items. Item difficulty ranged from .47 to .88, and discrimination indices exceeded .44.
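The item statistics reported above follow classical test theory. A minimal sketch of the two computations, under the common convention of comparing the top and bottom 27% of examinees for the discrimination index (the abstract does not specify its exact method, and the data below are illustrative, not the actual ACE III Bengali responses):

```python
# Classical item analysis: difficulty (proportion correct) and a
# discrimination index (upper-group minus lower-group pass rate).
# Responses and totals below are hypothetical illustration data.

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (0/1 scores)."""
    return sum(responses) / len(responses)

def discrimination_index(responses, total_scores, fraction=0.27):
    """Difference in item pass rate between top and bottom scorers.

    Examinees are ranked by total test score; the top and bottom
    `fraction` (27% is a common convention) are compared.
    """
    n = max(1, round(len(responses) * fraction))
    ranked = sorted(zip(total_scores, responses), key=lambda t: t[0])
    lower = [r for _, r in ranked[:n]]
    upper = [r for _, r in ranked[-n:]]
    return sum(upper) / n - sum(lower) / n

# Hypothetical responses for one item across 10 examinees
item = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
totals = [20, 22, 9, 18, 11, 23, 19, 8, 21, 24]
print(item_difficulty(item))            # 0.7
print(discrimination_index(item, totals))  # 1.0
```

A difficulty near .5 with a high discrimination index, as in this toy case, is the profile item-selection procedures typically favor.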
Conclusions:
A key question for test development/adaptation is whether to aim for culture-broad or culture-specific tests. Either way, a systematic approach to test adaptation increases the likelihood that a test is appropriate for the linguistic/cultural context in which it is intended to be used. Adapting neuropsychological tests with a familiarity-driven approach helps reduce cultural bias at the content level. Coupled with appropriate item-selection statistics, it helps improve the validity of adapted tests and ensures cross-cultural comparability of test scores both across and within nations.
People with Korsakoff syndrome (KS) experience severe neuropsychological and neuropsychiatric complications following vitamin B1 deficiency, predominantly due to alcoholism. KS often presents with neuropsychological symptoms such as problems in episodic memory, executive functioning, and social cognition. Common neuropsychiatric symptoms in KS are disorders of affect, confabulations, anosognosia, and apathy. Apathy can be defined as a pathological lack of goal-directed behaviors, goal-directed cognitions, and goal-directed emotions. Patients with KS have an increased risk of cerebrovascular comorbidity, and cerebrovascular accidents are known to increase the risk of developing apathy. Apathy in KS can undermine patients' ability to live autonomously, often making 24-hour care a necessity. To date, little research on apathy in KS has been published. Our aim was to assess apathy in Korsakoff patients with and without cerebrovascular comorbidity.
Participants and Methods:
General apathy and related subconstructs, such as judgment and decision-making skills, emotional blunting, and the intention to perform pleasurable activities, were studied in fifteen KS patients, fifteen KS patients with additional cerebrovascular comorbidity, and fifteen healthy controls. The first responsible caregiver of each patient filled in the Apathy Evaluation Scale and the Scale for Emotional Blunting. An examiner administered the interview-based Judgment scale of the Neuropsychological Assessment Battery to the KS patients, and each KS patient filled in the self-report section of the Pleasurable Activities List. Both KS patient groups received 24-hour care in a facility specialized in Korsakoff syndrome.
Results:
Our study found higher levels of caregiver-rated general apathy in both KS patient groups compared to healthy controls. No difference was found between the KS patient groups and the healthy control group on the self-reported section of the Pleasurable Activities List, which might suggest intact intrinsic motivation in KS patients. However, a discrepancy was found between self-reported activity levels and proxy-reported levels of apathy. KS patients with cerebrovascular comorbidity showed increased emotional blunting compared to KS patients without cerebrovascular comorbidity and to healthy controls. Decreased judgment and decision-making skills were found in both patient groups compared to healthy controls, with no difference between KS patients with and without cerebrovascular comorbidity.
Conclusions:
Our findings suggest that people with Korsakoff syndrome experience more general apathy than healthy controls. Both patient groups showed decreased judgment and decision-making skills, and KS patients with cerebrovascular comorbidity showed increased emotional blunting. Intrinsic motivation appeared intact in KS patients. Cerebrovascular comorbidity in KS thus carries a risk of developing emotional blunting. Our findings show that apathy greatly affects people with KS, and further research is warranted to improve care for this complex patient population.
Poor mood and quality of life are common among patients with medically intractable seizures. Many of these patients are not candidates for seizure-focus resection and continue to receive standard medical care. Responsive neurostimulation (RNS) has been an effective approach to reducing seizure frequency in nonsurgical candidates. Previous research using RNS clinical trial participants demonstrated improved mood and quality of life when patients received RNS implantation earlier in their medically resistant epilepsy work-up (Loring et al., 2021). We aimed to describe levels of depression and quality of life in adults with medically resistant epilepsy, treated with RNS, presenting to an outpatient clinic.
Participants and Methods:
This pilot study was conducted among 11 adult epilepsy patients treated with RNS at the epilepsy specialty clinic at Baylor College of Medicine. Ages ranged from 18-56 (M=32.01, SD=12.37), with a mean education of 12.43 years (SD=0.85). The majority of participants identified as White (72.2%; Hispanic/Latino/a, 14.3%; Other, 7.1%). We also present preliminary pre- and post-RNS results for a subset of 4 patients for whom pre- and post-implantation data were available. Depression symptoms were assessed with the Beck Depression Inventory, 2nd Edition (BDI-II), and quality of life with the Quality of Life in Epilepsy inventory (QoLiE-31).
Results:
Patients reported minimal symptoms of depression (M=5.45, SD=4.03) and good overall quality of life (M=71.18, SD=14.83) after RNS. Participants’ overall quality of life scores ranged from 50 to 95 (100=better quality of life). The QoLiE-31 showed high scores on the emotional wellbeing (M=69.45, SD=14.56) and cognitive functioning (M=65.36, SD=16.66) domains. Post-hoc analysis revealed a significant difference in the cognitive functioning domain of the QoLiE-31 before (M=44.75, SD=12.58) and after (M=51.0, SD=11.58) RNS implantation (t(3)=-3.78, p=0.016). Additionally, the overall QoLiE-31 score approached statistical significance when comparing pre-RNS (M=44.75, SD=9.29) to post-RNS (M=49.75, SD=11.62; t(3)=-2.01, p=0.069). No significant differences were evident on the seizure worry, energy/fatigue, medication effects, or social functioning domains of the QoLiE-31 before and after RNS treatment.
Conclusions:
These pilot results suggest low levels of depression in this population post-RNS implantation. Additionally, there is preliminary evidence of improved patient-rated cognitive functioning and overall quality of life. While the study population is small, the results have important implications for patients with intractable epilepsy, including those for whom surgical resection may not be possible. Future studies with samples large enough to examine factors that moderate and mediate changes in mood and quality of life post-RNS will be important.
Process-based measures of verbal learning, such as the recently described learning ratio (LR; Hammers et al., 2022), may add valuable data to neuropsychological assessment. Women tend to have higher episodic verbal memory ability than men at all ages, including older adulthood (Golchert et al., 2019; Maitland et al., 2004). However, it is unclear whether gender is related to the process of learning, as quantified through measures of learning slope and ratio. To date, only one study has examined this: Hammers et al. (2021) found no gender differences on LR in the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS); further study is therefore necessary. We examined whether men and women differed in LR, learning over time (LOT), and raw learning slope (RLS) in a healthy older adult sample, as well as whether these learning process variables predicted delayed memory equally for men and women.
Participants and Methods:
203 cognitively healthy community-dwelling adults aged 50 and above (mean age 67.7; 133 women) were drawn from a larger archival database; all were administered the RBANS in the context of other studies. LR, LOT, and RLS were calculated from the List Learning task. We examined whether men and women differed on these learning process measures. We then examined whether the process measures differentially predicted performance on list recall and the delayed memory index (DMI) of the RBANS for men and women.
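Two of the process scores above have straightforward arithmetic definitions. A sketch under the general formulations in the learning-slope literature (assumed, not taken verbatim from this abstract): RLS is the raw gain from the first to the last learning trial, and LR scales that gain by the room left to improve after trial 1. LOT is omitted because its computation varies across studies. Trial scores below are illustrative.

```python
# Sketch of two list-learning process scores; the formulas follow
# common definitions in the learning-slope literature (assumption).

def raw_learning_slope(trial_scores):
    """RLS: raw gain from the first to the last learning trial."""
    return trial_scores[-1] - trial_scores[0]

def learning_ratio(trial_scores, max_score):
    """LR: raw gain divided by the room left to improve after trial 1."""
    room = max_score - trial_scores[0]
    return (trial_scores[-1] - trial_scores[0]) / room if room else 0.0

# Hypothetical 10-word list learned over 4 trials
trials = [4, 6, 8, 9]
print(raw_learning_slope(trials))   # 5
print(learning_ratio(trials, 10))   # 5/6 ≈ 0.83
```

Scaling by available headroom is what makes LR less dependent on trial-1 performance than a raw slope, which is one rationale for examining it separately.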
Results:
Men and women did not differ in age or years of education. After accounting for age and education, there were no gender differences on LR (p=.455) or RLS (p=.502) but LOT was lower in women (p=.013).
LR was equally predictive of list recall across genders (p<.001 for LR; p=.21 for gender). Correlations between LR and list recall were r=.65 (p<.001) for men and r=.56 (p<.001) for women. Both LR (p<.001) and gender (p=.008) predicted DMI but the interaction was nonsignificant. Correlations between LR and DMI were r=.52 for men (p<.001) and r=.46 for women (p<.001).
RLS predicted list recall equally across genders (p<.001 for RLS; p=.07 for gender; p=.18 for interaction). Correlations between RLS and list recall were r=.43 for men (p<.001) and r=.23 for women (p=.008). RLS (p<.001) and gender (p=.002; p=.19 for interaction) predicted DMI scores. Correlations between RLS and DMI were r=.31 for men (p=.008) and r=.21 for women (p=.015).
LOT predicted list recall equally across genders (p<.001; p=.97 for gender; p=.80 for interaction). Correlations between LOT and list recall were r=-.50 for men (p<.001) and r=-.60 for women (p<.001). LOT also predicted DMI equally across genders (p<.001; p=.084 for gender; p=.159 for interaction). Correlations between LOT and DMI were r=-.46 for men (p<.001) and r=-.49 for women (p<.001).
Conclusions:
Of the three process variables, LR was the only one that did not show gender differences and was related to delayed memory outcomes with medium to large effect across both genders. Results suggest that LR can be used consistently across genders. As this sample consisted of healthy, independently-living older adults, future study should examine LR by gender in MCI and dementia samples.
Combat exposure is associated with higher rates of depressive symptoms, including anhedonia (i.e., a reduced ability to seek and experience rewards) and feelings of social disconnectedness. While these symptoms are commonly documented in combat-exposed Veterans following deployment, the cognitive mechanisms underlying this pathology are less well understood. Computational modeling can provide detailed mechanistic insights into complex cognition, which may be particularly useful for understanding how social reward processing is altered following combat exposure. Here, we use a Bayesian learning model framework to address this question.
Participants and Methods:
Thirty-three Operation Enduring Freedom (OEF)/Operation Iraqi Freedom (OIF)/Operation New Dawn (OND) Veterans (25 male, 8 female) aged 18-65 years (M = 41.61, SD = 10.49) participated in this study. In both the classic/monetary and social reward conditions, participants completed a 2-arm bandit task in which they chose on each trial between two options (i.e., slot machines or social partners) with unknown reward rates. They received monetary outcomes in the classic condition and compliments from different fictitious partners in the social condition. We compared a learning-independent win-stay/lose-shift (WSLS) heuristic against two learning models, Rescorla-Wagner Q-learning and a Bayesian learning model (the Dynamic Belief Model, DBM), each paired with a Softmax reward-maximization policy. DBM+Softmax provided the best fit to the data for most participants (31/33). Individual DBM parameters of prior reward expectation, reward learning (i.e., perceived stability of reward rates), and Softmax reward maximization were estimated and compared across conditions.
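The model comparison above can be illustrated with the simplest of the candidate learners. This is a minimal sketch of Rescorla-Wagner Q-learning with a Softmax choice policy on a 2-arm bandit, not the fitted DBM; the learning rate, inverse temperature, and reward rates are illustrative assumptions.

```python
import math
import random

# Minimal Rescorla-Wagner Q-learner with a Softmax policy for a
# 2-arm bandit; one of the candidate models compared in the study.
# All parameter values and reward rates here are illustrative.

def softmax_probs(q, beta):
    """Choice probabilities under inverse-temperature beta."""
    exps = [math.exp(beta * v) for v in q]
    total = sum(exps)
    return [e / total for e in exps]

def run_bandit(reward_rates, alpha=0.3, beta=3.0, n_trials=500, seed=0):
    rng = random.Random(seed)
    q = [0.5, 0.5]                     # initial reward expectations
    for _ in range(n_trials):
        p = softmax_probs(q, beta)
        choice = 0 if rng.random() < p[0] else 1
        reward = 1.0 if rng.random() < reward_rates[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])   # RW delta update
    return q

q = run_bandit([0.8, 0.2])
print(q)   # learned values roughly track the true reward rates
```

In the Bayesian alternative (DBM), the point estimates q are replaced by full posterior beliefs over reward rates that decay toward the prior, which is what lets parameters like the prior mean and perceived stability be estimated per participant.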
Results:
Participants did not differ in their reward learning parameters across monetary and social conditions (t(30)= -0.70, p = 0.490), suggesting similar perception of reward stability in both modalities. However, higher Bayesian prior mean (i.e., initial belief of reward rate; t(30)= -2.31, p = 0.028, d=0.42) and greater reward maximization (i.e., Softmax parameter; t(30)= -2.26, p = 0.031, d=0.41) were observed in response to social vs monetary rewards. In the social reward condition, higher self-reported social connectedness was associated with greater model fit of our DBM model (i.e., smaller Bayesian Information Criterion/BIC; r = -0.38, p = 0.041). In this condition, those expecting higher reward rates when initiating reward exploration (those with higher DBM prior mean) endorsed lower self-esteem (Spearman's ρ = -0.43, p = 0.078) and lower positive affect (ρ = -0.32, p = 0.078).
Conclusions:
A Bayesian learning modeling framework can characterize mechanistic differences in the processing of social vs non-social reward among combat-exposed Veterans. Individuals with higher social connectedness were more model-based in their performance, consistent with the notion that they are more likely to estimate and anticipate how much social peers have to offer. Combat-exposed individuals with lower self-esteem and positive affect appear to have higher initial expectations of reward from unknown partners, which could reflect a greater need for mood and/or self-esteem repair in those individuals. Overall, Bayesian modeling of social reward behavior provides a useful quantitative framework for predicting clinically relevant constructs and functional outcomes in military populations.
Depression and borderline personality disorder (BPD) are frequently comorbid psychiatric disorders that reliably share deficits in executive functioning (EF). In addition to EF, meta-analytic evidence indicates that processing speed and verbal memory are also affected in depression and BPD, but the impact of BPD further spans the domains of attention, nonverbal memory, and visuospatial abilities. Suicidality is a notable phenotypic commonality in depression and BPD. Neuropsychologically, there are consistent discrepancies between individuals who have and have not thought about suicide in global cognitive functioning, as well as between those who have attempted suicide and those who have just thought about suicide in EF. This study aims to replicate the effect size differences between these groups and explore whether neuropsychological functioning relates to dimensional measures of psychopathology.
Participants and Methods:
Right-handed women between the ages of 18 and 55 were recruited into one of three diagnostic groups: a) current major depressive episode (MDD; n=22); b) current major depressive episode with comorbid BPD (MDD+BPD; n=19); and c) absence of current major depressive episode and BPD (controls; n=20). Groups were also classified based on historical suicide attempt and on the presence or absence of historical suicidal ideation. Exclusions included bipolar disorder, neurodevelopmental disorder, moderate/severe brain injury, neurological illness, serious physical illness, eating disorder, and moderate/severe alcohol/substance use disorder. Participants were administered the Zanarini Rating Scale for Borderline Personality Disorder (ZAN-BPD), Beck Depression Inventory (BDI-II), Interpersonal Needs Questionnaire (INQ), UPPS-P Impulsive Behavior Scale, Everyday Memory Questionnaire, Brief Visuospatial Memory Test (BVMT), California Verbal Learning Test (CVLT), Delis-Kaplan Executive Function System (D-KEFS) Color-Word Interference Test, D-KEFS Trail Making Test, D-KEFS Verbal Fluency, Wechsler Adult Intelligence Scale-IV Coding and Digit Span subtests, Wechsler Memory Scale-IV Logical Memory, and Wechsler Test of Adult Reading.
Results:
With one exception, analyses of raw scores indicated no significant neuropsychological differences between groups based on diagnosis, historical suicidal ideation, or suicide attempt (p>.05). However, individuals with MDD+BPD, historical suicidal ideation, or suicide attempt endorsed more memory complaints than the other groups, with large effect size differences. Differences in self-reported impulsivity indicated large effects between controls and MDD+BPD, moderate to large effects when comparing controls to MDD and MDD to MDD+BPD, and moderate effects among the suicidal ideation and suicide attempt groups. Impulsivity was rated highest in those with MDD+BPD, historical suicidal ideation, or suicide attempt. These analyses applied false-discovery rate correction and adjusted for age. Using ridge regressions to separately predict depressive symptoms, BPD symptoms, and suicide risk factors, neuropsychological indices were most associated with suicide risk factors and explained 22.8% of INQ variance. Conversely, these indices explained 9.6% of ZAN-BPD variance and 0.6% of BDI-II variance.
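Ridge regression, used above to predict symptom measures from neuropsychological indices, adds an L2 penalty that shrinks coefficients toward zero and stabilizes estimates when predictors are correlated. A minimal one-predictor sketch on centered data (illustrative numbers, not the study's scores):

```python
# One-predictor ridge regression on centered data: the penalty `lam`
# shrinks the slope relative to ordinary least squares (lam = 0).
# Data are illustrative, not the study's neuropsychological indices.

def ridge_slope(x, y, lam):
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    xc = [v - mx for v in x]
    yc = [v - my for v in y]
    sxy = sum(a * b for a, b in zip(xc, yc))
    sxx = sum(a * a for a in xc)
    return sxy / (sxx + lam)          # OLS slope when lam == 0

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]                  # true slope is exactly 2
print(ridge_slope(x, y, 0.0))         # 2.0 (no shrinkage)
print(ridge_slope(x, y, 10.0))        # 1.0 (shrunk toward zero)
```

With many correlated predictors, the same idea generalizes to the matrix solution (XᵀX + λI)⁻¹Xᵀy, which is why ridge is a common choice for test batteries whose subtest scores overlap heavily.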
Conclusions:
The neuropsychological literature on BPD describes moderate cross-cutting neuropsychological dysfunction, and clarifying the distinct cognitive alterations associated with comorbid psychiatric disorders and suicide phenomena offers novel avenues for investigating their mechanisms. While neuropsychological functioning may not strongly relate to psychiatric symptomatology, it may contribute to meaningful algorithms of suicide risk in individuals with depression and BPD.
Despite the prevalence of aphasia in Morocco, standardized quick assessment tools are not available for use with patients in acute stroke care. The present study set out to (1) describe the processes of linguistic adaptation of a Moroccan Arabic (MA) version of the Bedside Western Aphasia Battery-Revised (WAB-R), (2) examine the test’s sensitivity to the detection of aphasia in an acute clinical setting, and (3) measure the instrument’s ability to detect improvement in language ability in the acute period.
Participants and Methods:
To achieve the first objective, the English Bedside WAB-R was adapted into Moroccan Arabic by a group of linguists. The instrument’s psychometric properties were established by (1) ascertaining the test’s sensitivity to the presence of aphasia, and (2) verifying the tool’s validity and reliability. Participants included a group of age- and education-matched non-brain-damaged individuals (N = 106), a group of right hemisphere brain-lesioned patients (N = 20), and a group of left hemisphere aphasic patients (N = 52). To accomplish the second and third objectives, the Bedside MA-WAB-R was administered to a group of aphasic participants in the acute period (less than three months post-stroke) and a group of age- and education-matched participants (N = 20). Aphasic patients in the acute stage were tested twice at a seven-day interval (3 days and 10 days post-onset). All data were collected in the Neurology department of the University Medical Hospital Hassan II, and the study received approval from the ethics committee of the Faculty of Medicine and Pharmacy, Sidi Mohammed Ben Abdellah.
Results:
Regarding the first objective, the results indicated that the MA-WAB-R is sensitive to the presence of aphasia, as revealed by the significantly worse performance of the aphasic group on all subtests relative to matched normal and right-hemisphere participants (p < .001). Analyses revealed excellent content and construct validity (correlations between subtests and AQ ranging from .5 to .8) as well as high inter-rater, intra-rater, and test-retest reliability (ICC (2,1) > .9). For the second and third objectives, the results supported the test’s sensitivity to the detection of aphasia in the acute phase, as confirmed by the significantly worse performance of aphasic patients relative to matched normal controls (p < .001). The instrument also proved to be a reliable measure of language improvement in the acute period, as supported by better scores at the second testing point relative to the first across all subtests.
Conclusions:
The MA-WAB-R is the first standardized assessment tool that can be used for a quick but reliable screening of aphasia in both chronic and acute clinical settings. The test can inform the initial diagnosis of aphasia, and guide a more comprehensive assessment of patients’ spared and impaired linguistic abilities within a context receiving little attention in the aphasia literature.
Neuropsychological test norms are developed as a reference point for assessing normal and abnormal test performance (Manly & Echemendia, 2007; Mitrushina et al., 2005). However, these norms are often created without considering the cultural experiences that influence neuropsychological test performance in ethnically diverse individuals. Since the Soviet Union’s collapse, approximately 2.66 million people have migrated to other countries, with the United States among the most popular destinations (Tishkov, Zayinchkovskaya, & Vitkovskaya, 2005). The objective of this study was to examine whether specific cultural factors significantly influence the performance of immigrants from the Former Soviet Union on the California Verbal Learning Test-Second Edition Short Form (CVLT-II-SF).
Participants and Methods:
A total of 66 fluent English-speaking, first- or second-generation healthy immigrants from the Former Soviet Union were recruited from the greater Los Angeles area. Participants ranged in age from 18 to 75 years and were administered the CVLT-II-SF as part of a larger battery. This shorter version of the CVLT-II requires participants to learn 9 words from 3 categories over 4 learning trials. This is followed by a distractor task, free recall of the 9 items, free recall again after 10 minutes, and finally recall with category cues. A questionnaire assessing participants’ cultural experiences was administered, including the amount of education obtained outside the U.S. and the percentage of time they spoke English growing up. Finally, all participants completed an acculturation measure.
Results:
Correlation analysis was performed to assess which cultural factors significantly correlated with the CVLT-II-SF variables. The results revealed that two cultural factors (percentage of education obtained outside the U.S. and acculturation score) were significantly correlated with several neuropsychological variables. Stepwise regression analysis was then used to further examine the best cultural predictors of CVLT-II-SF variables. This analysis revealed that the percentage of education obtained outside the U.S. significantly predicted the total learning trial scores, the long free recall trial, and the long cued recall trial, while acculturation scores significantly predicted the short free recall trial.
Conclusions:
The results of this study indicate that specific cultural factors should be taken into account when interpreting the test results of immigrants from the former Soviet Union. More specifically, acculturation and the amount of education obtained outside the U.S. are important factors to consider.
Traditional methods of assessing performance validity have numerous weaknesses; among them, results can be consciously manipulated by examinees who wish to feign cognitive impairment. This study tested the ability of pupillary dilation patterns during a performance validity test (PVT) to enhance diagnostic accuracy in discriminating true from feigned cognitive impairment due to traumatic brain injury (TBI). Pupillometry provides information about physiological and psychological processes related to cognitive load, familiarity, and deception, and is outside conscious control. Patrick, Rapport, Kanser, Hanks, and Bashem (2021) established proof of concept for the utility of pupillometry with PVTs, applied to the Test of Memory Malingering (TOMM). This study replicated and extended that work by evaluating the incremental utility of pupillary-derived indices on the Warrington Recognition Memory Test for Words (RMT).
Participants and Methods:
Participants included 214 adults in three groups: adults with bona fide TBI (TBI; n = 51), healthy comparisons instructed to perform their best (HC; n = 72), and healthy adults instructed and incentivized to simulate cognitive impairment due to TBI (SIM; n = 91). Moreover, this study examined pupillary pattern differences among successful (i.e., failed ≤ 1 PVT and performed in the impaired range on cognitive tests) and unsuccessful (i.e., failed ≥ 2 PVTs or did not score in the impaired range on a cognitive test) SIM, including SIM who did and did not fail the RMT. The RMT was administered in the context of a comprehensive neuropsychological battery. Indices included two pure pupil dilation (PD) indices: a simple measure of baseline arousal (PD-Baseline) and a nuanced measure of dynamic engagement (PD-Range). A pupillo-behavioral index was also evaluated: dilation-response inconsistency (DRI) captured the frequency with which examinees displayed a pupillary familiarity response to the correct answer but selected the unfamiliar stimulus (incorrect answer).
Results:
The results generally replicated Patrick et al. (2021), as all three indices were useful in discriminating between groups and provided incremental utility to traditional accuracy scores. PD-Baseline appeared sensitive to oculomotor dysfunction due to TBI (i.e., increasing accurate identification of that group); adults with TBI displayed significantly lower chronic arousal than the two groups of healthy adults (SIM, HC). In fact, the TBI group showed significantly lower PD-Baseline than both unsuccessful simulators who were detected as feigners and successful simulators who passed PVTs but effectively feigned TBI on other tests. Dynamic engagement (PD-Range) yielded a hierarchical structure such that SIM were more dynamically engaged than TBI, followed by HC. As predicted, simulators engaged in DRI significantly more frequently than the other groups. Moreover, DRI added unique information to RMT accuracy in classifying unsuccessful simulators against all other groups. Each of these three pupillary indices showed large effect sizes, and logistic regressions indicated that each contributed unique variance in predicting group membership on one or more of the paired contrasts (i.e., SIM-TBI, SIM-HC, HC-TBI).
Conclusions:
Taken together, the findings support continued research on the application of pupillometry to performance validity assessment: Pupillometry provided unique information in enhancing classification accuracy beyond traditional PVT accuracy scores. Overall, the findings highlight the promise of biometric indices in multimethod assessments of performance validity.
Memory complaints have been a concern of Gulf War (GW) veterans since their return from the war in 1991, and over time neurotoxicant exposures during the war have been associated with memory decline from premorbid levels. However, many of the studies showing slight or no memory decrements examined only one time point and did not follow participants to document the trajectory of symptoms over time. A longitudinal design is an optimal way to document change in cognitive function over time, and the Fort Devens cohort (FDC), the longest-running cohort of GW veterans, is ideal for assessing such change. This prospectively designed, non-treatment-seeking cohort has been assessed at multiple timepoints with neuropsychological tests and surveys. Initial neuropsychological assessments from 1997 showed above-average performance on tests of verbal memory (California Verbal Learning Test) and average performance on nonverbal memory (Wechsler Memory Scale-R). A follow-up neuropsychological testing study was completed between 2019-2022. The present study was designed to document change in cognitive status between the two time points.
Participants and Methods:
Participants (N=50) from the original 1991 cohort were again tested from 2019-2022. Neuropsychological tests included California Verbal Learning Test-Second edition (CVLT2) for verbal learning, and the visual reproduction subtest from the Wechsler Memory Scale-Revised (WMS-R) for nonverbal learning and memory. For both time points, the average scores of the participants were compared with age scaled scores for each neuropsychological test.
Results:
The mean age of participants was 58 years; 72% were men. Relative to standardized age- and gender-based norms at the first time point, scores for total learning across trials 1 through 5 of the CVLT2 were in the above-average range. At the second time point, participants' average scores on the same scale had dropped to the average range, one full standard deviation below their prior performances. In addition, total learning for visual reproductions was in the average range at the first time point and dropped to the low-average range at the second, a decline of one-half standard deviation.
Conclusions:
Results showed significant declines in verbal and visual memory relative to prior test performances. Whenever possible, documenting the trajectory of symptoms relative to each participant's starting point on neuropsychological measures is key to understanding the longitudinal impact of neurotoxicant and other war-related exposures in military veterans. Given this decline, further assessment of GW veterans’ cognitive trajectories is warranted.