Subthreshold depressive symptoms are both prevalent and associated with negative outcomes in older adults, including conversion to major depressive disorder and other medical conditions. Antidepressants are not recommended as first-line or sole intervention for subthreshold depression; thus, finding other efficacious interventions is important. In depressed adults, transcranial direct current stimulation (tDCS) applied to the frontal lobe has antidepressant properties and pairing tDCS with cognitive training results in additional benefit due to enhancement of frontal cortical activity. However, these studies have primarily targeted depressed adults under age 65 years and less is known about whether this intervention combination is beneficial or affects subthreshold depressive symptoms in older adults.
Participants and Methods:
We report secondary data analyses from Nissim et al. (2019), who recruited 30 non-demented healthy older adults and randomized them to receive active or sham tDCS in combination with cognitive training for 2 weeks. Active tDCS was delivered bifrontally over F3 (cathode) and F4 (anode) for 20 min at 2 mA through two 5x7 cm² saline-saturated sponge electrodes using the Soterix Medical 1x1 tDCS clinical trials device. Sham tDCS used an identical set-up, with 2 mA stimulation delivered for 30 sec with a 30-sec ramp up and down. Cognitive training was administered for 40 min daily using attention/processing speed and working memory modules from BrainHQ; the first 20 min of training were paired with active or sham tDCS. To allow room for symptom improvement, we only included participants with Beck Depression Inventory, 2nd edition (BDI-II) scores of 5 or greater ("minimal" depression severity). Fifteen participants met this cut-off (70.93 ± 5.41 years old, 10 females, 16.4 ± 2.32 years of education, MoCA = 27.27 ± 2.34; 7 active, 8 sham).
Results:
tDCS conditions did not significantly differ in age, sex, years of education, MoCA scores, number of completed intervention days, or baseline BDI-II (active: 7.71 ± 2.93, sham: 11.38 ± 6.44). There were no differences in sensation ratings between groups or in confidence ratings for condition received (suggesting successful blinding). Results indicated that the combination of active (but not sham) tDCS with cognitive training was associated with reduced depressive symptoms (2.7 vs. 1.4 points, active vs. sham). Including covariates (age, sex, education, MoCA scores, and number of completed intervention days) in the model further strengthened this group difference (3.7 vs. 0.51 points, active vs. sham).
Conclusions:
While preliminary, these results suggest this intervention combination may be a viable method for improving subthreshold depressive symptoms in older adults by targeting prefrontal neural circuitry and promoting neuroplasticity of the underlying network. Although baseline BDI-II scores did not significantly differ, the active tDCS group started lower than the sham group yet showed greater post-intervention improvement in BDI-II scores despite having less room for change. Adequate treatment of subthreshold depressive symptoms may prevent or reduce negative outcomes associated with depressive symptoms in at-risk older adults. Larger randomized clinical trials are needed to better understand the antidepressant effects of tDCS plus cognitive training in this age group.
The current study aimed to examine real-time associations between non-cognitive symptoms and cognitive dysfunction in multiple sclerosis (MS), with cognitive dysfunction measured both objectively and subjectively in real-time, using ecological momentary assessment (EMA).
Participants and Methods:
Forty-five persons with MS completed EMA four times per day for three weeks. For each EMA, participants completed mobile versions of the Trail-Making Test part B (mTMT-B) and a finger tapping task, as well as surveys about symptom severity. Trait (usual levels of a symptom) and state (when symptom level was higher or lower than the individual's usual levels) aspects of each symptom's severity were calculated. Multilevel models were conducted to account for within-person clustering, with performance on the mTMT-B and self-reported rating of cognitive dysfunction as primary outcomes.
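To make the trait/state decomposition concrete, here is a minimal sketch in Python (statsmodels), assuming a long-format file and hypothetical column names; person-mean centering is one standard way to separate the two components.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per EMA session; file name and columns are hypothetical
ema = pd.read_csv("ema_long.csv")

# Trait = a person's usual symptom level; state = momentary deviation from it
ema["fatigue_trait"] = ema.groupby("subject_id")["fatigue"].transform("mean")
ema["fatigue_state"] = ema["fatigue"] - ema["fatigue_trait"]

# Random-intercept multilevel model accounting for within-person clustering
model = smf.mixedlm("mtmtb_time ~ fatigue_trait + fatigue_state",
                    data=ema, groups=ema["subject_id"]).fit()
print(model.summary())
```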
Results:
A total of 3,174 EMA sessions were collected; compliance rate was 84%. There was significant intra-day variability in mTMT-B performance, anxiety, fatigue, and pain. More severe state depressive symptoms predicted lower performance on the mTMT-B in real-time. Self-reported difficulties with sleeping the night before predicted mTMT-B performance the following day. In contrast, state (but not trait) fatigue, depression, anxiety, and pain all predicted self-reported cognitive dysfunction in real time. Further, state self-reported cognitive dysfunction (but not mTMT-B performance) was associated with a higher perceived sense of accomplishment.
Conclusions:
Self-reported cognitive dysfunction was more susceptible to influences of other MS symptoms (especially when the symptom is more severe than the individual's usual levels) and better predicted perceived sense of accomplishment than objectively measured executive functioning in real-time. Objective executive functioning performance was sensitive to effects of depressive symptoms and sleep difficulties. The current study demonstrated the feasibility of assessing real-time associations among MS symptoms using smartphone-administered EMA.
Epilepsy is the third most common neurological disorder among older adults, and as adults live longer, the incidence of epilepsy is increasing (Kun Lee, 2019). The purpose of this study was to examine (1) differences in quality of life (QOL) between older and younger adults with medically intractable epilepsy and (2) the impact of seizure frequency, seizure duration, depression, sex, and marital status on QOL. Given differences in the prevalence of depression between men and women and the importance of depression to QOL, we predicted that sex and marital status would moderate the effect of depression on total QOL (TQOL).
Hypothesis I: Compared to younger adults, older adults with epilepsy will report lower TQOL scores and lower scores on subscales measuring energy/fatigue, cognition, and medication effects. Hypothesis II: Seizure variables and depression will significantly account for TQOL scores in both groups (younger and older) above demographic variables (sex, marital status, and education). Hypothesis III: Sex will moderate the effect of depression in both groups and marital status will moderate the effect of depression only in the older adults.
Participants and Methods:
Participants were 607 adults (> 18 years old) who were prospective candidates for epilepsy surgery and underwent a comprehensive neuropsychological evaluation including QOL assessment with the Quality of Life in Epilepsy Scale-31 (QOLIE-31). Individuals were grouped into older (> 50 years old; N = 122) and younger (< 50 years old; N = 485) adults. Hierarchical regression was used to examine the proposed associations.
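As a hedged illustration of the hierarchical-regression logic (not the authors' actual code), with hypothetical file and column names: the demographic block enters in Step 1, the seizure/depression block in Step 2, and the added variance is tested with a nested-model F-test.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("epilepsy_qol.csv")  # hypothetical file

step1 = smf.ols("tqol ~ sex + marital_status + education", data=df).fit()
step2 = smf.ols("tqol ~ sex + marital_status + education + "
                "seizure_freq + seizure_duration + depression", data=df).fit()

print(f"Step 1 R2 = {step1.rsquared:.3f}")
print(f"Step 2 R2 = {step2.rsquared:.3f} "
      f"(delta R2 = {step2.rsquared - step1.rsquared:.3f})")

# F-test on the nested models: does the Step 2 block add explained variance?
f_stat, p_val, _ = step2.compare_f_test(step1)
print(f"Block F = {f_stat:.2f}, p = {p_val:.3f}")
```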
Results:
Hypothesis I: In contrast to our hypothesis, a one-way ANOVA did not reveal significant differences between the older and younger groups on the QOL subscales, TQOL, or depression.
Hypothesis II: For older adults, longer seizure duration was associated with better TQOL; bivariate correlations showed no evidence of statistical suppression. Higher depression scores were associated with worse TQOL. Overall, the model accounted for 39.6% of the variance among older adults. For younger adults, only depression was a significant predictor of TQOL, wherein higher depression scores were associated with worse TQOL. Overall, the model accounted for 36.1% of the variance among younger adults. Hypothesis III: Marital status did not moderate the effect of depression on TQOL in older or younger adults (b = -.009, p > .05). Multicollinearity, evidenced by a variance inflation factor (VIF) greater than 10, prevented examination of the interaction between depression and sex.
Conclusions:
Overall, there were no significant differences between QOL in younger versus older adults. Greater depression symptoms were associated with lower TQOL in both groups. Longer seizure duration was a significant predictor of better TQOL in older adults only, perhaps indicating better adjustment to having a seizure disorder with longer duration of epilepsy. Lastly, marital status did not moderate the effects of depression on TQOL and the moderating effects of sex on TQOL could not be assessed due to multicollinearity. Study limitations include dichotomizing the sample into these particular age groups and the heterogeneity of seizure types.
Gliomas are a group of CNS neoplasms arising from neuroglial cells with varying degrees of aggressiveness. Because of the diffuse nature of these tumors, resection without neurological sequelae is difficult. This study aimed to design a neuropsychological battery to examine pre-surgical cognitive deficits in a case series of patients with low-grade glioma (LGG) and to determine the post-surgical effects of resection.
Participants and Methods:
Eleven adult patients aged 19-65 years (M = 38, SD = 15.5) with a diagnosis of LGG and no cognitive complaints were evaluated with a selection of specific neuropsychological tests to identify possible baseline cognitive deficits and their evolution after tumor resection. All participants completed a comprehensive neuropsychological evaluation assessing memory, language, attention, executive functions, visuospatial functions, social cognition, praxis, agnosias, functionality, mood, and quality of life. The design of the neuropsychological battery was based on a systematic review of the literature on surgical interventions in low-grade gliomas.
Results:
Despite not reporting subjective cognitive complaints, patients showed deficits in multiple cognitive domains in the pre-surgical evaluation when their performance was compared with normative values adjusted for age, sex, and education. Deficits in executive functions and attention were observed: 36% presented failures in graphomotor speed (TMT A); 27% presented failures in attentional span (Direct Digit Span), working memory (Inverse Digit Span), and cognitive flexibility (Wisconsin Card Sorting Test); and 9% presented difficulties in processing speed (Trail Making Test A) and inhibitory capacity (Stroop Test). Memory: 18% of the patients showed deficits in immediate logical memory and 9% in delayed memory (Craft Story 21). Likewise, 18% of the patients presented compromised immediate auditory-verbal learning and 27% compromised delayed auditory-verbal learning (Rey Auditory-Verbal Learning Test). Language: 18% showed failures in naming (Boston 60) and 9% in comprehension (Token Test). Likewise, 27% of the patients presented difficulties in social cognition (Mind in the Eyes Test). Finally, 41% of the patients reported symptoms of depression and/or anxiety on the neuropsychiatric questionnaires.
Conclusions:
The results highlight the importance of strategically designed pre-surgical cognitive assessment for the detection and follow-up of cognitive and mood disorders associated with the location of the space-occupying lesion. The patients assessed in this study will be re-evaluated three months after surgery to document changes in baseline cognitive symptoms. Furthermore, in patients with lesions in the left hemisphere, an intraoperative evaluation will be performed to minimize subsequent deficits, assessing these functions during surgery with an emphasis on language.
Area Deprivation Index (ADI) is a measurement of neighborhood disadvantage. Evidence suggests that living in a disadvantaged neighborhood has a negative impact on health outcomes independent of socioeconomic status, including increased risk for Alzheimer's disease (AD). However, less is known about the biological mechanisms that drive these associations. We examined how ADI influences structural imaging variables and cognitive performance in community-dwelling older adults. We hypothesized that greater neighborhood disadvantage would predict atrophy and worse cognitive trajectory over time.
Participants and Methods:
Participants included the legacy cohort from the Vanderbilt Memory and Aging Project (n=295, 73±7 years of age, 16±3 years of education, 42% female, 85% non-Hispanic White) who lived in the state of Tennessee. T1-weighted and T2-weighted fluid-attenuated inversion recovery brain MRIs and a comprehensive neuropsychological assessment were acquired at baseline and at 18-month, 3-year, 5-year, and 7-year follow-up visits (mean follow-up=5.2 years). Annual change scores were calculated for all neuropsychological and structural MRI outcome variables. Baseline state ADI was calculated using the University of Wisconsin School of Medicine and Public Health Neighborhood Atlas (Kind & Buckingham, 2018) and was based on deciles, where 1 represents the least deprived area and 10 the most deprived. Mixed-effects regression models related baseline ADI to longitudinal brain structure (volume, thickness, white matter hyperintensities) and neuropsychological trajectory (one test per model). Analyses adjusted for age, sex, race/ethnicity, education, Framingham Stroke Risk Profile score, apolipoprotein E (APOE)-e4 status, cognitive status, and intracranial volume (for MRI outcomes). Models were repeated testing interactions with APOE-e4 status, sex, and cognitive status. A false discovery rate (FDR) correction for multiple comparisons was performed.
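A hedged sketch of one such longitudinal model, with hypothetical file and column names: each outcome's trajectory is modeled with an ADI x time interaction and a random intercept per participant, and the interaction p-values are then FDR-corrected (Benjamini-Hochberg) across outcomes.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

long_df = pd.read_csv("vmap_long.csv")  # one row per participant visit

pvals = []
outcomes = ["ad_signature_thickness", "memory_composite", "exec_composite"]
for outcome in outcomes:
    # Covariates abbreviated here; intracranial volume would be added for MRI outcomes
    m = smf.mixedlm(f"{outcome} ~ adi * years + age + sex + education + apoe4",
                    data=long_df, groups=long_df["subject_id"]).fit()
    pvals.append(m.pvalues["adi:years"])  # ADI x time interaction term

# Benjamini-Hochberg false discovery rate correction across models
rejected, p_fdr, _, _ = multipletests(pvals, method="fdr_bh")
print(dict(zip(outcomes, p_fdr)))
```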
Results:
On average, the sample was from relatively less disadvantaged neighborhoods in Tennessee (ADI state decile=2.4±1.8). Greater neighborhood disadvantage at study entry predicted more thinning of an AD-signature composite over time (β=-0.002, p=0.005, pFDR=0.06); however, all other models testing MRI and neuropsychological outcomes were null (p-values>0.05, pFDR-values>0.51). Baseline ADI interacted with sex on longitudinal cortical thinning captured by the AD-signature composite (β=0.004, p=0.006, pFDR=0.08) as well as several longitudinal cognitive outcomes, including an executive function composite score (β=0.033, p<0.001, pFDR=0.01), naming (β=0.10, p=0.01, pFDR=0.12), visuospatial functioning (β=0.083, p=0.02, pFDR=0.09), and an episodic memory composite score (β=0.021, p=0.02, pFDR=0.07). In models stratified by sex, greater ADI predicted greater cortical thinning over time and worse longitudinal neuropsychological performance among men only. All stratified models in women were null except for the executive function composite score, which did not survive correction for multiple comparisons (β=-0.013, p=0.03, pFDR=0.61). Interactions by APOE-e4 and cognitive status were null (p-values>0.06, pFDR-values>0.61).
Conclusions:
Among community-dwelling older adults, greater neighborhood disadvantage predicted greater cortical thinning over the mean 5-year follow-up in anatomical regions susceptible to AD-related neurodegeneration. Neighborhood disadvantage also interacted with sex on cortical thickness and several cognitive domains, with stronger effects found among men versus women. By contrast, there were no interactions between neighborhood disadvantage and genetic risk for AD or cognitive status. This study provides valuable evidence for sociobiological mechanisms that may underlie health disparities in aging adults whereby neighborhood deprivation is linked with neurodegeneration over time.
A growing body of research demonstrates that social determinants of health (SDOH) are important predictors of neurocognitive and psychological outcomes in survivors of pediatric brain tumor (PBT). Existing research has focused primarily on individual level SDOH (e.g., family income, education, insurance status). Thus, more information is needed to understand community level factors which may contribute to health inequities in PBT survivors. This study aimed to examine the effects of specific aspects of neighborhood opportunity on cognitive and emotional/behavioral outcomes among PBT survivors.
Participants and Methods:
The sample included clinically-referred PBT survivors who completed a neuropsychological evaluation (N=199, Mage=11.63, SD=4.63; 56.8% male, 71.8% White). Data included an age-appropriate Wechsler scale and parent-report questionnaires (Behavior Rating Inventory of Executive Function, Child Behavior Checklist). Nationally-normed Child Opportunity Index (COI) scores were extracted for each participant from electronic medical records based on home address using Census tract geocoding. The COI measures neighborhood-level quality of the environmental and social conditions that contribute to positive health. It includes three component scores assessing distinct aspects of opportunity: educational opportunity (e.g., educational quality, resources, and outcomes), health/environmental opportunity (e.g., access to healthy food, healthcare, and greenspace), and social/economic opportunity (e.g., income, employment, poverty). Stepwise linear regression models were examined to identify significant predictors of cognitive/psychological outcomes associated with PBT; the three COI indices were entered as predictors and retained in the model if they significantly contributed to variance in the outcome.
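A hedged sketch of this kind of stepwise retention rule (backward elimination by p-value over the three COI component scores), with hypothetical file and column names; statistical packages implement several variants of stepwise selection.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pbt_coi.csv")  # hypothetical file
predictors = ["coi_education", "coi_health_env", "coi_social_econ"]

while predictors:
    fit = smf.ols("processing_speed ~ " + " + ".join(predictors), data=df).fit()
    worst = fit.pvalues[predictors].idxmax()   # least significant COI index
    if fit.pvalues[worst] <= 0.05:
        break                                  # every remaining index contributes
    predictors.remove(worst)                   # drop it and refit

print("Retained COI predictors:", predictors)
```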
Results:
Lower educational opportunity was associated with lower processing speed performance (Wechsler Processing Speed Index: t = 2.47, p = 0.02) and increased parent-reported executive functioning problems (BRIEF GEC: t = -2.25, p = 0.03; BRIEF Working Memory: t = -2.45, p = 0.02) and externalizing problems (CBCL Externalizing: t = -2.19, p = 0.03). Lower social/economic opportunity was associated with lower working memory performance (Wechsler Working Memory Index: t = 2.63, p < 0.01) and increased parent-reported internalizing problems (CBCL Internalizing: t = -2.38, p = 0.02). Health/environmental opportunity did not emerge as a primary predictor of any of the examined cognitive/psychological outcomes. Exploratory analyses examining the impact of age on associations between COI and cognitive/psychological outcomes found a significant moderation effect of age on the relationship between educational opportunity and processing speed (t = 2.35, p = 0.02) such that this association was stronger at older ages. There were no other moderation effects by age.
Conclusions:
Consistent with a growing body of literature demonstrating the impact of social and environmental contexts on health outcomes, these results show inequities in neurocognitive and psychosocial outcomes in PBT survivors related to neighborhood-level SDOH. Examination of specific neighborhood factors highlights educational and social/economic factors as particularly important contributors to neurocognitive/psychological risk for survivors. The identification of these specific and potentially modifiable risk factors is crucial to inform individual-level problem-prevention following oncological treatment, as well as community-level policy and advocacy efforts.
There is a dearth of appropriate standardized tools to assess neuropsychological functions in rural populations, which have low literacy rates, are culturally diverse, and have limited access to healthcare resources. The NIREH Neuropsychological Battery for Rural Population (NINB-RP) is a relatively brief and easy-to-administer battery comprising multiple tests, modified or adapted for rural community settings, that evaluate verbal learning, fine coordination, attention efficiency, executive tasks, concentration, visual attention, mental flexibility, and motor coordination. The present study aimed to examine the clinical validity of the NINB-RP and to establish cut-off scores for impairment of neuropsychological functions across age, gender, and education levels in a rural community in central India.
Participants and Methods:
This was a prospective cross-sectional study of participants aged > 18 years (n=2952, M:F=1407:1545) recruited through a stratified sampling technique from 23 randomly selected villages in central India. Data from nine neuropsychological tests [Finger and Tweezer Dexterity Tests (FDT, TDT); Digit Forward and Backward Tests (DFT, DBT); Serial Subtraction Test (SST); Trail Making A and B; Finger Tapping Test (FTT); and Letter Digit Substitution Test (LDST)] from 215 cognitively impaired and 2737 healthy control subjects were analyzed. The tests were administered in a village school/community hall or an outdoor camp. Independent-samples t-tests, Chi-square tests, and Receiver Operating Characteristic (ROC) curves were used to calculate the area under the curve (AUC), cut-off scores, and sensitivity (ST)/specificity (SP) values for seven conditions: gender (male vs. female), age group (up to 49 years and above 50 years), and educational level (illiterate, intermediate, and college). For those variables where ST/SP values were lower than 0.70, a single cut-off score was calculated for the entire sample, adjusting for age and educational level.
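For illustration, deriving a cut-off from a ROC curve via Youden's J can be sketched as follows (made-up data, scikit-learn); this is one common way to choose a threshold, not necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Made-up scores: impaired participants score lower on the test
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(50, 10, 200),   # healthy controls
                         rng.normal(38, 10, 40)])   # cognitively impaired
y = np.array([0] * 200 + [1] * 40)                  # 1 = impaired

# Negate scores so higher values indicate impairment, as roc_curve expects
fpr, tpr, thresholds = roc_curve(y, -scores)
auc = roc_auc_score(y, -scores)

best = np.argmax(tpr - fpr)        # Youden's J = sensitivity + specificity - 1
print(f"AUC={auc:.2f}, cut-off={-thresholds[best]:.1f}, "
      f"ST={tpr[best]:.2f}, SP={1 - fpr[best]:.2f}")
```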
Results:
A significant difference in mean (median) scores between the healthy control and cognitively impaired groups was observed on all tests except Trail Making A and B and the LDST. The AUC for most of the tests ranged from 0.70 to 0.81, and the ST/SP values ranged from 69-73% and 65-75%, respectively. Most tests of the NINB-RP reached moderate to good sensitivity and specificity across gender, age, and education levels, except the DBT for females, the above-50-years group, and the illiterate and intermediate education groups. The FDT for males [AUC=0.85 (95% CI 0.80-0.91), ST/SP=76/82%] and females [AUC=0.78 (95% CI 0.74-0.82), ST/SP=71/70%], the TDT for the intermediate education group [AUC=0.82 (95% CI 0.60-1.00), ST/SP=86/83%], and the FTT for the up-to-49-years age group [AUC=0.75 (95% CI 0.67-0.84), ST/SP=71/76%] were the most useful tests for discriminating between healthy and cognitively impaired rural participants.
Conclusions:
The present study is an attempt to establish the cut-off scores of a neuropsychological battery for a large rural population in the community setting. The proposed cut-off values might be helpful in clinical assessment in rural areas where clinical neuropsychology services are not readily available. NINB-RP can be a valuable tool for clinical research studies in rural communities. Further studies on similar samples in other countries need to be undertaken.
Understanding healthcare information is an important aspect of managing one's own needs and navigating a complex healthcare system. Health numeracy and literacy reflect the ability to understand and apply information conveyed numerically (e.g., graphs, statistics, proportions) and in written/verbal form (e.g., treatment instructions, appointments, diagnostic results) in order to communicate with healthcare providers, understand one's medical condition(s) and treatment plan, and participate in informed medical decision-making. Cognitive impairment has been shown to affect one's ability to understand complex medical information. The purpose of this study was to explore the relationship between degree of cognitive impairment and performance on measures of health numeracy and literacy.
Participants and Methods:
This cross-sectional study included data from 38 adult clinical patients referred for neuropsychological evaluation for primary memory complaints at an urban, public Midwestern academic medical center. All patients were administered a standardized neurocognitive battery that included the Montreal Cognitive Assessment (MoCA), as well as measures of both health numeracy (Numeracy Understanding of Medicine Instrument-Short Version [NUMI-SF]) and health literacy (Short Assessment of Health Literacy-English [SAHL-E]). The sample was 58% female and 60% Black/40% White. Mean age was 65 years (SD=9.4) and mean education was 14.4 years (SD=2.5). The sample was split into three groups based on cognitive diagnosis determined by comprehensive neuropsychological assessment (No Diagnosis [34%], Mild Cognitive Impairment [MCI; 29%], Dementia [34%]). Groups were well matched and did not statistically differ in premorbid intellectual functioning (F=1.96, p=.157; No Diagnosis, M=100, SD=7.92; MCI, M=99, SD=8.87; Dementia, M=94, SD=7.72). ANOVAs were conducted to evaluate differences between clinical groups on the MoCA, NUMI-SF, and SAHL-E. Multiple regressions were then conducted to determine the association of MoCA scores with NUMI-SF and SAHL-E performance.
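A minimal sketch of this pipeline (one-way ANOVA across diagnostic groups, then within-group regressions), with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f_oneway

df = pd.read_csv("health_literacy.csv")  # hypothetical file

# Compare No Diagnosis vs. MCI vs. Dementia on NUMI-SF scores
groups = [g["numi_sf"].values for _, g in df.groupby("diagnosis")]
print(f_oneway(*groups))

# Does MoCA predict NUMI-SF performance within each diagnostic group?
for dx, g in df.groupby("diagnosis"):
    fit = smf.ols("numi_sf ~ moca", data=g).fit()
    print(dx, f"F = {fit.fvalue:.2f}, p = {fit.f_pvalue:.3f}")
```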
Results:
As expected, the Dementia group performed significantly below both the No Diagnosis and MCI groups on the MoCA (F=19.92, p<.001) with a large effect (ηp2=.540). Significant differences were also found on the NUMI-SF (F=5.90, p<.05) and the SAHL-E (F=6.20, p<.05), with large effects (ηp2=.258 and ηp2=.267, respectively). Regression analyses showed that MoCA performance did not predict performance on the NUMI-SF and SAHL-E in the No Diagnosis group (F=2.30, p=.809) or the MCI group (F=1.31, p=.321). Conversely, the MoCA significantly predicted performance on the NUMI-SF and SAHL-E for the Dementia group (F=15.59, p=.001).
Conclusions:
Degree of cognitive impairment is associated with understanding of health numeracy and literacy information, with patients diagnosed with dementia performing most poorly on these measures. Patients with normal cognitive functioning demonstrated a significantly better understanding of health numeracy and health literacy. This study supports the notion that as cognitive functioning diminishes, incremental support is necessary for patients to understand medical information pertaining to their continued care and medical decision-making, particularly as it relates to both numerical and written information.
To describe advantages and disadvantages of using digital assessments remotely and in-person to inform clinical and research practice.
Participants and Methods:
As part of a larger study, 1120 adults completed a battery of remotely administered tests (Mobile Toolbox), and a subset of this sample completed examiner-administered in-person testing (NIH Toolbox® Cognition Battery). Attention was given to making the sample reflective of the US 2020 Census during participant recruitment. Of the 1120 participants, the majority were female (57%) and Caucasian (72%), with a mean age of 45 years (SD = 21). In terms of education, equal percentages had a high school education (34%) or some college (34%).
Results:
NIH Toolbox cognitive tests of processing speed, language, executive function, attention, and episodic memory were administered by a trained examiner, and analogous versions of these tests were self-administered remotely via smartphone. Using examples, we will show which aspects of cognitive assessment showed the strongest correlations between remote self-administration and face-to-face examination and which showed lower correlations.
Conclusions:
Digital remote assessments can help overcome barriers by enabling repeated testing in naturalistic conditions, reducing participant burden and expense, and increasing research accessibility for currently under-represented populations. Moreover, the ubiquity of internet-connected devices vastly increases opportunities to remotely monitor other dimensions relevant to cognition using smartphone apps and wearable sensors. In addition to improving access to testing, digitally administered assessments can dramatically improve some individuals' tolerance of testing through shorter tests administered via computer adaptive testing (CAT). Despite these benefits, some aspects of cognitive assessment cannot be adequately replicated remotely and thus yield lower correlations with their examiner-administered alternatives. Clinical and research implications are discussed.
Processing speed declines with age and is a strong predictor of age-related cognitive decline in other domains and of who will need help with tasks of daily living in later years. Higher cardiorespiratory fitness (CRF) reflects better cardiopulmonary health and is related to maintenance of processing speed and cognition into late life. White matter lesions (WML), in contrast, reflect age-related brain network disconnection from damage to white matter tracts. Lower CRF and higher WML burden have each been related to poorer cognitive performance. Although higher CRF appears protective for cognition, the combined effects of CRF and WML on processing speed have yet to be determined. Specifically, it remains unknown whether CRF and WML independently affect processing speed or whether WML moderates the effect of CRF on processing speed. We predicted that WML may moderate CRF benefits on cognitive aging if CRF-related cognitive benefits are weakened by high WML load. Here, we test this question with the gold-standard measure of CRF, maximal oxygen uptake during a graded exercise test (relative VO2 max, mL/kg/min), and a validated neuropsychological measure of processing speed, the Digit Symbol Substitution Test (DSST).
Participants and Methods:
CRF, DSST scores, and WML volumes of cognitively normal adults (n=91) aged 55-80 years were included in this analysis. WML data were corrected for total intracranial volume and log-transformed. A linear regression model included the number of accurately completed items on the DSST as the dependent variable and age, sex, relative VO2 max, WML volume, and the interaction between relative VO2 max and WML volume as predictor variables.
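The moderation model described above can be sketched as follows (hypothetical file and column names); the interaction term carries the moderation test.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("crf_wml.csv")  # hypothetical file

# ICV-corrected, log-transformed WML burden
df["log_wml"] = np.log(df["wml_volume"] / df["icv"])

# vo2max * log_wml expands to both main effects plus their interaction;
# the vo2max:log_wml coefficient tests whether WML moderates the CRF effect
model = smf.ols("dsst ~ age + sex + vo2max * log_wml", data=df).fit()
print(model.summary())
```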
Results:
Main effects of age, sex, VO2 max and WML volume on the DSST were observed. Greater age, higher WML volume, and lower relative VO2 max were associated with poorer performance on the DSST. In addition, females (n=55) performed better than males (n=36) on the DSST. No significant interaction was observed between VO2 max and WML volume on DSST scores.
Conclusions:
Our results show that 1) WML and relative VO2 max independently contribute to processing speed performance in older adults as measured by the DSST, and 2) WML do not moderate the relation between VO2 max and the DSST. Strengths of this study include gold-standard measurement of CRF and WML volumes as predictors of performance on the DSST in older adults. Further research is warranted to understand how vascular aging and brain health indicators interactively or interdependently impact cognition in aging.
The Explosive Ordnance Disposal (EOD) community within the US military is a specialized force in charge of some of the most fundamental aspects of military operations in combat, including disarming and safely disposing of explosive threats. EOD technicians have provided critical protection for military personnel and civilians exposed to improvised explosive devices (IEDs), which became the signature threat of both the Afghanistan and Iraq wars. The nature of the job puts EOD technicians at high risk for blast exposures (from training and combat) resulting in traumatic brain injury (TBI) and sub-concussive head impact. Further, this population is exposed to high levels of combat with psychologically traumatic events. Given the group's neurological and psychological risk factors as well as their critical role in combat, we hypothesized that EOD technicians would present with increased psychological and neurobehavioral symptoms as well as decreased cognitive functioning compared to other military personnel.
Participants and Methods:
Participants with at least one diagnosed mild traumatic brain injury (MTBI) were recruited from a military hospital. Exclusion criteria included TBI greater than mild severity and invalid performance on the Rey-15. The final sample included 10 EOD technicians and 90 other military personnel.
Cognitive measures included the Hopkins Verbal Learning Test-Revised (HVLT-R); DKEFS Color Word Condition 4 Switching (CW4), Trail Making Condition 3 Letter Sequencing (TM3) and Condition 4 Switching (TM4); and the Paced Auditory Serial Addition Test (PASAT). Self-report measures included the Neurobehavioral Symptom Inventory (NSI), Key Behaviors Change Inventory (KBCI), Post-Traumatic Stress Disorder Checklist (PCL-M), Patient Health Questionnaire (PHQ), Combat Exposure Scale (CES), and Blast Exposure Threshold Survey (BETS). The Ohio State University Traumatic Brain Injury Identification Method (OSU) assessed TBI history.
Results:
EOD technicians were older (EOD M=38.4, SD=4.06; Others M=33.32, SD=8.08; p=0.05), had a higher premorbid IQ (EOD M=110.90, SD=7.64; Other M=101.59, SD=10.55; p=0.008), more combat deployments (EOD M=5.5, SD=2.37; Others M=3.55, SD=2.98; p=0.049), and more exposure to wartime atrocities (CES, p=0.003). They had a greater number of MTBIs (OSU EOD M=6.67, SD=3.33; Other M=3.67, SD=2.34; p=0.007), blast-related MTBIs (OSU-TBI EOD M=2.33, SD=1.63; Other M=0.67, SD=0.91; p<0.001), and exposure to large explosives (BETS p<0.0001). EOD reported better attention skills (KBCI Inattention, p=0.016, d=0.82; Impulsivity p=0.047, d=0.67). There was a trend for EOD to report lower neurobehavioral symptoms (NSI Total, d=0.32), post-traumatic stress (PCL d=0.39), and depression (PHQ d=0.50); however, despite the moderate effect sizes, these differences were not statistically significant (p's > 0.05). EOD presented with significantly better scores on DKEFS TMT3 (p=0.037, d=0.70), HVLT-R Total (p=0.001, d=1.10), HVLT-R Delayed (p=0.03, d=0.74), and attention/executive functioning (PASAT p=0.001, d=1.12). DKEFS CW4 Switching (d=0.51) and TMT4 Switching (d=0.61) approached significance, with EOD performing better.
Conclusions:
As expected, the EOD sample in this study had a higher number of combat deployments, greater exposure to combat atrocities (e.g., death), higher levels of exposure to large explosives, and a higher number of MTBIs. Inconsistent with our hypotheses, despite these psychological and neurological risk factors, EOD technicians performed better on cognitive measures of memory, attention, and executive functioning. They also reported fewer problems with inattention and impulsivity. Results may reflect the impact of psychological and cognitive resiliency.
Cognitive impairment in Parkinson's disease (CIPD) is present in approximately 40% of patients. Language deficits, evidenced by poor word retrieval, have historically characterized memory weaknesses in PD. That is, the "retrieval deficit hypothesis" suggests successful memory encoding but poor retrieval, secondary to language and executive dysfunction (another prominent area of CIPD). However, recent studies suggest that memory impairments in PD are instead at the level of learning. Several suggested etiologies for learning impairments in PD exist that are not related to language, for example that processing speed deficits (another characteristic of CIPD) impact learning; however, other studies present evidence against this theory. Therefore, we hypothesized that deficits in language continue to be a primary component of memory impairment in PD, but at the level of learning rather than retrieval.
Participants and Methods:
85 adults (age M = 61.54, SD = 10.00; 26.7% female; Dementia Rating Scale M = 137.77, SD = 5.63) diagnosed with Parkinson's disease according to the UK Brain Bank criteria for idiopathic PD completed a neuropsychological test battery while "off" levodopa medication. The battery included the Boston Naming Test (BNT), verbal fluency tests (Controlled Oral Word Association [COWA] and category fluency), the California Verbal Learning Test, 2nd Edition (CVLT-II), and the oral Symbol Digit Modalities Test (SDMT). Separate linear regression models were used to examine BNT, COWA, category fluency, and SDMT performance as predictors of total learning (sum of trials 1-5), short-delay free recall, long-delay free recall, and recognition discriminability on the CVLT-II. Analyses were adjusted for age, sex, education, and disease severity (MDS-Unified Parkinson's Disease Rating Scale, part 3 score). Follow-up analyses additionally adjusted for processing speed (oral SDMT).
Results:
Adjusted linear regression models revealed that both verbal fluencies predicted verbal learning (letter: β = .37, p < .01; category: β = .45, p < .01), long-delay free recall (letter: β = .25, p = .05; category: β = .34, p = .01), and recognition discriminability (letter: β = .36, p = .02; category: β = .33, p = .03) on the CVLT-II. Confrontation naming significantly predicted only long-delay free recall (β = .31, p = .01). Processing speed predicted verbal learning (β = .51, p < .01), short-delay free recall (β = .35, p = .03), and long-delay free recall (β = .44, p < .01). After adjusting for processing speed, letter fluency significantly predicted learning (β = .23, p = .05) and discriminability (β = .33, p = .04). Category fluency significantly predicted learning only (β = .28, p = .04). Finally, confrontation naming significantly predicted only long-delay free recall (β = .28, p = .01).
Conclusions:
While processing speed was associated with verbal learning and recall, components of language predicted variance in verbal learning in PD that was not accounted for by speed. Additionally, discriminability was related to aspects of language that are more reliant on executive functioning. We therefore suggest that verbal memory in PD be interpreted within the context of one's language ability. Other potential mechanisms and clinical implications are discussed.
PD patients commonly exhibit executive dysfunction early in the disease course which may or may not predict further cognitive decline over time. Early emergence of visuospatial and memory impairments, in contrast, are more consistent predictors of an evolving dementia syndrome. Most prior studies using fMRI have focused on mechanisms of executive dysfunction and have demonstrated that PD patients exhibit hyperactivation that is dependent on the degree of cognitive impairment, suggestive of compensatory strategies. No study has evaluated whether PD patients with normal cognition (PD-NC) and PD patients with Mild Cognitive Impairment (PD-MCI) exhibit compensatory activation patterns during visuospatial task performance.
Participants and Methods:
10 PD-NC, 12 PD-MCI, and 14 age- and sex-matched healthy controls (HC) participated in the study. PD participants were diagnosed with MCI based on the Movement Disorders Society Task Force Level II (comprehensive) assessment. Functional magnetic resonance imaging (fMRI) was performed during a motion discrimination task that required participants to identify the direction of horizontal global coherent motion embedded within dynamic visual noise under Low and High coherence conditions. Behavioral accuracy and functional activation were evaluated using 3 x 2 analyses of covariance (ANCOVAs; Group [HC, PD-NC, PD-MCI] x Coherence [High vs. Low]) accounting for age, sex, and education. Analyses were performed in R (v4.1.2; R Core Team, 2013).
Results:
PD-MCI patients (0.702 ± 0.269) exhibited significantly lower accuracy on the motion discrimination task than HC (0.853 ± 0.241; p = 0.033) and PD-NC (0.880 ± 0.208; p = 0.039). A Group x Coherence interaction was identified in which several regions, including orbitofrontal, posterior parietal, and occipital cortex, showed increased activation during High relative to Low coherence trials in the PD patient groups but not in the HC group. HC showed default mode deactivation and frontal-parietal activation during Low relative to High coherence trials that was not evident in the patient groups.
Conclusions:
PD-MCI patients exhibited worse visuospatial performance on a motion discrimination task than PD-NC and HC participants and exhibited hyperactivation of the posterior parietal and occipital regions during motion discrimination, suggesting possible compensatory activation.
From diagnosis to remission, a patient’s journey with cancer can be long and tiresome, riddled with many adjustments and challenges. Because the stressors of the disease continue into remission, the battle is far from over when the cancerous cells are eradicated. The stress placed on cancer patients due to the disease and the treatments to control it causes many patients to experience cognitive impairment, also known as cancer-related cognitive impairment (CRCI). Researchers have long been baffled by CRCI and the mechanisms through which it takes place. Some explanations that have arisen include the cancer treatment, the cancer itself, the psychological distress, or a combination of all three. The objective of this study was to understand the mechanism through which CRCI occurs and what factors, including psychosocial, treatment, and demographic variables, exacerbate or reduce the cognitive symptoms.
Participants and Methods:
Cancer survivors (n=39) with various types of cancer were recruited from support groups to complete an online survey consisting of a series of self-report measures. These measures assessed perceived cognitive abilities, psychological distress, fatigue, and social support, along with demographic and treatment questionnaires.
Results:
Cognitive reserve (p < .05) and receipt of chemotherapy (p < .01) were the only variables that predicted perceived cognitive impairment. As expected, longer time in remission was associated with lower levels of perceived cognitive impairment (p < .001). However, psychological distress was not a significant predictor of perceived cognitive impairment as hypothesized. Notably, psychological distress mediated the relationship between perceived cognitive impairment and fatigue (p < .001).
Conclusions:
This relationship indicates that how an individual copes with cognitive impairment following cancer treatment can contribute to the development and exacerbation of fatigue. A failure to manage psychological health can worsen these secondary symptoms. Further research should examine the link between psychosocial factors and the subtle effects of CRCI.
Emerging evidence suggests that individuals recovering from COVID-19 perceive changes to their cognitive function and psychological health that persist for weeks to months following acute infection. Although there is a strong relationship between initial COVID-19 infection severity and development of prolonged symptoms, there is only a modest relationship between initial COVID-19 severity and self-reported severity of prolonged symptoms. While much of the research has focused on more severe COVID-19 cases, over 90% of COVID-19 infections are classified as mild or moderate. Previous work has found evidence that non-severe COVID-19 infection is associated with cognitive deficits with small-to-medium effect sizes, though patients who were not hospitalized generally performed better on cognitive measures than did those who were hospitalized for COVID-19 infection. As such, it is important to also quantify subjective cognitive functioning in non-severe (mild or moderate) COVID-19 cases. Our meta-analysis examines self-reported cognition in samples that also measured objective neuropsychological performance in individuals with non-severe COVID-19 infections in the post-acute (>28 days) period.
Participants and Methods:
This study's design was preregistered with PROSPERO (CRD42021293124) and used the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) checklist for reporting guidelines. Inclusion criteria were established prior to article searching and required peer-reviewed studies to have (1) used adult participants with a probable or documented diagnosis of non-severe (asymptomatic, mild, or moderate) COVID-19 who were in the post-acute stage (>28 days after initial infection); (2) used objective neuropsychological testing to document cognitive functioning; and (3) included a self-report measure of subjective cognition. At least two independent reviewers conducted all aspects of the screening, review, and extraction process. Twelve studies with three types of study design met full criteria and were included (total n=2,744).
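For context, random-effects pooling of standardized mean differences is commonly done with DerSimonian-Laird weights; the sketch below uses made-up inputs, not the included studies' actual effect sizes.

```python
import numpy as np

d = np.array([0.40, 0.75, 0.52, 0.61])        # per-study Cohen's d (made up)
var = np.array([0.050, 0.080, 0.060, 0.045])  # per-study variances of d

w = 1 / var                                   # fixed-effect (inverse-variance) weights
d_fe = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - d_fe) ** 2)               # Cochran's Q heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(d) - 1)) / c)       # DerSimonian-Laird between-study variance

w_re = 1 / (var + tau2)                       # random-effects weights
d_re = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled d = {d_re:.3f}, "
      f"95% CI [{d_re - 1.96 * se:.3f}, {d_re + 1.96 * se:.3f}]")
```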
Results:
Healthy comparison group comparison: Compared with healthy comparison participants, the post-COVID-19 group reported moderately worse subjective cognition (d=0.546 [95% CI (0.054, 1.038)], p=0.030). Severity comparison: When comparing hospitalized and not hospitalized groups, patients who were hospitalized reported modestly worse subjective cognition (d=-0.241, [95% CI (-0.703, 0.221)], p=0.30), though the difference was not statistically significant. Normative data comparison: When all non-severe groups (mild and moderate; k=12) were compared to the normative comparison groups, there was a large, statistically significant effect (d=-1.06, [95% CI (-1.58, -0.53)], p=0.001) for self-report of worse subjective cognitive functioning.
Conclusions:
There was evidence of subjective report of worse cognitive functioning following non-severe COVID-19 infection. Future work should explore relationships between objective neuropsychological functioning and subjective cognitive difficulties following COVID-19.
Dysnomia may be one of the earlier neuropsychological signs of Alzheimer's disease (Cullum & Liff, 2014), making its assessment an essential part of dementia evaluations. The Verbal Naming Test (VNT) is a naming-to-definition task designed to assess possible dysnomia in older adults (Yochim et al., 2015) and has been used as an alternative to tasks that rely predominantly on picture-naming paradigms. The present study investigated the influences of age, education level, cognitive diagnosis, educational quality, and race to examine whether race would remain a significant predictor of VNT performance.
Participants and Methods:
Black (n=57) and White (n=127) participant data were collected during clinical neuropsychological evaluations, which included the VNT alongside other cognitive measures. A multiple regression controlling for age, education level, cognitive diagnosis, and educational quality (via reading level) was used to investigate whether race remained a significant predictor of test performance.
Results:
Results suggested that race remained a significant predictor of VNT scores (p = .003) despite efforts to control for other sources of variance. Additionally, other cognitive measures such as WAIS-IV Block Design (p = .004) and the D-KEFS Tower Test (p = .004) showed statistically significant relationships with race in the same model, whereas verbal memory (CVLT) and verbal fluency (D-KEFS) did not. The NAB Naming analysis violated the assumption of homoscedasticity; therefore, results for the NAB Naming test were not interpreted further.
Conclusions:
These results suggest that race is a significant predictor of performance on some cognitive measures, including the VNT. However, it did not predict performance on verbal memory or verbal fluency. Future investigations of racial differences on neuropsychological test performance would benefit from consideration of variables that may account for discrepancies between White and Black examinees. Several proxy variables could include educational quality, acculturation, and economic status.
Executive functions (EFs) are considered to be both unitary and diverse functions with common conceptualizations consisting of inhibitory control, working memory, and cognitive flexibility. Current research indicates that these abilities develop along different timelines and that working memory and inhibitory control may be foundational for cognitive flexibility, or the ability to shift attention between tasks or operations. Very few interventions target cognitive flexibility despite its importance for academic or occupational tasks, social skills, problem-solving, and goal-directed behavior in general, and the ability is commonly impaired in individuals with neurodevelopmental disorders (NDDs) such as autism spectrum disorder, attention deficit hyperactivity disorder, and learning disorders. The current study investigated a tablet-based cognitive flexibility intervention, Dino Island (DI), that combines a game-based, process-specific intervention with compensatory metacognitive strategies as delivered by classroom aides within a school setting.
Participants and Methods:
Twenty children aged 6-12 years (M = 10.83 years) with NDDs and identified executive function deficits, along with their assigned classroom aides (i.e., "interventionists"), were randomly assigned to either DI or an educational-game control condition. Interventionists completed a 2-4 hour online training course and a brief, remote Q&A session with the research team, which provided key information for delivering the intervention such as game-play and metacognitive/behavioral strategy instruction. Fidelity checks were conducted weekly. Interventionists were instructed to deliver 14-16 hours of intervention during the school day over 6-8 weeks, divided into 3-4 weekly sessions of 30-60 minutes each. Baseline and post-intervention assessments consisted of cognitive measures of cognitive flexibility (Minnesota Executive Function Scale) and working memory (Wechsler Intelligence Scales for Children, 4th Edn. Integrated, Spatial Span), and parent-completed EF rating scales (Behavior Rating Inventory of Executive Function).
Results:
Sample sizes were smaller than expected due to COVID-19-related disruptions within schools, so nonparametric analyses were conducted to explore trends in the data. Results of the Mann-Whitney U test indicated that participants in the DI condition made greater gains in cognitive flexibility, with a trend towards significance (p = 0.115). After dummy coding for positive change, results also indicated that gains in spatial working memory differed by condition (p = 0.127). Similarly, gains in task monitoring trended towards a significant difference by condition.
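The group comparison described above can be illustrated with a Mann-Whitney U test on hypothetical gain scores (SciPy); the values below are made up for demonstration only.

```python
from scipy.stats import mannwhitneyu

# Hypothetical pre-to-post gain scores per condition
di_gains = [8, 5, 12, 3, 7, 10, 6, 9, 4, 11]    # DI condition
control_gains = [2, 6, 1, 4, 5, 3, 0, 7, 2, 1]  # educational-game control

u, p = mannwhitneyu(di_gains, control_gains, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```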
Conclusions:
DI, a novel EF intervention, may be beneficial to cognitive flexibility, working memory, and monitoring skills within youth with EF deficits. Though there were many absences and upheavals within the participating schools related to COVID-19, it is promising to see differences in outcomes with such a small sample. This poster will expand upon the current results as well as future directions for the DI intervention.
Fatigue, which can be classified into physical and cognitive subtypes (Schiehser et al., 2012), is a common non-motor symptom in persons with Parkinson’s disease (PD) that has no clear treatment. Cognitive changes, also common in PD (Litvan et al., 2012), may impact how patients perceive fatigue (Kukla et al., 2021). Grit is a personality trait defined as perseverance and passion towards a long-term goal, and is associated with multiple positive outcomes such as lower fatigue levels in healthy individuals (Martinez-Moreno et al., 2021). However, scarce research has examined the relationship between grit and fatigue in persons with PD. Therefore, we aimed to investigate the relationship between fatigue (cognitive and physical) and grit, as well as the impact of cognitive status (i.e., cognitive normal vs. mild cognitive impairment [MCI]) on this relationship in non-demented individuals with PD.
Participants and Methods:
Participants were 70 non-demented individuals with PD who were diagnosed as either cognitively normal (n=20) or MCI (n=50) based on Level II of the Movement Disorder Society PD-MCI criteria. Participants completed the Modified Fatigue Impact Scale (MFIS), which consists of two subscales (cognitive and physical fatigue) that are combined for a total overall fatigue score. Participants also completed the Grit Scale, which consists of items such as ambition, perseverance, and consistency. ANOVAs were conducted to determine differences in grit between PD-cognitively normal vs PD-MCI groups. Correlations and multiple hierarchical regressions controlling for significant demographics (i.e., age, education, sex), mood (i.e., depression, anxiety) and disease variables (i.e., disease duration, Levodopa equivalent dosage) with backwards elimination were conducted to evaluate the relationship between grit and fatigue (MFIS total score and MFIS cognitive and physical fatigue subscales).
Results:
There was no significant difference in grit total scores between PD patients who were cognitively normal or MCI (p = .336). Higher grit total scores predicted lower MFIS total (β = -.290, p = .005) and lower cognitive fatigue (β = -.336, p < .001) scores in the total sample, above and beyond relevant covariates as well as cognitive status. Grit scores were not significantly associated with physical fatigue (β = -.206, p = .066). Furthermore, cognitive status was not a significant predictor of fatigue scores in any of the models (all p's > .28).
Conclusions:
Findings indicate that higher levels of grit are associated with lower levels of fatigue, specifically cognitive fatigue, in individuals with PD. These results held true for those who were cognitively normal or with MCI, suggesting that grit may impact fatigue in non-demented PD patients regardless of cognitive status. These findings underscore the importance of considering grit when assessing or treating fatigue, particularly cognitive fatigue, in persons with PD.
Dementia prevalence and its costs to the health system continue to rise, highlighting the need for comprehensive care programs. This study evaluates the Care Ecosystem Program (CE) for dementia (memory.ucsf.edu/Care-Ecosystem) in New Orleans, LA and surrounding areas.
Participants and Methods:
The sample consisted of persons with dementia (PWD) and caregiver (CG) dyads enrolled in the CE from February 2019 to June 2022. Participants had a dementia diagnosis, lived in the community, and had at least one emergency department (ED) visit or hospitalization in the year prior. Healthcare utilization data were collected through self-report and electronic medical records. Dementia rating scales (QDRS, NPIQ) and caregiver wellbeing questionnaires (ZBI-12, PHQ-9, Self-Efficacy) were collected at baseline, 6 months, and 12 months. Dyads received monthly calls providing individualized care management. One-way repeated-measures ANOVAs were performed to identify change in utilization and caregiver wellbeing at 6 and 12 months compared to baseline. Partial η² effect sizes and Bonferroni post-hoc comparisons were calculated. Extreme healthcare utilization outliers were winsorized to the 95th percentile, and the significance level was set at p < .05.
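A hedged sketch of the winsorizing and repeated-measures ANOVA steps, assuming a long-format file with hypothetical column names:

```python
import numpy as np
import pandas as pd
from scipy.stats.mstats import winsorize
from statsmodels.stats.anova import AnovaRM

long_df = pd.read_csv("ce_utilization_long.csv")  # one row per dyad x timepoint

# Cap extreme utilization counts at the 95th percentile (upper tail only)
long_df["ed_visits"] = np.asarray(
    winsorize(long_df["ed_visits"].to_numpy(), limits=(0, 0.05)))

# One-way repeated-measures ANOVA across baseline, 6-month, and 12-month visits;
# effect sizes and Bonferroni post-hocs would be computed separately
aov = AnovaRM(long_df, depvar="ed_visits", subject="dyad_id",
              within=["timepoint"]).fit()
print(aov)
```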
Results:
A total of 150 dyads completed the program. PWD age averaged 81 years (SD=8); they were mostly female (65%), White (63%), and had a high school education or higher (88%). CG age averaged 65 years (SD=11.5); they were predominantly female (77%), White (63%), and had more than 12 years of education (70%). Half of the CGs were adult children (50%), followed by spouses/partners (41%). The QDRS indicated mild-moderate dementia severity, PWD had on average five neuropsychiatric symptoms, and Alzheimer's disease was the most frequent diagnosis (35%).
A statistically significant decrease occurred in ED visits [F(1, 115)=14.970, p<.001, η²=.115] from baseline to 6 months (MD=1.043, p<.001) and 12 months (MD=.621, p<.001), while an increase was noted when comparing 12-month to 6-month data (MD=.422, p<.001). A similar pattern was observed for hospitalizations [F(1,115)=19.021, p<.001, η²=.142], where admissions were reduced significantly compared to baseline (6-month MD=.483, p<.001; 12-month MD=3.88, p<.001) and an increase was seen after the 6-month mark (MD=.095, p<.001). Caregiver self-efficacy significantly improved [F(1,115)=15.478, p<.001, η²=.119] from baseline to 6 months into the CE (MD=-1.457, p<.001) and was maintained a year after enrollment (MD=-1.474, p<.001). There were no differences in self-efficacy when comparing 6-month and 12-month data. Robust effect sizes were noted for all results reported above. No other caregiver wellbeing measures showed significant changes over the three time points.
Conclusions:
The CE reduced healthcare utilization and improved caregiver self-efficacy for PWD-CG dyads at 6 and 12 months after enrollment. The utilization increase noted from the 6-month to the 12-month mark did not surpass baseline rates. This pattern is also consistent with literature reporting that healthcare utilization rises with the progression of dementia. More research is needed to identify potential moderating factors in the relationship between dementia progression and utilization. Future research will also benefit from including control groups to further understand the impact of comprehensive care programs for dementia.
Choice response time (RT) increases linearly with increasing information uncertainty, which can be represented externally or internally. Using a card-sorting task, we previously showed that Alzheimer's disease (AD) dementia patients were more impaired relative to cognitively normal older adults (CN) under conditions that manipulated internally cued rather than externally driven uncertainty, but that study was limited by a between-subjects design that prevented us from directly comparing the two uncertainty conditions. The objective of this study was to assess internally cued and externally driven uncertainty representations in CN and mild cognitive impairment (MCI) patients.
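For reference, the linear RT-uncertainty relation invoked here is usually formalized as the Hick-Hyman law; the equation below is a standard statement of that law, not taken from this abstract:

```latex
\[
\mathrm{RT} = a + b\,H, \qquad H = \sum_{i=1}^{n} p_i \log_2 \frac{1}{p_i}
\]
```

For n equiprobable alternatives (the externally cued manipulation), H reduces to log2 n, while unequal stimulus probabilities (the internally cued manipulation) lower H below log2 n.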
Participants and Methods:
Older participants (age > 60 years; N=49 CN, N=33 MCI patients) completed a card-sorting task that separately manipulated externally cued uncertainty (i.e., the number of sorting piles, with equal probability of each stimulus type) or internally cued uncertainty (i.e., the probability of each stimulus type, with a fixed number of sorting piles) at three uncertainty loads (low, medium, high). Exploratory analyses separated MCI patients by etiology into possible/probable cortical neurodegenerative process (i.e., AD, frontotemporal dementia; N=13) or non-neurodegenerative process (i.e., vascular, psychiatric, sleep, medication effect; N=20).
Results:
CN and MCI patients maintained a high level of accuracy on both tasks (M accuracy > .94 across conditions). MCI patients performed more slowly than CN on both the externally and internally cued tasks, and both groups showed a significant positive association between uncertainty load and RT (p's < .05). There was a group x load x uncertainty condition interaction (p = .05). For CN, the slope of the linear association between load and RT was significantly steeper in the externally cued than the internally cued condition. For MCI patients, in contrast, RTs increased with load to a similar degree in both conditions. Exploratory analyses showed that MCI-neurodegenerative patients were significantly slower than MCI-non-neurodegenerative patients and CN (p < .001). While the group x load x condition interaction was significant when comparing all three groups (p < .05), this was driven by the differences between CN and MCI patients described above; the MCI-neurodegenerative and non-neurodegenerative groups did not significantly differ in the strength of the RT-load association across the externally and internally cued conditions.
Conclusions:
Overall, CN participants showed greater RT slowing with increasing load of externally driven than internally cued uncertainty. Though they were slower than CNs, MCI patients (even those with a possible/probable cortical neurodegenerative condition) were able to accurately perform an internally cued uncertainty task and did not show differential slowing compared to an externally driven task. This provides preliminary evidence that internal representations of probabilistic information are intact in patients with MCI due to a neurodegenerative condition, meaning they may not depend on cortical processes. Future work will increase the sample sizes of the MCI-neurodegenerative and non-degenerative groups.