Awareness of risk factors associated with any form of impairment is critical for formulating optimal prevention and treatment planning. Millions worldwide suffer from some form of cognitive impairment, with the highest rates amongst Black and Hispanic populations. These groups have also been found to achieve lower scores on standardized neurocognitive testing than other racial/ethnic groups. Understanding the socio-demographic risk factors that lead to this discrepancy in neurocognitive functioning across racial groups is crucial. Adverse childhood experiences (ACEs) are one aspect of social determinants of health. ACEs have been linked to a greater risk of future memory impairment, including dementia. Moreover, higher rates of ACEs have been found amongst racial minorities. Considering the current literature, the purpose of this exploratory research is to better understand how social determinants, more specifically ACEs, may play a role in the development of cognitive impairment.
Participants and Methods:
This cross-sectional study included data from an urban, public Midwestern academic medical center. A total of 64 adult clinical patients were referred for a neuropsychological evaluation. All patients were administered a standardized neurocognitive battery that included the Montreal Cognitive Assessment (MoCA) as well as a 10-item ACE questionnaire, which measures levels of adverse childhood experiences. The sample was 73% Black and 27% White. The average age was 66 years (SD=8.6) and average education was 12.6 years (SD=3.4). A two-way ANOVA was conducted to evaluate the interaction of racial identity (White; Black) and ACE score on MoCA total score. An ACE score >4 was categorized as “high”; an ACE score <4 was categorized as “low.”
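A two-way factorial ANOVA of this kind can be run in Python with statsmodels; the sketch below is illustrative only, with hypothetical file and column names (`moca`, `race`, `ace_total`) standing in for the study’s actual data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data file and column names
df = pd.read_csv("patients.csv")
df["ace_group"] = (df["ace_total"] > 4).map({True: "high", False: "low"})

# Two-way ANOVA: race x ACE group on MoCA total score
model = smf.ols("moca ~ C(race) * C(ace_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II sums of squares
```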
Results:
There was neither a significant interaction of race and ACE group on MoCA score (p=.929) nor a significant main effect of ACE group (p=.541). Interestingly, there was a significant main effect of race on MoCA score (p=.029): White patients had an average MoCA score of 21.82 (SD=4.77), whereas Black patients had an average MoCA score of 17.54 (SD=5.91).
Conclusions:
Overall, Black patients demonstrated statistically lower scores on the MoCA than White patients, and this difference was not moderated by ACE group (i.e., there was no significant race-by-ACE interaction). Given this study’s findings, one’s level of adverse childhood experiences does not appear to impact one’s cognitive ability later in life. There is, however, a significant difference in cognitive ability between Black and White patients, which suggests there may be social determinants other than childhood experiences that influence cognitive impairment and warrant exploration.
Continuous performance tests (CPT) are often considered the gold standard for the diagnosis of attention-deficit/hyperactivity disorder (ADHD), particularly when parent and teacher rating scales are inconclusive. Prior work has indicated that CPT can also help differentiate between ADHD subtypes. However, the ability of CPT to differentiate ADHD subtype has not been examined among youth with comorbid ADHD and anxiety (ADHD+A). This is particularly concerning as the extant literature suggests that anxiety symptoms may exacerbate some deficits associated with ADHD (e.g., working memory, attention) and attenuate others (e.g., inhibition); thus, anxiety may influence expected patterns on the CPT. This study therefore examined the effect of ADHD subtype on CPT performance among youth with ADHD+A.
Participants and Methods:
Participants included 54 children ranging from 6 to 20 years old (Mage=11.83, 54% female) who were diagnosed with ADHD+A via neuropsychological evaluation. In terms of ADHD subtype, 51.9% (n=28) were diagnosed with ADHD combined or ADHD primarily hyperactive presentation and 48.1% (n=26) were diagnosed with ADHD primarily inattentive presentation. Approximately 46.3% (n=25) of participants were medication naive. Analyses were conducted using data from the Conners Kiddie Continuous Performance Test, Second Edition (KCPT-2), the Conners Continuous Performance Test, Second Edition (CPT-2), and the Conners Continuous Performance Test, Third Edition (CPT-3), which are part of the same family of performance-based attention measures. Independent samples t-tests were conducted to examine performance differences in aspects of attention (e.g., inattentiveness, sustained attention) and hyperactivity (e.g., impulsivity, inhibition).
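Independent-samples t-tests like these can be scripted compactly; the following Python sketch is a rough illustration, assuming a hypothetical data frame with a `subtype` column and one column per CPT index.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("cpt_data.csv")  # hypothetical file and columns
inattentive = df[df["subtype"] == "inattentive"]
combined = df[df["subtype"] == "combined_hyperactive"]

for measure in ["omissions", "variability", "hrt_block_change",
                "commissions", "perseverations", "hrt"]:
    t, p = stats.ttest_ind(combined[measure], inattentive[measure],
                           nan_policy="omit")
    print(f"{measure}: t = {t:.2f}, p = {p:.3f}")
```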
Results:
ADHD subtype was not significantly related to measures of inattentiveness, including the number of targets missed (omissions; t(39)=-0.53, p=.59) and variability in response time (variability; t(39)=-0.30, p=.77). In terms of sustained attention, ADHD subtype was not related to variability in response speed across blocks (Hit SEBC/HRT Block Change; t(39)=-0.26, p=.79). Importantly, these results were consistent regardless of ADHD medication status. ADHD subtype was also not significantly related to impulsivity, including responses to nontargets (commissions; t(39)=-1.05, p=.30), random or anticipatory responding (perseverations; t(39)=-0.19, p=.85), and mean response speed of correct responses (HRT; t(39)=-0.72, p=.48).
Conclusions:
The extant literature suggests that CPT can help clinicians differentiate between ADHD subtypes. However, the results of this study indicate no performance differences on the CPT between subtypes among youth with comorbid ADHD and anxiety. There are several limitations to consider. First, this study had a relatively small sample size, which also limited the ability to examine ADHD primarily hyperactive/impulsive as a distinct subtype. Additionally, this study did not examine the effect of individual anxiety disorders (e.g., generalized anxiety disorder, specific phobias). Finally, these findings may not generalize to other standardized measures of attention or to more ecologically valid measures. Despite these limitations, this study is an important step in understanding the relationship between ADHD+A and performance on attention measures. Clinicians should be cautious in using CPT results to distinguish between ADHD subtypes among children with comorbid anxiety.
While loss of insight into one’s cognitive impairment (anosognosia) is a feature of Alzheimer’s disease dementia, less is known about memory self-awareness in cognitively unimpaired (CU) older adults and adults with mild cognitive impairment (MCI), or about factors that may impact self-awareness. Locus of control, specifically external locus of control, has been linked to worse cognitive/health outcomes, though little work has examined locus of control as it relates to self-awareness of memory functioning or across cognitive impairment status. Therefore, we examined associations between locus of control and memory self-awareness and whether MCI status impacted these associations.
Participants and Methods:
Participants from the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) study (mean age=73.51; 76% women; 26% Black/African American) were classified as CU (n=2177) or MCI (amnestic n=313; non-amnestic n=170) using Neuropsychological Criteria. A memory composite score measured objective memory performance, and the Memory Functioning Questionnaire measured subjective memory. Memory self-awareness was defined as objective memory minus subjective memory, with positive values indicating overreporting of memory difficulties relative to actual performance (hypernosognosia) and negative values indicating underreporting (hyponosognosia). Internal (i.e., personal skills/attributes dictate life events) and external (i.e., environment/others dictate life events) locus of control scores came from the Personality in Intellectual Aging Contexts Inventory. General linear models, adjusting for age, education, sex/gender, depressive symptoms, general health, and vocabulary, examined the effects of internal and external locus of control on memory self-awareness and whether MCI status moderated these associations.
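A moderation model of this form can be expressed as a general linear model with an interaction term; the sketch below (Python, statsmodels) is illustrative, with hypothetical variable names replacing the ACTIVE study’s actual ones.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("active_data.csv")  # hypothetical file and columns
# self_awareness = objective memory minus subjective memory
formula = ("self_awareness ~ external_loc * C(mci_status) + internal_loc "
           "+ age + education + C(sex) + depressive_sx + health + vocabulary")
fit = smf.ols(formula, data=df).fit()
print(fit.summary())  # interaction terms test moderation by MCI status
```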
Results:
Amnestic and non-amnestic MCI participants reported lower internal and higher external locus of control than CU participants. There was a main effect of MCI status on memory self-awareness such that amnestic MCI participants showed the greatest degree of hyponosognosia/underreporting, followed by non-amnestic MCI participants, while CU participants slightly overreported their memory difficulties. While, on average, participants were fairly accurate at reporting their degree of memory difficulty, internal locus of control was negatively associated with self-awareness such that higher internal locus of control was associated with greater underreporting (β=-.127, 95% CI [-.164, -.089], p<.001). MCI status did not moderate this association. External locus of control was positively associated with self-awareness such that higher external locus of control was associated with greater hypernosognosia/overreporting (β=.259, 95% CI [.218, .300], p<.001). Relative to CU participants, amnestic, but not non-amnestic, MCI participants showed a stronger association between external locus of control and memory self-awareness. Specifically, higher external locus of control was associated with less underreporting of cognitive difficulties in amnestic MCI (β=.107, 95% CI [.006, .208], p=.038).
Conclusions:
In CU participants, higher external locus of control was associated with greater hypernosognosia/overreporting. In amnestic MCI, the association between lower external locus of control and greater underreporting of objective cognitive difficulties suggests that reduced insight in some people with MCI may result in not realizing the need for external supports, and therefore not asking for help from others. Alternatively, in amnestic participants with greater external locus of control, environmental cues/feedback may translate to greater accuracy in their memory self-perceptions. Longitudinal analyses are needed to determine how memory self-awareness is related to future cognitive decline.
Global neurocognitive impairment (NCI) has been reported in approximately 40% of White people living with HIV/AIDS (PLWHA). In Latino populations, reported rates vary from 30% to 77%. This variation reflects the lack of normative data for the Latino population and the application of norms developed for English speakers, increasing the probability of misidentification of NCI. Thus, identifying the best norms available for the Mexican population is important for the accurate identification of NCI. The aim of the present study was to investigate the rate and pattern of HIV-associated NCI and to compare rates of NCI calculated using norms for the Latin American population (NLAP) versus norms for the US-Mexico border region (NP-NUMBRS).
Participants and Methods:
CIOMS international ethical guidelines for the participation of human subjects in health research were followed. Eighty-two PLWHA living in Tijuana, Mexico, participated in the study (age: mean=39.6, SD=10.9; 28.3% female; years of education: mean=8.5, SD=3.6). PLWHA were recruited from the board-and-care home “Las Memorias” (73.4% on antiretroviral therapy; years since HIV diagnosis: mean=9.9, SD=7.1). Participants completed a neuropsychological test battery sensitive to HIV-associated NCI that assessed four cognitive domains (verbal fluency, speed of information processing, executive function, and learning/memory). Raw scores on these tests were transformed to percentiles using NLAP and to T-scores using NP-NUMBRS. T-scores were averaged across tests to compute domain-specific and global impairment scores. NCI was defined as percentile scores <16 and T-scores <40. McNemar’s tests were used to compare the rates of NCI obtained with NLAP versus NP-NUMBRS.
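McNemar’s test compares paired dichotomous classifications; a minimal Python sketch follows, using illustrative cell counts chosen only to match the marginal impairment rates reported in the Results, not the study’s actual cross-tabulation.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: NCI by NLAP (no/yes); columns: NCI by NP-NUMBRS (no/yes).
# Illustrative counts for n=82 consistent with 13.4% vs. 34.1% impairment.
table = np.array([[53, 18],
                  [1, 10]])
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)
```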
Results:
According to NLAP, the rate of global NCI was 13.4%; according to NP-NUMBRS, it was 34.1%. There was, however, a positive and significant correlation between global neurocognitive function scores computed with NLAP and NP-NUMBRS (r=0.66, p<.05). Rates of global NCI in PLWHA were significantly lower when using NLAP (McNemar chi-square=29.89; p<.001). Regarding the pattern of NCI, under both sets of norms learning and memory was the most affected cognitive domain, with 34% impaired according to NLAP versus 51% according to NP-NUMBRS.
Conclusions:
Utilizing NP-NUMBRS, rates of NCI are consistent with findings of prior studies. Employing NLAP, rates of NCI are lower than those reported in the literature. This is an important finding since the PLWHA included in the sample face several vulnerability factors, such as deportation, prostitution, drug abuse, and discrimination based on sexual preference, that could impact cognition. The pattern of neurocognitive function was also similar to that of prior studies in HIV. To make an accurate NCI diagnosis, it is important to use norms that consider the specific characteristics of the population. Diagnosing NCI is important because these deficits confer a strong risk of concurrent problems in a wide range of health behaviors, such as medication non-adherence, in PLWHA.
Modifying risk factors by using effective cognitive strategies across the life course may prevent or delay up to 40% of dementias through enhancing reserve/resilience. Reserve/resilience is an emerging concept that refers to the ability of the brain to cope with neuropathology and neurodegeneration. Emerging evidence suggests that bi/multilingualism is associated with cognitive advantages and improves resilience against dementia, stroke, and other cognitive disorders. Approximately seven thousand languages are spoken across the world, and speaking a second, third, or additional language is a natural phenomenon. Further, with globalization, societies are becoming increasingly linguistically diverse, and half of the world’s population is bi/multilingual. Exploring the beneficial effects of bi/multilingualism will have an impact on dementia risk reduction and recovery from brain injury. Bi/multilingualism has been demonstrated to delay the age at onset of dementia and to improve cognitive and language recovery after stroke. Advantages to executive function are thought to underlie its beneficial effects. Cortical morphometric, white matter connectivity, and functional brain changes in bilinguals represent the neural basis for its effect on cognitive reserve/resilience. In this presentation, insights from studies that have explored the role of bi/multilingualism in cognitive resilience against dementia and stroke will be discussed in the context of global research.
Upon conclusion of this course, learners will be able to:
1. Describe the impact of bilingualism on age at onset and cognitive manifestations of dementia and stroke
2. Discuss the mechanisms that underlie the potentially protective effects of bilingualism in dementia and stroke
3. Describe the role of bi/multilingualism on cognitive reserve/resilience in disorders of the brain
In the United States, Alzheimer’s disease (AD) is the most common cause of dementia and the seventh leading cause of death. Exercise has demonstrated health benefits in older adults and reduces the risk of developing AD. Exploring underlying biological mechanisms of exercise could aid in identifying therapeutic targets to prevent AD progression, especially for high-risk individuals such as those with Mild Cognitive Impairment (MCI). Many studies of dementia focus on memory; however, executive function and processing speed are also vulnerable to the neuropathology that causes AD. This exploratory study aimed to identify potential mechanisms by which physical activity can facilitate change in cognitive functioning in older adults. This was accomplished by investigating correlations between changes in neurology-related plasma proteins and changes in measures of executive function and processing speed after participation in a water-based exercise intervention.
Participants and Methods:
The sample included 20 older adults with amnestic MCI, ages 55-82 years (M=68.15, SD=7.75). Participants were predominantly male (90%), White (70%), and non-Hispanic (85%), with more than a high school education (95%). Participants engaged in supervised high-intensity water-based exercise three times per week for six months. Neuropsychological assessments and blood samples were collected at baseline and after completion of the exercise intervention. Cognitive measures included the Digit Span subtests from the Wechsler Adult Intelligence Scale, Fourth Edition; the Trail Making Test (TMT); the Stroop Color Word Test (SCWT); and the Symbol Digit Modalities Test (SDMT). Plasma protein levels were analyzed using the Olink Target 96 Neurology assay (Uppsala, Sweden), selected a priori for its established markers linked to neurobiological processes and diseases. Changes in cognitive measures and protein levels were assessed using paired-sample t-tests, and Pearson’s correlations were calculated for significant findings.
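Paired-sample t-tests on pre/post scores and correlations between change scores can be computed as in the rough Python sketch below, which assumes hypothetical wide-format column names.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("exercise_data.csv")  # hypothetical file and columns

# Pre/post change in a cognitive measure
t, p = stats.ttest_rel(df["sdmt_post"], df["sdmt_pre"])
print(f"SDMT: t = {t:.2f}, p = {p:.3f}")

# Correlation between protein change and cognitive change
d_protein = df["nbl1_post"] - df["nbl1_pre"]
d_cognition = df["sdmt_post"] - df["sdmt_pre"]
r, p = stats.pearsonr(d_protein, d_cognition)
print(f"dNBL1 vs dSDMT: r = {r:.2f}, p = {p:.3f}")
```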
Results:
Participants’ cognitive performance significantly improved on the SCWT color trial (t=-2.19, p=.042) and the SDMT (t=-2.17, p=.043). Decreases in plasma protein levels were found for GDNF family receptor alpha-1 (GFRA1: t=2.05, p=.055), neuroblastoma suppressor of tumorigenicity 1 (NBL1: t=2.13, p=.046), and neuropilin-2 (NRP2: t=2.61, p=.017). Correlational analyses showed that reductions in NBL1 were significantly associated with changes in both the SDMT (r=-.61, p=.006) and the color trial of the SCWT (r=.48, p=.038), and reductions in NRP2 were significantly associated with improvement on the SDMT (r=-.46, p=.045). GFRA1 was not significantly associated with change on any cognitive measure.
Conclusions:
In a sample of older adults with MCI, participation in high-intensity water-based exercise led to significant improvements in cognitive function as well as changes in the neurological plasma proteome. Improved outcomes in processing speed, attention, visuospatial scanning, and working memory were associated with changes in specific plasma protein concentrations. This highlights potential activity-dependent neurobiological mechanisms that may underlie the cognitive benefits derived from physical activity. Future studies should explore these findings in randomized controlled trials with a comparison condition and a larger sample size.
Sensitive and non-invasive methods of screening for early-stage Alzheimer’s disease (AD) are urgently needed. The digital clock drawing test (DCTclock™) is an established and well-researched neuropsychological tool that can aid in early detection of dementia. Other simple yet sensitive neuropsychological measures able to detect early stages of AD include the Trail Making Tests (TMT). We investigated the psychometric properties of the DCTclock relative to TMT-A and TMT-B. We then sought to understand the degree to which these neuropsychological tools (i.e., DCTclock, TMT-A, and TMT-B) versus the Montreal Cognitive Assessment (MoCA) predict beta-amyloid (Aβ) positron emission tomography (PET) status (positive or negative) in cognitively normal individuals.
Participants and Methods:
Participants included a sample of cognitively normal older adults (n=59, mean age=69.2, 64% female) recruited from the Butler Memory and Aging Program. The Linus Health DCTclock uses a digital pen to capture traditional clock drawing test performance and advanced analytics to evaluate the drawing process for indicators of cognitive difficulty. The DCTclock may have overlapping cognitive properties with TMT measures, such as efficiency, processing speed, and spatial reasoning. We compared latency measures (i.e., process efficiency, clock face speed, average latency, and processing speed) and spatial reasoning from the DCTclock to z-scores of TMT-A and TMT-B to detect any overlapping psychometric properties. Verbal fluency was included for discriminant validity. We then ran logistic regressions on a subset of the sample to compare the neuropsychological tests (DCTclock total score [a score that captures overall performance], TMT-A/B, and verbal fluency) to the MoCA, a commonly used cognitive screening tool, in determining PET status.
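A logistic regression of this type, including a Nagelkerke pseudo-R² like the one reported in the Results, could be fit as in this illustrative Python sketch (hypothetical column names throughout).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pet_data.csv")  # hypothetical file and columns
fit = smf.logit("pet_positive ~ dctclock_total + tmt_a + tmt_b "
                "+ fluency + age", data=df).fit()
print(fit.summary())

# Nagelkerke R^2 from the fitted and null log-likelihoods
n = fit.nobs
cox_snell = 1 - np.exp(2 * (fit.llnull - fit.llf) / n)
print("Nagelkerke R2 =", cox_snell / (1 - np.exp(2 * fit.llnull / n)))
```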
Results:
Highly correlated (r>.7) DCTclock variables were excluded. We found statistically significant correlations between some DCTclock measures and TMT-A/B, such as DCTclock drawing process efficiency with TMT-A and TMT-B (r=.45, p<.001 and r=.29, p=.026, respectively), and DCTclock average latency with TMT-A and TMT-B (r=.30, p=.024 and r=.26, p=.044, respectively). No statistically significant associations were found between any DCTclock measures and verbal fluency, or between DCTclock spatial reasoning and TMT-A/B. We then investigated the effect of these neuropsychological tests (DCTclock total score, TMT-A/B, verbal fluency) and age on the likelihood of PET positivity (subset of sample with PET, n=31). The model was statistically significant (χ²(5)=15.35, p<.01), explained 53% (Nagelkerke R²) of the variance in PET status, and correctly classified 74.2% of cases. DCTclock total score was the only significant predictor (p<.02) after controlling for TMT-A, TMT-B, verbal fluency, and age. Comparatively, there was no effect of MoCA and age (subset with PET, n=29) on the likelihood of PET positivity.
Conclusions:
Overall, these results suggest psychometric convergence between elements of the DCTclock and TMT-A/B, although there was no association between DCTclock spatial reasoning and the TMT measures. Further, when compared to the MoCA, the DCTclock and these commonly used neuropsychological tests (verbal fluency and TMT-A/B) were better predictors of PET status, with prediction primarily driven by the DCTclock. Digitized neuropsychological tools may provide additional metrics not captured by pen-and-paper tests that can detect AD-associated pathology.
Approximately 73% of the United States (US) population on public water systems receives fluoridated water for tooth decay prevention. In Los Angeles (LA) County, 89% of cities are at least partially fluoridated. Drinking water is the primary source of fluoride exposure in the US. Studies conducted in Mexico and Canada suggest that prenatal fluoride exposure, at levels relevant to the US, may contribute to poorer neurodevelopment in offspring. However, data on biomarkers and patterns of fluoride exposure among US pregnant women are scarce. This study examined urinary fluoride levels according to sociodemographic factors and metal co-exposures among pregnant women in the US.
Participants and Methods:
Participants were from the Maternal and Developmental Risks from Environmental and Social Stressors (MADRES) cohort based in Los Angeles, California. There were 293 and 490 women with urine fluoride measured during the first and third trimesters of pregnancy, respectively. An intra-class correlation coefficient examined consistency of specific gravity-adjusted maternal urinary fluoride (MUFsg) between trimesters. Kruskal-Wallis and Mann-Whitney U tests examined associations of MUFsg with sociodemographic variables. Spearman correlations examined associations of MUFsg with blood and urine metals within and between trimesters. A False Discovery Rate (FDR) correction accounted for multiple comparisons. The criterion for statistical significance was an alpha of 0.05.
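The main analyses described here map onto standard Python routines; the sketch below is a rough outline only, assuming hypothetical column names (pingouin for the ICC, scipy for the group and correlation tests, statsmodels for the FDR correction).

```python
import pandas as pd
import pingouin as pg
from scipy import stats
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("madres_data.csv")  # hypothetical file and columns

# ICC for MUFsg consistency between trimesters (long format)
long = df.melt(id_vars="pid", value_vars=["muf_t1", "muf_t3"],
               var_name="trimester", value_name="muf")
print(pg.intraclass_corr(data=long, targets="pid",
                         raters="trimester", ratings="muf"))

# Kruskal-Wallis test of MUFsg across race/ethnicity groups
groups = [g["muf_t3"].dropna() for _, g in df.groupby("race_eth")]
print(stats.kruskal(*groups))

# Spearman correlations with metals, FDR (Benjamini-Hochberg) corrected
pvals = []
for metal in ["blood_pb", "urine_cd", "urine_cu", "urine_w"]:
    rho, p = stats.spearmanr(df["muf_t3"], df[metal], nan_policy="omit")
    pvals.append(p)
print(multipletests(pvals, alpha=0.05, method="fdr_bh"))
```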
Results:
Participants were approximately 29 years old on average, and 80% were Hispanic or Latina. Median (IQR) MUFsg during trimesters one and three was 0.65 (0.50) mg/L and 0.80 (0.59) mg/L, respectively. MUFsg levels were moderately consistent between trimesters (N=292, ICC=0.46, 95% CI: 0.32, 0.57). Maternal age was positively associated with MUFsg during the first (ρ=0.16, p=0.006) and third (ρ=0.18, p<0.001) trimesters. MUFsg differed by race/ethnicity during the first and third trimesters (N=293, H(3)=7.99, p=0.046; N=486, H(3)=25.31, p<0.001, respectively). Specifically, MUFsg was higher for White, non-Hispanic participants (first trimester median (IQR)=1.03 (1.31) mg/L; third trimester median (IQR)=1.32 (1.24) mg/L) than for Hispanic participants in both trimesters (first trimester median (IQR)=0.64 (0.48) mg/L; third trimester median (IQR)=0.76 (0.55) mg/L). Additionally, during trimester three, MUFsg was higher for White, non-Hispanic participants than for Black, non-Hispanic participants (median (IQR)=0.82 (0.49) mg/L). MUFsg also differed by education during trimester one (N=293, H(4)=10.61, p=0.031), and was higher for participants with some graduate training than for those with high school or some college/technical school education (ps=0.03 and 0.04, respectively). After FDR correction, MUFsg was associated with blood lead (N=91, ρ=0.29, p=0.024) and urinary cadmium (N=279, ρ=0.19, p=0.042), copper (N=279, ρ=0.16, p=0.042), and tungsten (N=279, ρ=0.16, p=0.049) during trimester three.
Conclusions:
Consistent with studies conducted in Canada and Mexico, MUFsg increased across pregnancy. Lower MUFsg among Hispanic and Non-Hispanic Black participants may reflect lower tap water consumption. Metal co-exposures are important to consider when examining potential neurodevelopmental impacts of fluoride.
Functional near-infrared spectroscopy (fNIRS) is a form of non-invasive neuroimaging that uses light to measure changes in oxygenated and deoxygenated hemoglobin (Yucel et al., 2021). Relative to fMRI, fNIRS is significantly cheaper and less susceptible to motion artifacts, enabling researchers to acquire data in more ecologically valid environments, and has a higher temporal resolution that makes it well suited for connectivity analyses (Tak and Ye, 2014). fNIRS is, however, uniquely limited by cortical anatomy. With a typical probe array having a penetration depth of up to 3 cm, the benefits of fNIRS may be limited by the neocortical atrophy that is characteristic of neurodegeneration. We present preliminary findings comparing fNIRS probe sensitivity in older adults diagnosed with posterior cortical atrophy (PCA) relative to cognitively intact older adults using Monte Carlo (MC) simulations. MC simulations offer probabilistic models that simulate photon movement through tissues of interest. We were particularly interested in fNIRS sensitivity in the occipitoparietal cortices, since these regions are characteristically affected in PCA.
Participants and Methods:
We acquired high-resolution structural (T1) MRI on 3 cognitively intact older adults and 3 individuals who had received a clinical diagnosis of PCA according to Crutch et al. (2017) criteria. Individual T1 scans were preprocessed and transformed into a two-dimensional (2D) surface using FreeSurfer. This continuous 2D surface was then segmented into its main tissue priors, as well as its pial surfaces. Segmented MRIs were then imported into the AtlasViewer software and registered to our full-head fNIRS probe array via an affine transformation. We embedded the GPU-accelerated Monte Carlo Extreme 3D light transport simulator (Fang and Boas, 2009) into AtlasViewer, which enabled us to launch 10 million photons from each optode, compared to the 1 million that AtlasViewer uses by default, thereby providing more accurate results (Aasted et al., 2015). We then assessed the sensitivity profile (log units), a mathematical estimate of optical density, for the inferior and superior occipital gyri, the middle occipital gyrus, the superior and inferior parietal lobules, and the left and right precunei.
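To convey the core idea of Monte Carlo photon transport, the toy Python sketch below tracks photons through a semi-infinite homogeneous medium with isotropic scattering. It is a conceptual illustration only; the study itself used the full 3D Monte Carlo Extreme simulator with subject-specific segmented anatomy, and the optical coefficients below are illustrative, not study values.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_max_depth(mu_a, mu_s, n_photons=50_000):
    """Toy photon random walk: mean maximum depth (mm) reached."""
    mu_t = mu_a + mu_s          # total interaction coefficient (1/mm)
    albedo = mu_s / mu_t        # survival probability per interaction
    depths = np.empty(n_photons)
    for i in range(n_photons):
        z, uz, zmax = 0.0, 1.0, 0.0   # photon enters at the surface
        while True:
            z += uz * (-np.log(rng.random()) / mu_t)  # exponential free path
            if z < 0:
                break                                  # escaped the surface
            zmax = max(zmax, z)
            if rng.random() > albedo:
                break                                  # absorbed
            uz = 2.0 * rng.random() - 1.0              # isotropic redirection
        depths[i] = zmax
    return depths.mean()

# Illustrative tissue-like optical properties
print(f"mean max depth = {mean_max_depth(0.02, 1.1):.1f} mm")
```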
Results:
Among the regions interrogated, five channels on our fNIRS probe differed markedly between the controls and those with PCA. Specifically, sensitivity values for channels covering the right inferior occipital gyrus (Hedges’ g=8.04), the left superior occipital gyrus (Hedges’ g=2.46), the right inferior parietal lobule (Hedges’ g=8.89), and the right (Hedges’ g=9.43) and left (Hedges’ g=14.83) precunei were all markedly lower in those with PCA.
Conclusions:
We provided preliminary evidence that the sensitivity of fNIRS appears to be markedly reduced in those with PCA. This is especially relevant for researchers using fNIRS in populations with neurodegeneration. Future work will evaluate these findings in a larger sample as well as in other neurologic conditions with the goal of helping researchers appropriately power their studies and interpret their results.
Modifiable dementia risk factors such as depression, cardiovascular disease, and physical and cognitive activity account for 40-50% of dementia risk, and their association with neuropsychological performance is evident in both preclinical and prodromal dementia stages. Over the course of her career, Professor Naismith has examined how modifiable risk factors relate to various aspects of cognition and brain degeneration and how best to treat them. She has led the development of cognitive training programs and clinical trials targeting these risk factors. She has authored more than 350 papers across a range of fields, largely focused on cognition but also utilising neuroimaging, genetics, e-health, data syntheses, clinical trials, and health services research. Her most recent work focuses on how sleep and circadian disturbance are linked to cognitive decline, how best to treat sleep disturbance in older people, and how to utilise new digital sleep technologies to derive maximal reach and scale within the rapidly rising ageing population.
In this presentation, the evolution of her program of work over time will be considered with respect to core discipline-specific foundations but also amidst the changing research landscape, research challenges and the need to optimise health impact. The importance of multidisciplinarity, career mentors and partners, capacity building, and engaging with government and policy makers will be discussed as well as other factors considered to be key to mid-career research success.
As the global population of older adults increases, it is crucial to study the healthy aging brain. Despite white matter (WM) representing approximately 50% of brain tissue, investigations of changes in WM have been limited. Given that women outlive men in most populations worldwide, evaluating factors such as sex and gender in the normal aging trajectory is particularly important. However, past research has been limited by varying definitions of these terms and by methodological challenges. Further, few studies have employed longitudinal designs. The objectives of the present study were to 1) compare sex similarities and differences in WM microstructure, and 2) investigate longitudinal changes in WM in healthy older adults. The Parkinson’s Progression Markers Initiative (PPMI) is an ongoing observational longitudinal study designed to investigate biomarkers related to Parkinson’s disease (for up-to-date information, see https://www.ppmi-info.org/). The PPMI study presents a convenient opportunity to investigate the expected aging trajectory among healthy older adults by using data from its healthy control cohort.
Participants and Methods:
Participants (N=40) included 16 females (mean age=60.50±5.99) and 24 males (mean age=65.50±7.53) from the healthy control cohort of the PPMI. Diffusion tensor imaging (DTI) data from two time points (baseline and approximately one year later) were analyzed using tract-based spatial statistics from the FMRIB Software Library (FSL). Diffusion-weighted images were acquired on a Siemens 3T TIM Trio scanner with a 12-channel matrix head coil. All images were acquired with a spin-echo, echo-planar imaging sequence with 64 gradient directions and a b-value of 1000 s/mm², with a voxel size of 2 mm³. Two analyses were conducted: 1) a between-groups analysis comparing differences in WM microstructure between males and females at baseline while controlling for age and total brain volume (TBV), and 2) a within-subject analysis examining longitudinal changes in WM from baseline to one year later. DTI metrics included fractional anisotropy (FA) and mean diffusivity (MD).
Results:
Males were significantly older than females and had significantly larger TBVs. Voxelwise comparisons revealed no statistically significant differences in FA or MD between males and females when controlling for age and TBV. Longitudinally over one year, decreases in MD (p<.05, corrected) were found in the right superior and inferior longitudinal fasciculi, the right corticospinal tract, and the right inferior fronto-occipital fasciculus. Stability in FA was observed over one year. There was also an average one-point decline on the Montreal Cognitive Assessment over the one-year study period.
Conclusions:
No significant sex differences in WM microstructure were found, which agrees with a published review of the literature concluding that men and women show very similar brain structure after accounting for brain-size differences. Across the entire sample, longitudinal changes in WM were captured via neuroimaging over a one-year time frame. Follow-up exploration of these data suggests substantial intraindividual variability in trajectories over time, which may have affected the overall group trajectory. Continued research on factors that contribute to identifying individual healthy aging trajectories is warranted.
Daily driving behavior is an ultimate measure of cognitive functioning, requiring multiple cognitive domains to work synergistically to complete this complex instrumental activity of daily living. As the world’s population continues to grow and age, motor vehicle crashes become more frequent. Cognitive and brain reserve are developing constructs that are frequently assessed in aging research. Cognitive reserve preserves functioning in the face of greater loss of brain structure, as experienced during cognitive impairment or dementia. This study determined whether cognitive reserve and brain reserve predict changes in adverse driving behaviors in cognitively normal older adults.
Participants and Methods:
Cognitively normal participants (Clinical Dementia Rating 0) were enrolled from longitudinal studies at the Knight Alzheimer’s Disease Research Center at Washington University. Participants (n=186) were ≥65 years of age and were required to have magnetic resonance imaging (MRI) data, neuropsychological testing data, and one full year of naturalistic driving data prior to the beginning of the COVID-19 lockdown in the United States (March 2020). Naturalistic driving behavior data were collected via the Driving Real-World In-vehicle Evaluation System (DRIVES). DRIVES variables included idle time, overspeeding, aggression, and number of trips, including those during the day and at night. MRI was performed on a 3T scanner using a research imaging protocol based upon ADNI that included a high-resolution T1 MPRAGE for assessment of brain structures, used to produce normalized whole brain volume (WBV) and hippocampal volume (HV). WBV and HV were each assessed using tertiles comparing the top 66% with the bottom 33%, where the bottom represented increased atrophy. The Word Reading subtest of the Wide Range Achievement Test 4 (WRAT 4) was utilized as a proxy for cognitive reserve. WRAT 4 scores were similarly split, comparing the top 66% with the bottom 33%, where the bottom represented poorer performance. Linear mixed-effects models adjusted for age, education, and sex.
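Slope differences across reserve groups can be tested with a linear mixed-effects model with random intercepts and slopes; the Python sketch below is a rough illustration, assuming a hypothetical long-format data frame.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("drives_data.csv")  # hypothetical long-format file
# The time x group interaction tests whether driving-behavior slopes
# differ by hippocampal-volume tertile, adjusting for age, education, sex.
model = smf.mixedlm(
    "speeding ~ time * C(hv_group) + age + education + C(sex)",
    data=df, groups=df["pid"], re_formula="~time",
).fit()
print(model.summary())
```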
Results:
Participants were on average 73.7±4.9 years old and college educated (16.6±2.2 years of education), with a similar sex distribution (100 males, 86 females). Analyses showed statistically significant differences in slopes, where participants with increased hippocampal and whole brain atrophy, respectively, were less likely to overspeed (p=0.0035; p=0.0003), drive aggressively (p=0.0016; p<0.0001), and drive during the daytime (p<0.0001; p<0.0001). However, they were more likely to spend more time idling (p=0.0005; p<0.0001) and to drive during the nighttime (p=0.003; p=0.0002). Similar findings occurred with the WRAT 4, where participants with lower scores were less likely to overspeed (p=0.0035), drive aggressively (p=0.0024), hard brake (p=0.0180), and drive during the daytime (p<0.0001), while they were also more likely to spend more time idling (p=0.0012) and to drive during the nighttime (p=0.0004).
Conclusions:
Numerous changes in driving behaviors over time were predicted by increased hippocampal and whole brain atrophy, as well as by lower cognitive reserve as proxied by the WRAT 4. These changes show that those with lower brain and cognitive reserve are more likely to restrict their driving behavior and adapt their daily behaviors as they age. These results suggest that older adults with lower brain and cognitive reserve are more likely to avoid highways, where speeding and aggressive maneuvers are more frequent.
Olfactory dysfunction can influence nutritional intake, the detection of environmental hazards, and quality of life. Prior research has found discordance between subjective and objective measures of olfaction. In people living with HIV (PLWH), olfactory dysfunction is widely reported; however, few studies have examined concordance between subjective olfactory self-ratings and performance on an objective psychophysical measure of olfaction and associated factors in men living with HIV (MLWH).
Participants and Methods:
MLWH (n=51, mean age=54 years, 66.7% Black) completed two subjective olfaction ratings (two 5-point Likert scales), the Smell Identification Test (SIT), cognitive measures (HVLT-R, TMT), and self-report questionnaires assessing smell habits, mood, cognitive failures, and quality of life. Participants were categorized into one of four groups: true positives (TP; impaired subjective olfaction and objective olfaction dysfunction), false negatives (FN; intact subjective olfaction and objective olfaction dysfunction), false positives (FP; impaired subjective olfaction and objective normosmia), and true negatives (TN; intact subjective olfaction and normosmia). Established formulas were used to calculate the sensitivity and specificity of subjective olfaction, and t-tests and ANOVA were used to examine potential demographic, clinical, and cognitive factors contributing to discordance between subjective and objective olfaction dysfunction.
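Sensitivity and specificity follow directly from the four concordance groups; the short Python sketch below uses approximate counts back-calculated from the percentages reported in the Results (rounded, for illustration only).

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity/specificity of subjective ratings vs. the SIT."""
    return tp / (tp + fn), tn / (tn + fp)

# Approximate counts for n=51 from the reported percentages:
# TP 31.4% -> 16, FN 29.4% -> 15, FP 3.9% -> 2, TN 35.3% -> 18
sensitivity, specificity = sens_spec(tp=16, fn=15, fp=2, tn=18)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```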
Results:
Across both subjective self-report items, 35.3% reported olfactory dysfunction, whereas 60.8% had objective olfactory dysfunction on the SIT (score <33). Black MLWH had significantly higher rates of subjective (Black 41.2% vs. White 35.3%) and objective (Black 73.5% vs. White 35.3%) olfactory dysfunction (χ²(1)=9.22, p=.002). We found discordance between subjective and objective olfaction measures, with 29.4% of the sample having objective olfactory dysfunction without recognizing it (FN). In comparison, 3.9% with self-rated olfactory impairment had normal objective olfaction scores (FP). Additionally, there was concordance between subjective self-reports and objective olfaction, with 35.3% correctly identifying normal olfaction (TN) and 31.4% correctly identifying olfactory dysfunction (TP). Those unaware of their olfactory dysfunction (FN) reported using fewer scented products in daily life on the Smell Habits Questionnaire. Although the FN group had faster TMT scores, these findings were no longer significant after the removal of three outliers in the TP group (e.g., time to complete greater than 350 seconds).
Conclusions:
Our findings cohere with work in healthy older adults, traumatic brain injury, and Parkinson’s disease documenting that subjective ratings may inadequately capture the full range of a person’s olfactory status. We extend these findings to a sample of MLWH, in which rates of subjective and objective olfactory dysfunction were 35% and 61%, respectively. Unawareness of olfactory dysfunction in MLWH was associated with fewer daily smell habits and, paradoxically, faster TMT performance. The higher number of smell habits in the TP group indicates that more frequent odor exposure may increase sensitivity to olfactory declines. Future studies with larger samples will be helpful in understanding the full nature of these relationships. Lastly, given that one-third of the sample showed discordance between subjective and objective olfaction, objective olfaction measures may be useful to consider in neuropsychological assessment and standard clinical care for PLWH.
The present study aims to better understand learning strategies and difficulties in autistic youth. Previous studies have found that autistic youth have difficulties with executive function skills and poorer performance on memory and learning tasks, especially those that require spontaneous retrieval of information compared to memory tasks that provide external retrieval cues. Additionally, it has been theorized that autistic youth employ a serial rather than a semantic approach to learning. The current study hypothesized that the autistic sample would (a) show significant difficulties in learning and memory, (b) employ a serial approach more frequently and a semantic approach less frequently than the California Verbal Learning Test (CVLT) normative sample, and (c) benefit significantly when provided with external retrieval cues.
Participants and Methods:
Archival data from a mixed clinical and research database were examined for this study. Participants included 740 autistic individuals between the ages of 5.5 and 24.3 years (M=10.90, SD=2.98). The sample consisted of 22.2% girls and 34.0% Black, Indigenous, and people of color (BIPOC). All individuals had a FSIQ >70 (M=99.91, SD=16.09) and were clinically diagnosed with autism using DSM-IV-TR or DSM-5 criteria by a clinician at an autism diagnostic center. Participants completed the age-appropriate California Verbal Learning Test (CVLT; Delis et al., 1987), a neuropsychological measure that examines verbal memory and learning. One-sample t-tests were used to examine the sample's verbal memory abilities and learning strategies. A paired-sample t-test was used to evaluate the sample's performance before and after an external retrieval cue was given.
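Comparisons against normative data and the cued-recall benefit can be scripted as below; this Python sketch is illustrative, assuming hypothetical columns holding normative z-scores and raw recall scores.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("cvlt_data.csv")  # hypothetical file and columns

# One-sample t-tests against the normative mean (z = 0)
print(stats.ttest_1samp(df["total_recall_z"], popmean=0))
print(stats.ttest_1samp(df["semantic_cluster_z"], popmean=0))
print(stats.ttest_1samp(df["serial_cluster_z"], popmean=0))

# Paired t-test: free recall vs. cued recall (external retrieval cue)
print(stats.ttest_rel(df["free_recall"], df["cued_recall"]))
```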
Results:
Results from the one-sample t-tests indicate that the autistic sample performed worse than the CVLT normative sample, with a large effect size (t(739)=-9.44, p<.001, Cohen's d=1.292). Secondly, the autistic sample was less likely to use a semantic learning approach (t(739)=-1.841, p=.033, Cohen's d=1.234), but not more likely to use a serial approach (t(739)=-0.040, p=.484), compared to the normative sample. Lastly, the paired-sample t-test showed that the sample performed significantly better after receiving the external retrieval cue (t(739)=-2.570, p=.005, Cohen's d=.770).
Conclusions:
The data supported the first hypothesis: autistic individuals have increased difficulties with learning and verbal memory. However, the data only partially supported the second hypothesis. The sample was less likely to use a semantic approach to learning but was not more likely to use serial learning. This finding runs counter to the Weak Central Coherence (WCC) theory, which suggests that autistic individuals are more likely to have detail-oriented, bottom-up cognitive thinking styles, consistent with a serial learning strategy. Lastly, the data showed improvement when autistic individuals received a retrieval cue. This result supports the Task Support Hypothesis (TSH) and indicates that autistic individuals benefit from cues for memory recall, particularly those that capitalize on their areas of strength. This study did not use a control group and is limited in ethno-racial diversity; therefore, these are preliminary findings that require replication.
The ability to generate, plan for, and follow through with goals is essential to everyday functioning. Compared to young adults, cognitively normal older adults have more difficulty with a variety of cognitive functions that contribute to goal setting and follow-through. However, how these age-related cognitive differences impact real-world goal planning and success remains unclear. In the current study, we aimed to better understand the impact of older age on everyday goal planning and success.
Participants and Methods:
Cognitively normal young adults (18-35 years, n= 57) and older adults (60-80 years, n= 49) participated in a 10-day 2-session study. In the first session, participants described 4 real-world goals that they hoped to pursue in the next 10 days. These goals were subjectively rated for personal significance, significance to others, and vividness, and goal descriptions were objectively scored for temporal, spatial, and event specificity, among other measures. Ten days later, participants rated the degree to which they planned for and made progress in their real-world goals since session one. Older adults also completed a battery of neuropsychological tests.
Results:
Some key results are as follows. Relative to young adults, cognitively normal older adults described real-world goals that navigated smaller spaces (p=0.01) and that they perceived as more important to other people (p=0.03). Older adults also planned more during the 10-day window (p<0.001). There was not, however, a statistically significant age-group difference in real-world goal progress (p=0.65). Nonetheless, among older participants, goal progress was related to higher mental processing speed, as shown by the Trail Making Test Part A (r=0.36, p=0.02), and to the creation of goals confined to specific temporal periods (r=0.35, p=0.01). Older participants who scored lower on the Rey Complex Figure Test (RCFT) long-delay recall trial reported that their goals were more like ones that they had set in the past (r=-0.34, p=0.02), and higher episodic memory as shown by the RCFT was associated with more spatially specific goals (r=0.32, p=0.02), as well as greater use of implementation intentions in goal descriptions (r=0.35, p=0.02).
Conclusions:
Although older adults tend to show decline in several cognitive domains relevant to goal setting, we found that cognitively normal older adults did not make significantly less progress toward a series of real-world goals over a 10-day window. However, relative to young adults, older adults tended to pursue goals which were more important to others, as well as goals that involved navigating smaller spaces. Older adults also appear to rely on planning more than young adults to make progress toward their goals. These findings reveal age group differences in the quality of goals and individual differences in goal success among older adults. They are also in line with prior research suggesting that cognitive aging effects may be more subtle in real-world contexts.
Fetal alcohol spectrum disorder (FASD) is a common neurodevelopmental condition associated with deficits in cognitive functioning (executive functioning [EF], attention, working memory, etc.), behavioral impairments, and abnormalities in brain structure, including cortical and subcortical volumes. Rates of comorbid attention-deficit/hyperactivity disorder (ADHD) are high in children with FASD and contribute to significant functional impairments. Sluggish cognitive tempo (SCT) comprises a cluster of symptoms (e.g., underactivity/slow movement, confusion, fogginess, daydreaming) found to be related to but distinct from ADHD, and previous research suggests that it may be common in FASD. We explored SCT by examining the relationship between SCT and both brain volumes (corpus callosum, caudate, and hippocampus) and objective EF measures in children with FASD vs. typically developing controls.
Participants and Methods:
This is a secondary analysis of a larger longitudinal CIFASD study that included 35 children with prenatal alcohol exposure (PAE) and 30 controls between the ages of 9 and 18 at follow-up. Children completed a set of cognitive assessments (WISC-IV, D-KEFS, and NIH Toolbox) and an MRI scan, while parents completed the Child Behavior Checklist (CBCL), which includes an SCT scale. We examined group differences between PAE and controls in SCT symptoms, EF scores, and subcortical volumes. Then, we performed within- and between-group comparisons between SCT and subcortical brain volumes, with and without controlling for total intracranial volume, age, attention problems, and ADHD problems. Finally, we computed correlations between SCT and EF measures for both groups.
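Correlations controlling for covariates of this kind amount to partial correlations; the Python sketch below (pingouin) is a rough illustration with hypothetical column names.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("cifasd_data.csv")  # hypothetical file and columns
pae = df[df["group"] == "PAE"]

# SCT vs. hippocampal volume, controlling for intracranial volume,
# age, and parent-rated attention/ADHD problems
print(pg.partial_corr(data=pae, x="hippocampus_vol", y="sct_symptoms",
                      covar=["icv", "age", "attention", "adhd"]))
```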
Results:
Compared to controls, participants with PAE showed significantly more SCT symptoms on the CBCL (t(57)=3.66, p=0.0006), more parent-rated attention problems and ADHD symptoms, lower scores across several EF measures (D-KEFS Trail Making and Verbal Fluency; WISC-IV Digit Span, Symbol Search, and Coding; effect sizes ranging from 0.44 to 1.16), and smaller regional volumes in the caudate, hippocampus, and posterior areas of the corpus callosum. In the PAE group, a smaller hippocampus was associated with more SCT symptoms (controlling for parent-rated attention problems and ADHD problems, age, and intracranial volume). However, in the control group, larger mid-posterior and posterior corpus callosum volumes were significantly associated with more SCT symptoms (controlling for parent-rated attention problems, intracranial volume, and age; r(24)=0.499, p=0.009; r(24)=0.517, p=0.007). In terms of executive functioning, children in the PAE group with more SCT symptoms performed worse on letter sequencing of the Trail Making subtest (controlling for attention problems and ADHD symptoms). In comparison, those in the control group with more SCT symptoms performed better on letter sequencing and combined number-letter sequencing of the Trail Making subtest (controlling for attention problems).
Conclusions:
Findings suggest that children with FASD experience elevated SCT symptoms compared to typically developing controls, which may be associated with worse performance on EF tasks and smaller subcortical volumes (hippocampus) when taking attention difficulties and ADHD symptoms into account. Additional research into the underlying causes and correlates of SCT in FASD could result in improved tailoring of interventions for this population.
Rapid neurodevelopment occurs during adolescence, which may increase the developing brain’s susceptibility to environmental risk and resilience factors. Adverse childhood experiences (ACEs) may confer additional risk to the developing brain; ACEs have been linked with alterations in BOLD signaling in brain regions underlying inhibitory control. Potential resiliency factors, like a positive family environment, may attenuate the risk associated with ACEs, but limited research has examined potential buffers against adversity’s impact on the developing brain. The current study aimed to examine how ACEs relate to BOLD response during successful inhibition on the Stop Signal Task (SST) in regions underlying inhibitory control from late childhood to early adolescence, and to assess whether aspects of the family environment moderate this relationship.
Participants and Methods:
Participants (N=9,080; Mage=10.7, range=9-13.8 years; 48.5% female, 70.1% non-Hispanic White) were drawn from the larger Adolescent Brain Cognitive Development (ABCD) Study cohort. ACE risk scores were created (by EAS) using parent and child reports of youth’s exposure to adverse experiences collected from baseline to the 2-year follow-up. For family environment, levels of family conflict were assessed from youth reports on the Family Environment Scale at baseline and 2-year follow-up. The SST, a task-based fMRI paradigm, was used to measure inhibitory control (contrast: correct stop > correct go); the task was administered at baseline and 2-year follow-up. Participants were excluded if flagged for poor task performance. ROIs included the left and right dorsolateral prefrontal cortex, anterior cingulate cortex, anterior insula, inferior frontal gyrus (IFG), and pre-supplementary motor area (pre-SMA). Separate linear mixed-effects models assessed the relationship between ACEs and BOLD signaling in each ROI while controlling for demographics (age, sex assigned at birth, race, ethnicity, household income, parental education) and internalizing scores, with random effects of subject and MRI model.
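A mixed-effects specification with crossed random effects for subject and scanner can be written in statsmodels via variance components; the sketch below is illustrative only, with hypothetical column names standing in for the ABCD variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("abcd_sst.csv")  # hypothetical file and columns
df["all"] = 1  # single top-level group so both factors enter as
               # crossed variance components

model = smf.mixedlm(
    "roi_bold ~ ace_score + age + C(sex) + C(race_eth) + income "
    "+ parent_ed + internalizing",
    data=df, groups="all",
    vc_formula={"subject": "0 + C(pid)", "scanner": "0 + C(mri_model)"},
).fit()
print(model.summary())
```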
Results:
Greater ACE exposure was associated with reduced BOLD response in the opercular region of the right IFG (b=-0.002, p=.02) and in the left (b=-0.002, p=.01) and right pre-SMA (b=-0.002, p=.01). Family conflict was related to altered activation patterns in the left pre-SMA, where youth with lower family conflict demonstrated a more robust negative relationship (b=.001, p=.04). ACEs were not a significant predictor in the other ROIs, and the relationship between ACEs and BOLD response did not significantly differ across time. Follow-up brain-behavior correlations showed that in youth with lower ACE exposure, greater activation in the pre-SMA was correlated with less impulsive behavior.
Conclusions:
Preadolescents with ACE history show blunted activation in regions underlying inhibitory control, which may increase the risk for future poorer inhibitory control with downstream implications for behavioral/health outcomes. Further, results demonstrate preliminary evidence for the family environment’s contributions to brain health. Future work is needed to examine other resiliency factors that may modulate the impact of ACE exposure during childhood and adolescence. Further, clinical scientists should continue to examine the relationship between ACEs and neural and behavioral correlates of inhibitory control across adolescent development, as risk-taking behaviors progress.
Research examining the influence of bilingualism on cognition continues to grow. Past research shows that monolingual speakers outperform bilingual speakers on language, memory, and attention and processing speed tasks. However, the opposite pattern, favoring bilingual speakers, has been found when comparing executive functioning abilities. Furthermore, some researchers have reported no differences in executive functioning abilities between young adult monolingual and young adult bilingual speakers. Moreover, limited research exists examining cognitive abilities among monolinguals, bilinguals who learned a language (e.g., English) first, and bilinguals who learned the same language (e.g., English) second. We examined the cognitive abilities (e.g., memory) of young adult monolinguals compared to young adult bilinguals who learned English as a first or second language. It was expected that the monolingual group would outperform both bilingual groups on memory, language, and attention and processing speed tasks, but that no differences would be found on executive functioning tasks.
Participants and Methods:
The sample consisted of 149 right-handed undergraduate students with a mean age of 19.58 years (SD=1.90). Participants were neurologically and psychologically healthy and were divided into three language groups: English first language (EFL) monolingual speakers, EFL bilingual speakers, and English second language (ESL) bilingual speakers. All participants completed a background questionnaire and a comprehensive neuropsychological battery that included memory, language, executive functioning, and attention and processing speed tasks in English. A series of ANOVAs was used to evaluate performance on the cognitive tasks (e.g., Boston Naming Test, Trail Making Test) between the language groups. Participants demonstrated adequate effort on one performance validity test.
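A series of one-way ANOVAs with partial eta-squared effect sizes (as reported in the Results) can be run as in this rough Python sketch (pingouin), with hypothetical column names.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("language_data.csv")  # hypothetical file and columns
tasks = ["bnt", "cowat", "wrat4", "wais_vocab", "tmt_b"]

# One ANOVA per cognitive task across the three language groups
for task in tasks:
    aov = pg.anova(data=df, dv=task, between="language_group",
                   effsize="np2")
    print(task, aov[["F", "p-unc", "np2"]].to_string(index=False))
```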
Results:
Language groups were well matched demographically. We found that the EFL monolingual group outperformed the ESL bilingual group on the Wide Range Achievement Test, Fourth Edition task and the Controlled Oral Word Association Test (COWAT) phonemic task, ps<.05, ηp²=.04-.05. Additionally, results revealed that both EFL groups outperformed the ESL bilingual group on the Wechsler Adult Intelligence Scale, Third Edition Vocabulary task and the Boston Naming Test, ps<.05, ηp²=.06-.15. No significant differences were found on any of the cognitive tasks between the EFL monolingual group and the EFL bilingual group.
Conclusions:
As expected, the ESL bilingual group performed worse on language tasks than both EFL groups, particularly the EFL monolingual group. However, contrary to expectations, the EFL monolingual group also demonstrated better phonemic verbal fluency on the COWAT than the ESL bilingual group. The current data suggest that bilingualism influences cognitive abilities (e.g., language, executive functioning) more for ESL bilingual speakers than for EFL monolingual speakers. A possible explanation is the type of interaction ESL bilingual speakers may prefer (i.e., mixed-language conversations) compared with EFL speaking groups. Future studies with a larger bilingual sample should investigate whether the Adaptive Control Hypothesis, which proposes that different types of conversations place different demands on language control, accounts for these influences on cognitive abilities.
In normative aging, there is a decline in associative memory that appears to relate to self-reported everyday use of general memory strategies (Guerrero et al., 2021). Self-reported general strategy use is also strongly associated with self-reported memory abilities (Frankenmolen et al., 2017), which, in turn, are weakly associated with objective memory performance (Crumley et al., 2014). Associative memory abilities and strategy use appear to differ by gender, with women outperforming men and using more memory strategies (Hertzog et al., 2019). In this study, we examine how actual performance and self-reported use of specific strategies on an associative memory task relate to each other and to general, everyday strategy use, and whether these differ by gender.
Participants and Methods:
An international sample of older adults (N = 566, 53% female, aged 60-80) was administered a demographic questionnaire and online tasks, including (1) the Multifactorial Memory Questionnaire (MMQ), which measures self-reported memory ability, satisfaction, and everyday strategy use (Troyer & Rich, 2018), and (2) the Face-Name Task, which measures associative memory (Troyer et al., 2011). Participants were also asked which specific strategies they used to complete the Face-Name Task.
Results:
On the Face-Name Task, participants who reported using more strategies performed better (F(3, 562) = 6.51, p < 0.001, η² = 0.03), with those who reported using three or four strategies performing best (p < .05). There was a significant difference in performance based on the type of strategy used (F(2, 563) = 11.36, p < 0.001, η² = 0.04), with individuals who relied on a “past experiences/knowledge” strategy performing best (p < .01). Women (M = 0.79, SD = 0.19) outperformed men (M = 0.71, SD = 0.20), t(545) = -4.64, p < 0.001, d = -0.39. No gender differences were found in the number (χ²(3, N = 564) = 2.06, p = 0.561) or type (χ²(2, N = 564) = 5.49, p = 0.064) of strategies used on the Face-Name Task. Only participants who reported using no strategies on the Face-Name Task had lower scores on the MMQ everyday strategy use subscale (p < .05). A multiple regression model was used to investigate the relative contributions of the number of strategies used on the Face-Name Task, MMQ everyday strategy subscale score, gender, age, education, and psychological distress to Face-Name Task performance. The only significant predictors in the model were gender (B = 0.08, t(555) = 4.55, p < 0.001) and use of two or more strategies (B = 0.07, t(555) = 2.82, p = 0.005).
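To show how the final model can be specified, a minimal sketch of the multiple regression in Python with statsmodels follows; the simulated data and column names are illustrative assumptions, not the study's dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset mirroring the predictors named above
rng = np.random.default_rng(0)
n = 566
df = pd.DataFrame({
    "fn_accuracy": rng.uniform(0.3, 1.0, n),       # Face-Name Task score
    "gender": rng.choice(["woman", "man"], n),
    "age": rng.integers(60, 81, n),
    "education": rng.integers(8, 21, n),
    "distress": rng.normal(0.0, 1.0, n),           # psychological distress
    "mmq_strategy": rng.normal(50.0, 10.0, n),     # MMQ everyday strategy use
    "two_plus_strategies": rng.integers(0, 2, n),  # used >= 2 task strategies
})

# Multiple regression predicting Face-Name performance from all predictors
model = smf.ols(
    "fn_accuracy ~ C(gender) + age + education + distress"
    " + mmq_strategy + two_plus_strategies",
    data=df,
).fit()
print(model.summary())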
Conclusions:
Reports of greater self-initiated strategy use, and use of a semantic strategy in particular, related to better performance on an associative memory test in older adults. Self-initiated, task-specific strategy use also related to everyday strategy use. The findings extend past work on gender differences to show that women outperform men on an associative memory task but that this is unlikely to be due to self-reported differences in strategy use. The results suggest that self-reported strategy use predicts actual associative memory performance and should be considered in clinical practice.
The purpose of the present study was to examine the clinical significance of fluctuations in cognitive impairment status in longitudinal studies of normal aging and dementia. Several prior studies have shown that fluctuation in cognition across longitudinal assessments is associated with greater risk of conversion to dementia. The present study defines “reverters” as participants who revert between cognitive normality and abnormality according to the Clinical Dementia Rating (CDR™). A defining feature of the CDR at the Knight Alzheimer’s Disease Research Center (Knight ADRC) at Washington University in St. Louis is that it is assigned by clinicians blinded to cognitive data and any prior assessments, so that conclusions are drawn free of circularity and examiner bias. We hypothesized that reverters, compared to cognitively normal participants who remain unimpaired, would have worse cognition and more abnormal biomarkers, and would eventually progress to a stable diagnosis of cognitive impairment.
Participants and Methods:
From ongoing studies of aging and dementia at the Knight ADRC, we selected cognitively normal participants with at least three follow-up visits. Participants fell into three categories: stable cognitively normal (“stable CN”), converters to stable dementia (“converters”), and reverters. Cognitive scores at each visit were z-scored for comparison between groups. A subset of participants had fluid biomarker data available, including cerebrospinal fluid (CSF) amyloid and phosphorylated-tau species and plasma neurofilament light chain (NfL). Mixed effects models evaluated group relationships between biomarker status, APOE ε4 status, and CDR progression.
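A minimal sketch of this longitudinal modeling, in Python with statsmodels, is given below; the simulated data, group labels, and column names are assumptions for illustration and do not reproduce the Knight ADRC pipeline.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format dataset: repeated visits per participant
rng = np.random.default_rng(0)
n_subj, n_visits = 90, 4
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_subj), n_visits),
    "years": np.tile(np.arange(n_visits), n_subj) * 1.5,
    "group": np.repeat(
        rng.choice(["stable_CN", "converter", "reverter"], n_subj), n_visits
    ),
})
df["raw_score"] = rng.normal(25.0, 3.0, len(df)) - 0.3 * df["years"]

# Z-score cognitive scores so groups can be compared on a common scale
df["z_score"] = (df["raw_score"] - df["raw_score"].mean()) / df["raw_score"].std()

# Mixed-effects model: group-by-time interaction with a random
# intercept per participant to account for repeated measures
model = smf.mixedlm("z_score ~ group * years", df, groups=df["pid"]).fit()
print(model.summary())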
Results:
930 participants were included in the study, with an average of 5 years of follow-up (Table 1). 661 participants remained cognitively normal throughout their participation, while 142 progressed to stable dementia and 127 had at least one instance of reversion. Compared to stable CN participants, reverters had more abnormal biomarkers at baseline, were more likely to carry an APOE ε4 allele, and had worse cognitive performance at baseline (Table 2, Figure 1). Compared to converters, reverters had less abnormal biomarkers at baseline, were less likely to carry an APOE ε4 allele, and had overall better cognitive performance at baseline. In longitudinal analyses, the cognitive trajectories of reverters exhibited a larger magnitude of decline than those of stable CNs, but the decline was not as steep as that of converters.
Conclusions:
Our results confirm prior studies showing that reversion in cognitive status, compared to stable cognitive normality, is associated with worse genetic, biomarker, and cognitive outcomes. Longitudinal analyses demonstrated that reverters show significantly more decline than stable participants and a higher likelihood of eventual conversion to a stable dementia diagnosis. Reverters’ cognitive trajectories appear to occupy a transitional phase in disease progression between cognitive stability and more rapid, consistent progression to stable dementia. Identifying participants in the preclinical phase of AD who are most likely to convert to symptomatic AD is critical for secondary prevention clinical trials. Our results suggest that examining intraindividual variability in cognitive impairment using unbiased, longitudinal CDR scores may be a good indicator of preclinical AD and may predict eventual conversion to symptomatic AD.