The Rey-Osterrieth Complex Figure (ROCF) is a neuropsychological task used to measure visual-motor integration, visual memory, and executive functioning (EF) in autistic youth. The ROCF is a valued clinical tool because it provides insight into the way an individual approaches and organizes complex visual stimuli. The constructs measured by the ROCF, such as planning, organization, and working memory, are highly relevant for research in this population, but the standardized procedures for scoring the ROCF can be challenging to implement in large-scale clinical trials due to complex and lengthy scoring rubrics. We present preliminary data on an adaptation of an existing scoring system that provides quantifiable scores, can be implemented reliably, and reduces scoring time.
Participants and Methods:
Data were drawn from two large-scale clinical trials focusing on EF in autistic youth. All participants completed the ROCF following standard administration guidelines. The research team reviewed commonly used scoring systems and determined that the Boston Qualitative Scoring System (BQSS) was the best fit due to its strengths in measuring EF, the process-related variables it generates, and the available normative data. Initially, the full BQSS scoring system was used, which yielded comprehensive scores but was not feasible given the time required (approximately 1-1.5 hours per figure for research assistants to complete scoring). The BQSS short form was then used, which solved the timing problem but introduced greater subjectivity into the scores, impairing the team's ability to achieve reliability. Independent reliability could not be calculated for this version because of the large number of discrepancies among scorers, who included 2 neuropsychologists and 4 research assistants. A novel checklist was then developed that combined aspects of both scoring systems to promote objectivity and reliability. In combination with this checklist, the team held weekly check-in meetings where challenging figures could be brought for discussion. Independent reliability was calculated among all research assistant team members (n=4) for the short form and novel checklist. Reliability was calculated based on (1) whether the drawing qualified for being brought to the whole team and (2) individual scores on the checklist.
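A minimal sketch of the kind of inter-rater agreement computation described above, assuming item-level 0/1 checklist scores; the rater labels, number of items, and simulated data are illustrative, not the study's actual checklist or results.

```python
import itertools
import numpy as np

# Hypothetical checklist scores: for each research assistant, a 10 figures x 10 items
# array of binary item scores. Real checklists may have different item counts.
rng = np.random.default_rng(0)
scores = {f"RA{i + 1}": rng.integers(0, 2, size=(10, 10)) for i in range(4)}

def pairwise_agreement(a, b):
    """Proportion of checklist items on which two raters gave identical scores."""
    return np.mean(a == b)

# Average item-level agreement for every pair of research assistants,
# flagged against an assumed 80% criterion.
for r1, r2 in itertools.combinations(scores, 2):
    agreement = pairwise_agreement(scores[r1], scores[r2])
    flag = "meets 80% criterion" if agreement >= 0.80 else "below criterion"
    print(f"{r1} vs {r2}: {agreement:.0%} ({flag})")
```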
Results:
Independent reliability was calculated for 10 figures scored using the novel checklist by a team of 4 trained research assistants. All scorers achieved at least 80% reliability, with averages ranging from 80-86%. Study team members reported that scoring took less time, averaging 30-45 minutes per figure.
Conclusions:
Inter-rater reliability was strong for the checklist the study team created, indicating its potential as a useful adaptation of the BQSS scoring system. By reducing time demands, the checklist makes the tool feasible for use in large-scale clinical research studies, with initially positive reliability. The checklist was easy to use, required little training, and could be completed quickly. Future research should continue to examine the reliability of the checklist and the time it takes to complete. Additionally, the ROCF should be studied more broadly in research and examined as a potential outcome measure for large-scale research studies.
“Brain fog” is one of the most common consequences of developing COVID-19. The available research focuses mainly on the decline in overall cognitive performance. Far fewer papers address the evaluation of particular cognitive domains, and those that do focus primarily on attention and memory disorders. The available data on the effects of COVID-19 infection on visuospatial functions are sparse, so the aim of this study was to investigate the level of visuospatial functioning in adults with a history of COVID-19 infection. We also intended to explore whether vaccination has a protective effect on cognitive functioning after COVID-19.
Participants and Methods:
The group included sixty volunteers (age: M = 40.12, SD = 16.78; education: M = 12.95, SD = 2.25; sex: M = 20, F = 40) - thirty-seven with a history of COVID-19 and twenty-three who were never infected with SARS-CoV-2. Of those with a history of COVID-19, twenty-four were vaccinated at the time of the disease and thirteen were not. Subjects from the individual groups did not differ demographically. Participants were examined with a set of neuropsychological tests assessing: a) general cognitive functioning - Montreal Cognitive Assessment (MoCA), b) attention - d2 Test of Attention, c) memory - Rey-Osterrieth Complex Figure delayed recall, and d) visuospatial functions - Rey-Osterrieth Complex Figure copy, Block Design subtest of the WAIS-R, and three experimental tasks: incomplete pictures, rotating puzzles, and counting cubes in a 3D tower.
Results:
Subjects with a history of COVID-19 achieved significantly lower scores on the MoCA (p = 0.033) compared to those who did not suffer from COVID-19. They also needed more time in the mental rotation task (p = 0.04). Statistically significant differences were also found in the d2 Test of Attention GP score (p = 0.001). Moreover, in the group of adults with a history of COVID-19, statistically significant differences were found between vaccinated and unvaccinated subjects. Those who were vaccinated during their illness performed significantly better than those who were unvaccinated in the following cognitive domains: attention (d2 Test of Attention) and visuospatial functions (Rey-Osterrieth Complex Figure copy, Block Design from the WAIS-R, as well as the experimental trials: incomplete pictures, rotating puzzles, and counting cubes).
Conclusions:
Among adults who have been infected with COVID-19, there is a decrease in general cognitive performance as well as in individual cognitive abilities, including visuospatial functions. Vaccination significantly reduces the risk of cognitive impairment.
To effectively diagnose and treat cognitive post-COVID-19 symptoms, it is important to understand objective cognitive difficulties across the range of acute COVID-19 severity. The aim of this meta-analysis is to describe objective neuropsychological test performance in individuals with non-severe (mild/moderate) COVID-19 cases in the post-acute stage of infection (>28 days after initial infection).
Participants and Methods:
This meta-analysis was pre-registered with Prospero (CRD42021293124) and utilized the PRISMA reporting guidelines, with at least two independent reviewers conducting all aspects of the screening and data extraction process. Inclusion criteria were established before the article search and were as follows: (1) studies using adult participants with a probable or formal and documented diagnosis of COVID-19 in the post-acute stage of infection; (2) studies comparing cognitive functioning using objective neuropsychological tests in one or more COVID-19 groups and a comparison group, or one-group designs using tests with normative data; (3) asymptomatic, mild, or moderate cases of COVID-19. Twenty-seven articles (n=18,202) with three types of study designs and three articles with additional longitudinal data met our full criteria.
Results:
Individuals with non-severe initial COVID-19 infection demonstrated worse cognitive performance compared to healthy comparison participants (d=-0.412 [95% CI: -0.718, -0.176], p=0.001). We used metaregression to examine the relationship between both average age of the sample and time since initial COVID-19 infection (as covariates in two independent models) and effect size in studies with comparison groups. There was no significant effect for age (b=-0.027 [95% CI: -0.091, 0.038], p=0.42). There was a significant effect for time since diagnosis, with a small improvement in cognitive performance for every day following initial acute COVID-19 infection (b=0.011 [95% CI: 0.0039, 0.0174], p=0.002). However, those with mild (non-hospitalized) initial COVID-19 infections performed better than those who were hospitalized for initial COVID-19 infections (d=0.253 [95% CI: 0.134, 0.372], p<0.001). For studies that used normative data comparisons, there was a small, non-significant effect compared to normative data (d=-0.165 [95% CI: -0.333, 0.003], p=0.055).
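The pooled effect and metaregression reported here can be approximated with a standard DerSimonian-Laird random-effects model followed by precision-weighted regression; the sketch below uses invented study-level values and is not the authors' analysis code.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: Cohen's d, its variance, and days since infection.
d = np.array([-0.6, -0.3, -0.5, -0.2, -0.45])
var_d = np.array([0.04, 0.02, 0.05, 0.03, 0.04])
days_since_infection = np.array([40, 90, 60, 180, 120])

# DerSimonian-Laird random-effects pooled estimate.
w_fixed = 1 / var_d
q = np.sum(w_fixed * (d - np.average(d, weights=w_fixed)) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(d) - 1)) / c)          # between-study variance
w_random = 1 / (var_d + tau2)
pooled_d = np.average(d, weights=w_random)
se_pooled = np.sqrt(1 / np.sum(w_random))
print(f"pooled d = {pooled_d:.3f} "
      f"(95% CI {pooled_d - 1.96 * se_pooled:.3f}, {pooled_d + 1.96 * se_pooled:.3f})")

# Metaregression: effect size regressed on time since infection, weighted by precision.
X = sm.add_constant(days_since_infection)
metareg = sm.WLS(d, X, weights=w_random).fit()
print(metareg.params)   # slope approximates the change in d per additional day
```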
Conclusions:
Individuals who have recovered from non-severe cases of COVID-19 may be at risk for cognitive decline or impairment and may benefit from cognitive health interventions.
The purpose of this study was to explore overall recovery time and Post-Concussion Symptom Scale (PCSS) scores of pediatric concussion patients who were referred to a specialty concussion clinic after enduring a protracted recovery (>28 days). This included patients who self-deferred care or received management from another provider until recovery became complicated. It was hypothesized that protracted recovery patients who initiated care within a specialty concussion clinic would have recovery outcomes similar to those of typical acute-injury concussion patients (i.e., recovery within 3 weeks).
Participants and Methods:
Retrospective data were gathered from electronic medical records of concussion patients aged 6-19 years. Demographic data were examined based on age, gender, race, concussion history, and comorbid psychiatric diagnosis. Concussion injury data included days from injury to initial clinic visit, total visits, PCSS scores, days from injury to recovery, and days from initiating care with a specialty clinic to recovery. All participants were provided standard return-to-learn and return-to-play protocols, aerobic exercise recommendations, behavioral health recommendations, personalized vestibular/ocular motor rehabilitation exercises, and psychoeducation on the expected recovery trajectory of concussion.
Results:
52 patients were included in this exploratory analysis (Mean age 14.6, SD ±2.7; 57.7% female; 55.7% White, 21.2% Black or African American, 21.2% Hispanic). Two percent of our sample did not disclose their race or ethnicity. Prior concussion history was present in 36.5% of patients and 23.1% had a comorbid psychiatric diagnosis. The patient referral distribution included emergency departments (36%), local pediatricians (26%), neurologists (10%), other concussion clinics (4%), and self-referrals (24%).
Given the nature of our specialty concussion clinic sample, the data were not normally distributed and were more likely to be skewed by outliers. As such, the median value and interquartile range were used to describe the results. Regarding recovery variables, the median days to clinic from initial injury was 50.0 (IQR=33.5-75.5) days, the median PCSS score at initial visit was 26.0 (IQR=10.0-53.0), and the median overall recovery time was 81.0 (IQR=57.0-143.3) days.
After initiating care within our specialty concussion clinic, the median recovery time was 21.0 (IQR=14.0-58.0) additional days, the median total visits were 2.0 (IQR=2.0-3.0), and the median PCSS score at follow-up visit was 7.0 (IQR=1-17.3).
Conclusions:
Research has shown that early referral to specialty concussion clinics may reduce recovery time and the risk of protracted recovery. Our results extend these findings to suggest that patients with protracted recovery returned to baseline similarly to those with an acute concussion injury after initiating specialty clinic care. This may be due to the vast number of resources within specialty concussion clinics including tailored return-to-learn and return-to-play protocols, rehabilitation recommendations consistent with research, and home exercises that supplement recovery. Future studies should compare outcomes of protracted recovery patients receiving care from a specialty concussion clinic against those who sought other forms of treatment. Further, evaluating the influence of comorbid factors (e.g., psychiatric and/or concussion history) on pediatric concussion recovery trajectories may be useful for future research.
Novel blood-based biomarkers for Alzheimer's disease (AD) could transform AD diagnosis in the community; however, their interpretation in individuals with medical comorbidities is not well understood. Specifically, kidney function has been shown to influence plasma levels of various brain proteins. This study sought to evaluate the effect of one common marker of kidney function (estimated glomerular filtration rate (eGFR)) on the association between various blood-based biomarkers of AD/neurodegeneration (glial fibrillary acidic protein (GFAP), neurofilament light (NfL), amyloid-β42 (Aβ42), total tau) and established CSF biomarkers of AD (Aβ42/40 ratio, tau, phosphorylated-tau (p-tau)), neuroimaging markers of AD (AD-signature region cortical thickness), and episodic memory performance.
Participants and Methods:
Vanderbilt Memory and Aging Project participants (n=329, 73±7 years, 40% mild cognitive impairment, 41% female) completed fasting venous blood draw, fasting lumbar puncture, 3T brain MRI, and neuropsychological assessment at study entry and at 18-month, 3-year, and 5-year follow-up visits. Plasma GFAP, Aβ42, total tau, and NfL were quantified on the Quanterix single molecule array platform. CSF biomarkers for Aβ were quantified using Meso Scale Discovery immunoassays, and tau and p-tau were quantified using INNOTEST immunoassays. AD-signature region atrophy was calculated by summing bilateral cortical thickness measurements captured on T1-weighted brain MRI from regions shown to distinguish individuals with AD from normal cognition. Episodic memory functioning was measured using a previously developed composite score. Linear mixed-effects regression models related predictors to each outcome adjusting for age, sex, education, race/ethnicity, apolipoprotein E-ε4 status, and cognitive status. Models were repeated with a blood-based biomarker x eGFR x time interaction term, with follow-up models stratified by chronic kidney disease (CKD) staging (stage 1/no CKD: eGFR>90 mL/min/1.73m2; stage 2: eGFR=60-89 mL/min/1.73m2; stage 3: eGFR=44-59 mL/min/1.73m2 (no participants with higher than stage 3)).
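A hedged sketch of how a mixed-effects model with a biomarker x eGFR x time interaction might be specified in statsmodels; the file name, variable names, and random-effects structure are assumptions rather than the study's actual pipeline.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data frame: one row per participant-visit, with the
# memory composite, plasma GFAP, eGFR, years from baseline, and covariates.
df = pd.read_csv("vmap_long.csv")   # illustrative file name only

model = smf.mixedlm(
    "memory ~ gfap * egfr * time + age + C(sex) + education + C(race_ethnicity) "
    "+ C(apoe4) + C(cognitive_status)",
    data=df,
    groups=df["participant_id"],   # random intercept per participant
    re_formula="~time",            # random slope for time
).fit()
print(model.summary())

# Stratified follow-up, e.g., restricting to stage 3 CKD (eGFR 44-59 in this cohort)
# and refitting the same model within the subset.
stage3 = df[df["egfr"].between(44, 59)]
```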
Results:
Cross-sectionally, GFAP was associated with all outcomes (p-values<0.005) and NfL was associated with memory and AD-signature region cortical thickness (p-values<0.05). In predictor x eGFR interaction models, GFAP and NfL interacted with eGFR on AD-signature cortical thickness (p-values<0.004), and Aβ42 interacted with eGFR on tau, p-tau, and memory (p-values<0.03). Tau did not interact with eGFR. Stratified models across predictors showed that associations were stronger in individuals with better renal functioning, and no significant associations were found in individuals with stage 3 CKD. Longitudinally, higher GFAP and NfL were associated with memory decline (p-values<0.001). In predictor x eGFR x time interaction models, GFAP and NfL interacted with eGFR on p-tau (p-values<0.04). Other models were nonsignificant. Stratified models showed that associations were significant only in individuals with no CKD/stage 1 CKD and were not significant in participants with stage 2 or 3 CKD.
Conclusions:
In this community-based sample of older adults free of dementia, plasma biomarkers of AD/neurodegeneration were associated with AD-related clinical outcomes both cross-sectionally and longitudinally; however, these associations were modified by renal functioning with no associations in individuals with stage 3 CKD. These results highlight the value of blood-based biomarkers in individuals with healthy renal functioning and suggest caution in interpreting these biomarkers in individuals with mild to moderate CKD.
The prevalence of memory complaints in older adults is between 25% and 50%, with poor memory associated with decreased quality of life and declines in daily functioning. Memory training programs train older adults in strategies and skills to improve memory performance. We conducted a feasibility study of a virtually delivered adaptation of Ecologically-Oriented Neurorehabilitation of Memory (EON-Mem) for improving memory in healthy older adults. The primary purposes of this study were to: (1) determine the feasibility of conducting EON-Mem virtually with older adults, (2) determine whether a randomized controlled trial using EON-Mem in older adults is of value, and (3) determine whether electronic delivery of memory training programs with ecological validity is beneficial for older adults.
Participants and Methods:
Twenty-five older adults aged 55 years and older were recruited for participation in a memory training program. All testing and intervention sessions were completed virtually through the Zoom platform. Measures of emotional functioning (Hospital Anxiety and Depression Scale), health-related quality of life (Short Form-36), and cognitive functioning (Ecological Memory Simulations and Repeatable Battery for the Assessment of Neuropsychological Status; RBANS) were administered before and after the intervention. Participants attended one virtual treatment session per week, with sessions lasting 60-90 minutes, for a total of six weeks. Between treatment sessions, participants were asked to complete daily homework assignments that allowed them to apply strategies to real-world situations. A priori, feasibility was set at an 80% completion rate, and variables that influenced completion are reported.
Results:
To address questions regarding feasibility (e.g., adherence, attrition), we calculated descriptive statistics (i.e., counts, means, standard deviations, and ranges) on sample information. Of the 25 participants enrolled in the study, 21 completed all steps of the study (84% completion rate), showing that the delivery format is feasible. The average age of our sample was 61.7 (SD = 5.9) years and the average years of education was 17.06 (SD = 2.36). Excluding those who dropped out, average time to completion was 72.76 days (SD = 18.65, range = 47-124). Across all six weeks, homework completion averaged 66.4% (33/49). There were varying effects of EON-Mem on the EMS memory outcomes, with the greatest proportion showing reliable improvement in the ability to recall names (10 participants [42%]). Regarding the RBANS, the greatest proportion of participants showed reliable improvement on the Story Memory task (four participants [17%]), with only two (9%) showing reliable change on the total Memory Index score.
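Reliable improvement is commonly quantified with a Jacobson-Truax reliable change index; the sketch below illustrates that computation with invented scores and psychometric constants, and may differ from the criterion the authors actually used.

```python
import numpy as np

def reliable_change_index(pre, post, sd_baseline, test_retest_r):
    """Jacobson-Truax RCI; |RCI| > 1.96 is conventionally treated as reliable change.

    sd_baseline and test_retest_r would come from normative or control data;
    the values used below are purely illustrative."""
    se_measurement = sd_baseline * np.sqrt(1 - test_retest_r)
    se_difference = np.sqrt(2) * se_measurement
    return (np.asarray(post) - np.asarray(pre)) / se_difference

# Invented pre/post index scores for four participants.
pre_scores = [92, 88, 105, 99]
post_scores = [101, 90, 112, 100]
rci = reliable_change_index(pre_scores, post_scores, sd_baseline=15, test_retest_r=0.80)
print(np.round(rci, 2), rci > 1.96)   # which participants show reliable improvement
```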
Conclusions:
Overall, a virtual administration of EON-Mem in older adults was feasible.
Regarding memory changes, the majority of the sample did not demonstrate reliable improvement in memory, which might be due to a variety of reasons, including the fact that our sample had a high level of education and a low level of memory impairment. Notably, however, this was a feasibility study, not an intervention study. Therefore, future directions should focus on randomized controlled trials to determine efficacy.
Attention plays a key role in auditory processing of information by shifting cognitive resources to focus on incoming stimuli (Riccio, Cohen, Garrison, & Smith, 2005). Mood symptoms are known to affect the efficiency with which this processing occurs, especially when consolidation of memory is required (Massey, Meares, Batchelor, & Bryant, 2015). Without proper focus on relevant task information, improper encoding occurs, resulting in negatively affected performances. This study examines how depression, anxiety, and stress moderate the relationship between auditory attention and verbal list-learning.
Participants and Methods:
Archival data from 373 adults (Mage = 56.46, SD = 17.75; Medu = 15.45, SD = 2.2; 54% female; 74% White) were collected at an outpatient clinic. Race was not available in a small percentage of cases included in the analyses. Auditory attention was assessed via the Brief Test of Attention (BTA). Learning was assessed via the California Verbal Learning Test (CVLT-II) total T-score (Trials 1-5). Mood was assessed via the Depression Anxiety and Stress Scales (DASS-42). A moderation analysis was conducted using the DASS-42 as the moderator of the relationship between the BTA and the CVLT-II.
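A minimal sketch of a two-block moderation analysis of this kind using statsmodels; the file and variable names (bta, dass_anxiety, cvlt_total_t) are assumptions, and mean-centering before forming the product term is one common convention, not necessarily the authors' exact procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with one row per patient; column names are assumed.
df = pd.read_csv("clinic_archival.csv")

# Block 1: auditory attention (BTA) predicting CVLT-II learning.
block1 = smf.ols("cvlt_total_t ~ bta", data=df).fit()

# Block 2: add DASS-42 subscales; moderation is tested with a BTA x anxiety product term.
df["bta_c"] = df["bta"] - df["bta"].mean()              # mean-center before forming products
df["anx_c"] = df["dass_anxiety"] - df["dass_anxiety"].mean()
block2 = smf.ols(
    "cvlt_total_t ~ bta_c * anx_c + dass_depression + dass_stress", data=df
).fit()

print("Delta R^2:", block2.rsquared - block1.rsquared)
print(block2.params[["bta_c", "anx_c", "bta_c:anx_c"]])
```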
Results:
Block 1 of the hierarchical regression was significant, with the BTA contributing significantly to verbal learning on the CVLT-II (F(1, 378) = 30.141, p < .001, ΔR² = .074). The standardized beta weight for the BTA was significant (β = .272, p < .001). When the DASS variables were introduced in Block 2, the model remained significant (F(3, 375) = 4.227, p = .006, ΔR² = .030). The DASS Anxiety subscale had a significant beta weight in the model (β = -.210, p = .004), whereas Depression and Stress were not significant (β = .039, p = .563 and β = .021, p = .765, respectively).
Conclusions:
The current study examined whether mood symptoms affect the relationship between auditory attention and verbal learning. The present results confirm previous research showing that auditory attention has a significant impact on verbal learning (Massey, Meares, Batchelor, & Bryant, 2015; Weiser, 2004). Building on prior research, these results indicate that, when accounting for auditory attention, clinicians should be aware of possible confounding effects of anxiety, which may artificially suppress auditory attention. In some circumstances, differential diagnosis may require considering that, absent anxiety, auditory attention may be within the normal range. Continued assessment and evaluation of the impact of anxiety is crucial for neuropsychologists when examining performance on verbal learning measures.
Conduct secondary analyses on longitudinal data to determine if caregiver-reported sleep quantity and sleep problems across early childhood (ages 2 - 5 years) predict their child’s attention and executive functioning at age 8 years.
Participants and Methods:
This study utilized data from the Health Outcomes and Measures of the Environment (HOME) Study. The HOME Study recruited pregnant women from 2003-2006 within a nine-county area surrounding Cincinnati, OH. Caregivers reported on their child’s sleep patterns when children were roughly 2, 2.5, 3, 4, and 5 years of age. Our analysis included 410 participants from the HOME Study for whom caregivers reported sleep measures on at least 1 occasion or whose child completed an assessment of attention and executive functioning at age 8. At each time point, caregiver report on an adapted version of the Child Sleep Habits Questionnaire (CSHQ) was used to determine: (1) total sleep time (TST; “your child’s usual amount of sleep each day, combining nighttime sleep and naps”) and (2) overall sleep problems (23 items related to difficulties with sleep onset, sleep maintenance, and nocturnal events). Our outcome variables, collected at age 8, included caregiver-report forms and performance measures of attention and executive functioning. Caregiver-report measures included normed scores on the Behavior Rating Inventory of Executive Function, from which we focused on the Behavior Regulation Index (BRIEF BRI) and Metacognition Index (BRIEF MI). Performance-based measures included T-scores for Omission and Commission errors on the Conners’ Continuous Performance Test, Second Edition (CPT-2) and standard scores on the WISC-IV Working Memory Index (WMI). We used longitudinal growth curve models of early childhood sleep patterns to predict attention and executive functioning at age 8. Predictive analyses were run with and without key covariates: annual household income, child sex, and race. To account for general intellectual functioning, we also included children’s WISC-IV Verbal Comprehension and Perceptual Reasoning Indexes as covariates.
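One way such a growth-curve analysis can be sketched is in two stages: fit per-child sleep trajectories with a mixed model, then use the child-specific intercepts and slopes to predict age-8 outcomes. File and variable names below are assumptions, not actual HOME Study field names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long file: one row per child per sleep assessment (ages ~2-5),
# plus a separate file of age-8 outcomes.
sleep = pd.read_csv("home_sleep_long.csv")        # child_id, age_years, total_sleep_time
outcomes = pd.read_csv("home_age8_outcomes.csv")  # child_id, brief_bri, cpt_commissions, ...

# Stage 1: growth curve for total sleep time with random intercepts and slopes.
growth = smf.mixedlm(
    "total_sleep_time ~ age_years", data=sleep,
    groups=sleep["child_id"], re_formula="~age_years",
).fit()

# Per-child intercept and slope = fixed effect + child-specific random effect.
re = pd.DataFrame(growth.random_effects).T
re.columns = ["intercept_dev", "slope_dev"]
re["intercept"] = growth.fe_params["Intercept"] + re["intercept_dev"]
re["slope"] = growth.fe_params["age_years"] + re["slope_dev"]

# Stage 2: do sleep intercept and slope predict age-8 outcomes (covariates could be added)?
merged = outcomes.merge(re[["intercept", "slope"]], left_on="child_id", right_index=True)
stage2 = smf.ols("cpt_commissions ~ intercept + slope", data=merged).fit()
print(stage2.summary())
```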
Results:
Children in our sample were evenly divided by sex; 60% were White. Sleep problems did not show linear or quadratic change over time, so an intercept-only model was used. Sleep problems did not predict any of our outcome measures at age 8 in unadjusted or covariate-adjusted models. As expected, sleep duration was shorter as children matured, so predictive models examined both intercept and slope. Slope was negatively associated with CPT-2 Commissions (unadjusted p=.047; adjusted p=.013); children who showed the least decline in sleep over time had fewer impulsive errors at age 8. The sleep duration intercept was negatively associated with BRIEF BRI (unadjusted p=.002; adjusted p=.043); children who slept less across early childhood had worse parent-reported behavioral regulation at age 8. Neither sleep duration slope nor intercept significantly predicted any other outcomes at age 8 in unadjusted or covariate-adjusted analyses.
Conclusions:
Total sleep time across early childhood predicts behavior regulation difficulties in later childhood. Inadequate sleep during early childhood may be a marker for or contribute to poor development of a child’s self-regulatory skills.
To identify the relative contributions and importance of modifiable fitness and demographic variables to cognitive performance in a cohort of healthy older adults.
Participants and Methods:
Metrics of modifiable fitness (gait speed, respiratory function, grip strength, and body mass index (BMI)) and cognition (executive function, episodic memory, and processing speed) were assessed in 619 older adults from the Health and Retirement Study 2016 wave (mean age = 74.9, sd = 6.9; mean education = 13.4 years, sd = 2.6; 42% female). General linear models were employed to assess the contribution of modifiable fitness variables in predicting three domains of cognition: executive function, episodic memory, and processing speed. Demographics (age, sex, education, time between appointments, and a chronic disease score) were entered as covariates for each model. Relative importance metrics were computed for all variables in each model using Lindeman, Merenda, and Gold (lmg) analysis, a technique which decomposes a given model’s explained variance to describe the average contribution of each predictor variable, independent of its position in the linear model.
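The lmg decomposition averages each predictor's increment to R² over all orders of entry; a small illustrative implementation is sketched below (not the authors' code, and using assumed variable and file names).

```python
import itertools
import pandas as pd
import statsmodels.api as sm

def lmg_importance(df, outcome, predictors):
    """Average each predictor's increment to R^2 over all orderings (Lindeman-Merenda-Gold)."""
    def r2(subset):
        if not subset:
            return 0.0
        X = sm.add_constant(df[list(subset)])
        return sm.OLS(df[outcome], X).fit().rsquared

    contrib = {p: 0.0 for p in predictors}
    orderings = list(itertools.permutations(predictors))
    for order in orderings:
        entered = []
        for p in order:
            before = r2(entered)
            entered.append(p)
            contrib[p] += r2(entered) - before
    return {p: value / len(orderings) for p, value in contrib.items()}

# Hypothetical wide data frame; column names are assumptions, not HRS field names.
df = pd.read_csv("hrs_2016_subset.csv")
print(lmg_importance(df, "executive_function",
                     ["age", "education", "gait_speed", "respiratory_function", "bmi"]))
```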
Results:
When all variables were entered into the general linear model, demographic and modifiable fitness variables explained 35%, 24%, and 26% of the variance in executive function, episodic memory, and processing speed, respectively (all three models were significant, p <0.001). Age, education, respiratory function, and walking speed had higher relative importance values (all lmgs > 1.8) compared to BMI, grip strength, and other covariates in all three models (all lmgs < 1.3). Gender was also relatively important in the executive function (lmg = 4.2) and episodic memory models (lmg = 5.0). Of the modifiable fitness variables, walking speed and respiratory function had the greatest lmg values (5.8 and 6.4 respectively) in the executive function model, similar to demographic variables age (lmg = 6.0) and education (lmg = 8.9). When demographic variables were entered as covariates, modifiable fitness variables collectively accounted for an additional 9.7%, 6.3%, and 6.0% variance in the executive function, episodic memory, and processing speed models respectively (all three models were significant, p <0.001).
Conclusions:
Our findings indicate that walking speed and respiratory function are of similar importance compared to “traditional” demographic variables such as age and education in predicting cognitive performance in a cohort of healthy older adults. Moreover, modifiable fitness variables accounted for unique variance in executive function, episodic memory, and processing speed after accounting for age and education. Modifiable fitness variables explained the most unique variance in executive function. These results extend the current literature by demonstrating that modifiable fitness variables, even when assessed with brief and relatively coarse measures of physical performance, may be useful in predicting cognitive function. Moreover, the results highlight the need to assess metrics of cognitive reserve, such as education, as well as modifiable fitness variables and their respective roles in accounting for cognitive performance. The data further suggest that relative contributions of physical performance metrics may vary by cognitive domain in healthy older adults.
In research, and particularly in clinical trials, it is important to identify persons at high risk for developing Alzheimer’s Disease (AD), such as those with Mild Cognitive Impairment (MCI). However, not all persons with this diagnosis have a high risk of AD, as MCI can be broken down further into amnestic MCI (aMCI), which confers a high risk specifically for AD, and non-amnestic MCI (naMCI), which is predominantly associated with risk for other dementias. People with aMCI largely differ from healthy controls and those with naMCI on memory tasks, as memory impairment is the hallmark criterion for an amnestic diagnosis. Given the growing use of the NIH Toolbox Cognition battery in research trials, this project investigated which Toolbox Cognition measures best differentiated aMCI from naMCI and from persons with normal cognition.
Participants and Methods:
A retrospective data analysis was conducted investigating performance on NIH Toolbox Cognition tasks among 199 participants enrolled in the Michigan Alzheimer’s Disease Research Center. All participants were over age 50 (51-89 years, M=70.64) and had a diagnosis of aMCI (N=74), naMCI (N=24), or Normal Cognition (N=101). Potential demographic differences were investigated using chi-square tests and ANOVAs. A repeated-measures general linear model was used to examine potential group differences in Toolbox Cognition performance, covarying for age, which differed statistically between aMCI and Normal participants. Linear regression was used to determine which cognitive abilities, as measured by the Uniform Data Set-3 (UDS3), might contribute to the Toolbox differences noted between the naMCI and aMCI groups.
Results:
As expected, aMCI had lower Toolbox memory scores compared to naMCI (p=0.007) and Normals (p<0.001). Interestingly, naMCI had lower Oral Reading scores than both aMCI (p=0.008) and Normals (p<0.001). There were no other Toolbox performance differences between the MCI groups. 19.4% of the variance in Oral Reading scores was explained by performance on the following UDS3 measures: Benson delayed recall (inverse relationship) and backward digit span and phonemic fluency (positive relationship).
Conclusions:
In this study, Toolbox Picture Sequence Memory and Oral Reading scores differentiated the aMCI and naMCI groups. While the difference in memory was expected, it was surprising that the naMCI group performed worse than the aMCI and normal groups on the Toolbox Oral Reading task, a task presumed to reflect crystallized abilities resistant to cognitive decline. Results suggest that Oral Reading is primarily positively associated with working memory and executive tasks from the UDS3, but negatively associated with visual memory. It is possible that the Oral Reading subtest is sensitive to domains of deficit aside from memory that can best distinguish aMCI from naMCI. A better understanding of the underlying features of the Oral Reading task will assist in better characterizing the deficit patterns seen in naMCI, making selection of aMCI participants for clinical trials more effective.
The Latinx population is rapidly aging and growing in the US and is at increased risk for stroke and dementia. We examined whether bilingualism confers cognitive resilience following stroke in a community-based sample of Mexican American (MA) older adults.
Participants and Methods:
Participants included predominantly urban, non-immigrant MAs aged 65+ from the Brain Attack Surveillance in Corpus Christi-Cognitive study. Participants were recruited using a two-stage area probability sample with door-to-door recruitment until the onset of the COVID-19 pandemic; sampling and recruitment were then completed via telephone. Cognition was assessed with the Montreal Cognitive Assessment (MoCA; 30-item in person, 22-item via telephone) in English or Spanish. Bilingualism was assessed via questionnaire, and degree of bilingualism was calculated (range 0%-100% bilingual). Stroke history was collected via self-report. We harmonized the 22-item MoCA to the 30-item MoCA using published equipercentile equating. We conducted a series of regressions with the harmonized MoCA score as the dependent variable; stroke history and degree of bilingualism as independent variables; and age, sex/gender, education, assessment language, assessment mode (in person vs. telephone), and self-reported vascular risk factors (hypertension, diabetes, heart disease) as covariates. We included a stroke history by bilingualism interaction to examine whether bilingualism modifies the association between stroke history and MoCA performance.
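Equipercentile equating maps scores on one form to the metric of another by matching percentile ranks; the simplified sketch below, with randomly generated reference distributions and without the smoothing used in published crosswalks, illustrates the idea only and is not the published 22-to-30-item MoCA crosswalk.

```python
import numpy as np

def equipercentile_equate(scores_short, dist_short, dist_full):
    """Map short-form scores to the full-form metric by matching percentile ranks.

    dist_short / dist_full are reference samples of the 22- and 30-item scores;
    here they are drawn at random purely for illustration."""
    pct = np.array([np.mean(dist_short <= s) for s in np.atleast_1d(scores_short)])
    return np.quantile(dist_full, pct)

rng = np.random.default_rng(1)
dist_22 = rng.normal(17, 3, 500).clip(0, 22).round()   # fake 22-item reference sample
dist_30 = rng.normal(23, 4, 500).clip(0, 30).round()   # fake 30-item reference sample
print(equipercentile_equate([15, 18, 20], dist_22, dist_30))
```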
Results:
Participants included 841 MA older adults (59% women; age M(SE) = 73.5(0.2); 44% less than high school education). Most (77%) of the sample completed the MoCA in English. 93 of 841 participants reported a history of stroke. In an unadjusted model, degree of bilingualism (b = 3.41, p < .0001) and stroke history (b = -1.98, p = .003) were associated with MoCA performance. In a fully adjusted model, stroke history (b = -1.79, p = .0007) but not bilingualism (b = 0.78, p = .21) was associated with MoCA performance. When an interaction term was added to the fully adjusted model, the interaction between stroke history and bilingualism was not significant (b= -0.47, p = .78).
Conclusions:
Degree of bilingualism does not modify the association between stroke history and MoCA performance in Mexican American older adults. These results should be replicated in samples with validated strokes, using more comprehensive bilingualism and cognitive assessments, and in other bilingual populations.
Research has shown significant deficits in cognitive domains and a decline in activities of daily living (ADL) in patients with Alzheimer’s disease (AD). Patients with Mild Cognitive Impairment (MCI) also experience difficulties with ADL; moreover, research documents that many MCI patients' symptoms gradually worsen such that their diagnosis eventually converts to AD. Different cognitive domains (i.e., memory, executive function, attention, etc.) impact ADL performance. Commonly used instruments for assessing ADL are subjective measures completed by primary caregivers. Subjective measures are not able to assess actual ADL performance. Thus, performance-based tests of ADL, such as the Direct Assessment of Functional Status (DAFS), are more informative. The purpose of this study is to analyze classification accuracy rates for AD and MCI patients using five ADL subscales and overall performance on a performance-based ADL test.
Participants and Methods:
As part of a larger study, 61 patients diagnosed with AD and 54 age- and education-matched patients diagnosed with MCI were administered the DAFS. The DAFS assesses orientation to time, communication skills, knowledge of transportation rules, financial abilities, and ability to shop for groceries, as well as basic daily skills such as grooming and eating. For the purpose of this study, the grooming and eating subscales were not used in the analysis.
Results:
Discriminant function analysis was performed to assess the classification accuracy rates for AD and MCI patients using their ability to perform various types of ADL tasks on the DAFS. The analysis revealed that total DAFS scores and all five subscales significantly classified AD and MCI patients' performance (all p values < .01). While performance across the DAFS subscale scores accurately classified MCI at rates ranging from 67%-90%, the rates of accurate classification were much lower for AD patients (29.5%-62.3%). Of the subscales, the DAFS Shopping task best discriminated and classified performance, classifying AD at 62% and MCI at 67%.
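A brief sketch of how classification accuracy from a discriminant function analysis of the DAFS subscales might be obtained with scikit-learn; the file name and subscale column names are assumptions, and the resubstitution accuracy shown is only one way such rates can be computed.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

# Hypothetical data frame: one row per patient, DAFS subscale scores plus diagnosis.
df = pd.read_csv("dafs_scores.csv")   # column names below are assumed
subscales = ["orientation", "communication", "transportation", "financial", "shopping"]

# Fit the discriminant function and classify each patient ("AD" vs "MCI").
lda = LinearDiscriminantAnalysis().fit(df[subscales], df["diagnosis"])
pred = lda.predict(df[subscales])

# Per-group classification accuracy from the confusion matrix.
cm = confusion_matrix(df["diagnosis"], pred, labels=["AD", "MCI"])
print("AD correctly classified:  {:.1%}".format(cm[0, 0] / cm[0].sum()))
print("MCI correctly classified: {:.1%}".format(cm[1, 1] / cm[1].sum()))
```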
Conclusions:
These results indicate that a performance-based ADL test can aid in the classification of AD and MCI. The fact that the DAFS Shopping subscale, which requires learning and memory abilities, had the best accuracy rates is consistent with the profound memory deficits found in AD patients. This study further highlights the importance of using observation-based measures to assess ADL in MCI and AD patients.
Treatment for pediatric brain tumors (PBTs) is associated with neurocognitive risk, including declines in IQ, executive function, and visual motor processing. Low grade tumors require less intensive treatment (i.e., focal radiotherapy (RT) or surgical resection alone), and have been associated with more favorable cognitive outcomes. However, these patients remain at risk of cognitive problems, which may present differently depending on tumor location. Executive functioning (EF), in particular, has been broadly associated with both frontal-subcortical networks (supratentorial) and the cerebellum (infratentorial). The current study examined intellectual functioning, executive functioning (set-shifting and inhibition), and visual motor skills in patients who were treated for low-grade tumors located in either the supratentorial or infratentorial region.
Participants and Methods:
Participants were survivors (age 8-18) previously treated with focal proton RT or surgery alone for infratentorial (n=21) or supratentorial (n=34) low grade glioma (83.6%) or low grade glioneuronal tumors (16.4%). Survivors >2.5 years post-treatment completed cognitive testing (WISC-IV/WAIS-IV; D-KEFS Verbal Fluency (VF), Color-Word Interference (CW), Trail Making Test (TM); Beery Visual-Motor Integration). We compared outcomes between infratentorial and supratentorial groups using analysis of covariance (ANCOVA). Demographic and clinical variables were compared using Welch’s t-tests. ANCOVAs were adjusted for age at evaluation, age at treatment, and history of posterior fossa syndrome due to significant or marginally significant differences between groups.
Results:
Tumor groups did not significantly differ with respect to sex (49.0% male), length of follow-up (M = 4.4 years), or treatment type (74.5% surgery alone, 25.5% proton RT). Marginally significant group differences were found for age at evaluation (infratentorial M = 12.4y, supratentorial M = 14.1y, p = .054) and age at treatment (infratentorial M = 7.9y, supratentorial M = 9.7y, p = .074). Posterior fossa syndrome only occurred with infratentorial tumors (n=5, p = .003). Adjusting for covariates, the supratentorial group exhibited significantly superior performance on a measure of inhibition and set-shifting (CW Switching Time; t(32) = -2.05, p = .048, η² = .11). There was a marginal group difference in the same direction on CW Inhibition Time (t(32) = -1.77, p = .086, η² = .08). On the other hand, the supratentorial group showed significantly lower working memory than the infratentorial group (t(50) = 2.45, p = .018, η² = .11), and trends toward lower verbal reasoning (t(50) = 1.96, p = .056, η² = .07) and full-scale IQ (t(50) = 1.73, p = .090, η² = .055). No other group differences were identified across intellectual, EF, and visual-motor measures.
Conclusions:
Infratentorial tumor location was associated with weaker switching and inhibition performance, while supratentorial tumor location was associated with lower performance on intellectual measures, particularly working memory. These findings suggest that even with relatively conservative treatment (i.e., focal proton RT or surgery alone), there remains neurocognitive risk in children treated for low-grade brain tumors. Moreover, tumor location may predict distinct patterns of long-term neurocognitive outcomes, depending on which brain networks are involved.
People living with younger onset neurocognitive disorders (YOND) experience significant delays in receiving an accurate diagnosis. Although neuropsychological assessment can help assist in a timely diagnosis of YOND, several barriers limit the accessibility of these services. Utilising teleneuropsychology may assist with the service access gap. This study aimed to investigate whether similar results were found on neuropsychological tests administered using videoconference and in person in a sample of people living with YOND.
Participants and Methods:
Participants with a diagnosis of YOND were recruited from the Royal Melbourne Hospital (RMH) Neuropsychiatry inpatient ward and outpatient clinic, and through community advertising. A randomised counterbalanced cross-over design was used in which participants completed 14 tests across two administration sessions: one in person and one using videoconference. There was a two-week interval between the administration sessions. The videoconference sessions were set up across two laptops using the Healthdirect Video Call platform and Q-Global. Repeated measures t-tests, intraclass correlation coefficients (ICC), and Bland-Altman plots were calculated to compare results across the test administration sessions.
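A small sketch of the comparability analyses described above (paired t-test, ICC, and Bland-Altman limits of agreement) for one test; the simulated paired scores are illustrative only, and the pingouin ICC call is one common way to obtain ICC estimates.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

# Hypothetical paired scores for one test across the two administration modes.
df = pd.DataFrame({
    "participant": range(30),
    "in_person": np.random.default_rng(2).normal(50, 10, 30),
})
df["videoconference"] = df["in_person"] + np.random.default_rng(3).normal(0, 3, 30)

# Paired-samples t-test across modes.
t, p = stats.ttest_rel(df["in_person"], df["videoconference"])

# ICC requires long format: one row per participant per administration mode.
long = df.melt(id_vars="participant", var_name="mode", value_name="score")
icc = pg.intraclass_corr(data=long, targets="participant", raters="mode", ratings="score")

# Bland-Altman limits of agreement (mean difference +/- 1.96 SD).
diff = df["in_person"] - df["videoconference"]
loa = (diff.mean() - 1.96 * diff.std(), diff.mean() + 1.96 * diff.std())
print(t, p, loa)
print(icc[["Type", "ICC"]])
```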
Results:
Thirty participants (Mage = 60.23, SD = 7.05) completed both sessions. Huntington's disease was the most common YOND diagnosis (n = 8), followed by Alzheimer's disease (n = 6), mild cognitive impairment (n = 6), and frontotemporal dementia (n = 4). Preliminary results from the current study indicate no statistically significant differences, and small effect sizes, between the in-person and videoconference sessions. ICC estimates ranged from .69 to .97 across neuropsychological tests.
Conclusions:
This study provides preliminary evidence that performances are comparable between in-person and videoconference-mediated assessments for most neuropsychological tasks evaluated in people living with YOND. Should further research confirm these preliminary results, the findings will support the provision of teleneuropsychology to address the current service gaps experienced by people with YOND.
Children with epilepsy are at greater risk of lower academic achievement than their typically developing peers (Reilly and Neville, 2015). Demographic, social, and neuropsychological factors, such as executive functioning (EF), mediate this relation. While research emphasizes the importance of EF skills for academic achievement among typically developing children (e.g., Best et al., 2011; Spiegel et al., 2021), less is known among children with epilepsy (Ng et al., 2020). The purpose of this study is to examine the influence of EF skills on academic achievement in a nationwide sample of children with epilepsy.
Participants and Methods:
Participants included 427 children with epilepsy (52% male; MAge = 10.71) enrolled in the Pediatric Epilepsy Research Consortium (PERC) Epilepsy Surgery Database who had been referred for surgery and underwent neuropsychological testing. Academic achievement was assessed by performance measures (word reading, reading comprehension, spelling, and calculation and word-based mathematics) and parent-rating measures (Adaptive Behavior Assessment System (ABAS) Functional Academics and Child Behavior Checklist (CBCL) School Performance). EF was assessed by verbal fluency, sequencing, and planning measures from the Delis Kaplan Executive Function System (DKEFS), NEPSY, and Tower of London test. Rating-based measures of EF included the 'Attention Problems’ subscale from the CBCL and the 'Cognitive Regulation’ index from the Behavior Rating Inventory of Executive Function (BRIEF-2). Partial correlations assessed associations between EF predictors and academic achievement, controlling for full-scale IQ (FSIQ; a composite across intelligence tests). Significant predictors of each academic skill or rating were entered into a two-step regression that included FSIQ, demographics, and seizure variables (age of onset, current medications) in the first step, with EF predictors in the second step.
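A compact sketch of a partial correlation controlling for FSIQ via residualization, followed by a two-step hierarchical regression of the kind described; the file and variable names are assumptions, not PERC database fields.

```python
import pandas as pd
import statsmodels.formula.api as smf

def partial_corr(df, x, y, covar):
    """Correlate the residuals of x and y after removing the covariate (here FSIQ)."""
    rx = smf.ols(f"{x} ~ {covar}", data=df).fit().resid
    ry = smf.ols(f"{y} ~ {covar}", data=df).fit().resid
    return rx.corr(ry)

# Hypothetical data frame with one row per child; column names are assumed.
df = pd.read_csv("perc_neuropsych.csv")
print(partial_corr(df, "dkefs_letter_fluency", "word_reading", "fsiq"))

# Two-step regression: FSIQ, demographics, and seizure variables first, EF predictors second.
step1 = smf.ols("word_reading ~ fsiq + age_onset + n_medications + age + C(sex)", data=df).fit()
step2 = smf.ols("word_reading ~ fsiq + age_onset + n_medications + age + C(sex) "
                "+ dkefs_letter_fluency + cbcl_attention", data=df).fit()
print("R^2 change:", step2.rsquared - step1.rsquared)
```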
Results:
Although zero-order correlations were significant between EF predictors and academic achievement (.29 < r’s < .63 for performance measures; -.63 < r’s < -.50 for rating measures), partial correlations controlling for FSIQ showed fewer significant relations. For performance-based EF, only letter fluency (DKEFS Letter Fluency) and cognitive flexibility (DKEFS Trails Condition 4) demonstrated significant associations with performance-based academic achievement (r’s > .29). Regression models for performance-based academic achievement indicated that letter fluency (β = .22, p = .017) and CBCL attention problems (β = -.21, p = .002) were significant predictors of sight-word reading. Only letter fluency (β = .23, p = .006) was significant for math calculation. CBCL Attention Problems were a significant predictor of spelling performance (β = -.21, p = .009) and reading comprehension (β = -.18, p = .039). CBCL Attention Problems (β = -.38, p < .001 for ABAS; β = -.34, p = .002 for CBCL School) and BRIEF-2 Cognitive Regulation difficulties (β = -.46, p < .001 for ABAS; β = -.46, p = .013 for CBCL School) were significant predictors of parent-rated ABAS Functional Academics and CBCL School Performance.
Conclusions:
Among a national pediatric epilepsy dataset, performance-based and ratings-based measures of EF predicted performance-based academic achievement, whereas only ratings-based EF predicted parent-rated academic achievement, due at least in part to shared method variance. These findings suggest that interventions that increase cognitive regulation, reduce symptoms of attention dysfunction, and promote self-generative, flexible thinking may promote academic achievement among children with epilepsy.
This longitudinal study investigates whether reading strategies are influenced by the orthographic depth of languages, specifically Spanish or Cantonese, acquired through enrollment in bilingual immersion programs. Spanish shares an alphabet with English and is considered a phonologically transparent language (Sun et al., 2022). Research has shown that second language learners of Cantonese, an opaque language, performed better on orthographic awareness tasks that involve whole-word visual information processing (Wang and Geva, 2003). We hypothesize that students enrolled in a bilingual immersion program will outperform peers in general education (GENED) on selected reading tasks. More specifically, those in Spanish-immersion programs will perform better on English tasks involving phonological processing; whereas those in Cantonese-immersion programs will perform better on single-word/character processing tasks.
Participants and Methods:
Participants (n=102) were native English speakers recruited from the San Francisco Unified School District. Our sample included 42 females and 60 males. Thirty-nine identified as White, 33 Mixed Race, 25 Asian, 4 Latinx, and 1 Black. Thirty-nine children were in GENED, 33 in Spanish immersion programs (Sp), and 30 in Cantonese immersion programs (Cn). Each child was assessed on a core language/behavioral battery at Kindergarten (T1) and 2nd-3rd grade (T2). Time 2 participants were between 7 and 9 years old.
Those who scored at least one standard deviation below the mean (SS=85) on a nonverbal intelligence screener (KBIT-2 Matrices) were excluded to mitigate confounds of intellectual disability. Group performance was compared on English tasks involving phonological processing (CTOPP-2 Blending Words and Elision) and single-word/character information processing tasks (WJ-IV Letter Word Identification and KABC-II Rebus).
Results:
Simple main effects analysis showed that time had a statistically significant effect on test performance (p < 0.001). At T2, analysis revealed a significant impact of school enrollment on Blending Words [F(2, 51.0) = 4.19, p = 0.018]. As predicted, post-hoc analysis revealed that students enrolled in the Spanish-immersion program significantly outperformed those in general education on this task. Across the other three tasks, those enrolled in Spanish and Cantonese immersion programs performed as well as or better than those in GENED, but the differences were not statistically significant.
Conclusions:
This study uniquely isolated the effects of bilingual education without the confounding factor of differential access to resources in a more heterogeneous socioeconomic sample. Mixed results partially supported our hypotheses: Spanish-immersion participants performed significantly better than those in GENED on one English phonological processing task (Blending Words). Although Cantonese-immersion students had a higher mean performance than those in GENED on single-word/-character processing tasks, the difference was not statistically significant. This suggests that bilingual education may offer advantages for either reading strategy. According to the literature, characteristics of a language may influence literacy acquisition; thus, subsequent research should continue to examine the effect of learning multiple languages with varying levels of orthographic depth on the development of English reading strategies.
The brain relies on mitochondria to carry out a host of vital cellular functions (e.g., energy metabolism, respiration, apoptosis) to maintain neuronal integrity. Clinically relevant, dysfunctional mitochondria have been implicated as central to the pathogenesis of Alzheimer’s disease (AD). Phosphorus magnetic resonance spectroscopy (31P MRS) is a non-invasive and powerful method for examining in vivo mitochondrial function via high-energy phosphates and phospholipid metabolism ratios. At least one prior 31P MRS study found temporal-frontal differences in high-energy phosphates in persons with mild AD. The goal of the current study was to examine regional (i.e., frontal, temporal) 31P MRS ratios of mitochondrial function in a sample of older adults at risk for AD. Given the high energy consumption in the temporal lobes (i.e., hippocampus) and preferential age-related changes in frontal structure-function, we predicted that 31P MRS ratios of mitochondrial function would be greater in temporal as compared to frontal regions.
Participants and Methods:
The current study leveraged baseline neuroimaging data from an ongoing multisite study at the University of Florida and the University of Arizona. Participants were older adults with memory complaints and a first-degree family history of AD [N = 70; mean [M] age [years] = 70.9, standard deviation [SD] = 5.1; M education [years] = 16.2, SD = 2.2; M MoCA = 26.5, SD = 2.4; 61.4% female; 91.5% non-Latinx White]. To achieve optimal sensitivity, we used a single-voxel method to examine 31P MRS ratios (bilateral prefrontal and left temporal). Mitochondrial function was estimated by computing 5 ratios for each voxel: summed adenosine triphosphate to total pooled phosphorus (ATP/TP; momentary energy), ATP to inorganic phosphate (ATP/Pi; energy consumption), phosphocreatine to ATP (PCr/ATP; energy reserve), phosphocreatine to inorganic phosphate (PCr/Pi; oxidative phosphorylation), and phosphomonoesters to phosphodiesters (PME/PDE; cellular membrane turnover rate). All ratios were corrected for voxel size and cerebrospinal fluid fraction. Separate repeated-measures analyses of covariance controlling for scanner site differences (RM ANCOVAs) were performed.
Results:
31P MRS ratios were unrelated to demographic characteristics, which were therefore not included as additional covariates in analyses. Results of separate RM ANCOVAs revealed that all 31P MRS ratios of mitochondrial function were greater in the left temporal voxel relative to the bilateral prefrontal voxel: ATP/TP (p < .001), ATP/Pi (p = .001), PCr/ATP (p = .004), PCr/Pi (p = .004), and PME/PDE (p = .017). Effect sizes (partial eta squared) ranged from .06 to .20.
Conclusions:
Consistent with and extending one prior study, all 31P MRS ratios of mitochondrial function were greater in temporal as compared to frontal regions in older adults at risk for AD. This may in part be related to the intrinsically high metabolic rate of the temporal region and preferential age-related changes in frontal structure-function. Alternatively, findings may reflect the influence of unaccounted-for factors (e.g., hemodynamics, auditory stimulation). Longitudinal study designs may inform whether patterns of mitochondrial function across different brain regions are present early in development, occur across the lifespan, or some combination of the two. In turn, this may inform future studies examining differences in mitochondrial function (as measured using 31P MRS) in AD.
Higher education is strongly associated with better cognitive function in older adults. Previous research has also shown that positive psychosocial factors, such as self-efficacy and emotional and instrumental support, are beneficial for late-life cognition. There is prior evidence of a buffering effect of self-efficacy on the relationship between educational disadvantage and poor cognition in older adults; however, it is not known whether other psychosocial factors modify the schooling-cognition relationship. We hypothesized that higher levels of emotional and instrumental support would diminish the association between lower education and lower cognitive test scores among older adults.
Participants and Methods:
Participants were 553 older adults without dementia (42.1% non-Latinx Black, 32.2% non-Latinx White, 25.7% Latinx; 63.2% women; average age 74.4, SD 4.3) from the Washington Heights-Inwood Columbia Aging Project. Neuropsychological tests assessed four cognitive domains (language, memory, psychomotor processing speed, and visuospatial function). Self-reported emotional and instrumental support were assessed with measures from the NIH Toolbox. Linear regression estimated interactions between education and the two support measures on cognition in models stratified by cognitive domain and racial and ethnic group. Covariates included age, sex/gender, and chronic health conditions (e.g., heart disease, stroke, cancer).
Results:
Education was associated with cognition across racial and ethnic groups. For every year of schooling, the processing speed z-score composite was 0.33 higher among Latinx participants, 0.10 higher among non-Latinx Black participants, and 0.03 higher among non-Latinx White participants. The education-cognition relationship was generally similar across cognitive domains, with larger effects in non-Latinx Black and Latinx participants than in White participants. Low education was associated with slower processing speed among Black participants with low emotional support (B = 0.224, 95% CI [0.014, 0.096]), but there was no association between low education and processing speed among Black older adults with high levels of emotional support (beta for interaction = -.142, 95% CI [-0.061, -0.001]). A similar pattern of results was observed for instrumental support (beta for interaction = -.207, 95% CI [-0.064, 0.010]). There were no interactions between support and education on other cognitive domains or among Latinx and White participants.
Conclusions:
We found that higher levels of emotional and instrumental support attenuate the detrimental effect of educational disadvantage on processing speed in older Black adults. This may occur via the benefits of social capital, which provides access to health resources and knowledge, increased social interaction, and an emotional outlet that supports better coping with stress. Longitudinal analyses are needed to examine temporal patterns of associations. In addition, improving equitable access to high-quality schools will improve later-life cognitive outcomes for future generations of older adults. However, for the growing number of Black older adults who will not experience the benefits of structural improvements in the education system, emotional and instrumental support may represent a modifiable psychosocial factor to reduce their disproportionate burden of cognitive morbidity.
Cognitive function may underlie the use of more adaptive as compared to maladaptive coping strategies to manage pandemic-related stress in older adults. As the composition of coping strategies varies with context, we investigated the factor structure of 14 established coping strategies. We then aimed to determine whether specific coping strategies were associated with cognitive function.
Participants and Methods:
141 adults aged 50-90 years completed the study via Zoom. The National Alzheimer’s Coordinating Center TCog battery assessed cognitive function. The Brief COPE, adapted to address COVID-19, measured 14 specific coping strategies.
Results:
Based on our factor analyses, Avoidant (e.g., denial and substance use) and Approach (e.g., planning, instrumental and emotional support systems) coping composite scores were formed. Regression analyses, adjusted for age and education, indicated that 12.9% of the variance in the use of Avoidant coping strategies was explained by worse performance on measures of episodic memory, executive attention/processing speed, working memory, and verbal fluency. Closer examination indicated that verbal fluency was not a statistically significant contributor to the model. In turn, 9.1% of the variance in Approach coping strategies was related to cognitive function, with working memory and verbal fluency being statistically significant contributors to the model.
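A rough sketch of how coping composites might be derived from the 14 Brief COPE strategy scores and then regressed on cognitive measures; the two-factor solution, column names, and file are assumptions, and unrotated sklearn FactorAnalysis is a stand-in for whatever factor-analytic procedure the authors used.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
import statsmodels.formula.api as smf

# Hypothetical data: 14 Brief COPE strategy scores plus cognitive composites per participant.
df = pd.read_csv("covid_coping.csv")
cope_cols = [c for c in df.columns if c.startswith("cope_")]   # 14 strategy scores assumed

# Two-factor solution approximating Avoidant vs. Approach coping composites.
fa = FactorAnalysis(n_components=2, random_state=0).fit(df[cope_cols])
factor_scores = fa.transform(df[cope_cols])
df["avoidant"], df["approach"] = factor_scores[:, 0], factor_scores[:, 1]

# Which cognitive measures explain use of Avoidant coping, adjusting for age and education?
model = smf.ols("avoidant ~ episodic_memory + exec_attention_speed + working_memory "
                "+ verbal_fluency + age + education", data=df).fit()
print(model.rsquared, model.pvalues)
```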
Conclusions:
Older adults with better performance on higher-order cognitive testing may utilize more effective coping strategies. These results have implications for attenuating pandemic-related stress and warrant developing brief interventions to help facilitate problem-solving and reduce emotional distress in those with lower cognitive resources.
Mind-wandering—the spontaneous shift in attention away from the external task to internal thoughts (including daydreaming, fantasizing, rumination, and worrying)—is negatively associated with performance across a variety of tasks, including the sustained attention to response task, the Stroop task, tasks of working memory, choice reaction time, and visual search, as well as more ecologically relevant tasks like reading comprehension and mathematics. There has also been promising evidence suggesting a potential link between mind-wandering, functional connectivity of the canonical networks of the brain, and Alzheimer’s disease (AD). However, no study has directly examined the relationship between neural correlates of mind-wandering and AD pathogenesis. In prior work, our lab identified a whole-brain, functional connectivity-based marker of mind-wandering—the mwCPM—which predicted response time variability in older adults. In this study, we sought to evaluate the ability of this mind-wandering CPM, derived from response time variability, to predict the CSF p-tau/Aβ42 ratio in 289 older adults from the Alzheimer’s Disease Neuroimaging Initiative. We hypothesized that the combined mind-wandering model, including functional connections that predict high mind-wandering and functional edges that predict stability in attention, would predict AD pathology.
Participants and Methods:
Resting-state functional MRI data from 289 older adults (147 healthy older adults, 111 individuals with mild cognitive impairment, and 31 older adults with AD) from the Alzheimer’s Disease Neuroimaging Initiative were analyzed for the current study. Participants were only included in the analyses if they had resting-state fMRI data, CSF measures of amyloid beta and tau pathology, and performance on cognitive composites of global cognition, episodic memory, and executive functioning. Using the well-established methodology of connectome-based predictive modeling, the mind-wandering model was applied to the resting-state fMRI data to predict CSF-based biomarker levels of p-tau and Aβ42. We also examined whether this mind-wandering model predicted individual differences in composite measures of global cognition, episodic memory, and executive functioning.
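A schematic of how a previously trained CPM network mask can be applied to new resting-state data and related to a CSF ratio while controlling for motion; the file names, parcellation size, and residualization approach are assumptions, not the ADNI pipeline or the authors' code.

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: a binary edge mask defining the high mind-wandering network
# (from the previously trained mwCPM) and each participant's connectivity matrix.
n_nodes, n_subs = 268, 289                          # 268-node parcellation is an assumption
high_mask = np.load("mwcpm_high_edges.npy")         # (n_nodes, n_nodes) array of 0/1
conn = np.load("rest_connectivity.npy")             # (n_subs, n_nodes, n_nodes)
ptau_ab42 = np.load("csf_ptau_ab42_ratio.npy")      # (n_subs,)
mean_fd = np.load("mean_framewise_displacement.npy")

# Network strength = summed connectivity across the masked edges for each participant.
triu = np.triu(np.ones((n_nodes, n_nodes)), k=1).astype(bool)
strength = np.array([subj[triu & high_mask.astype(bool)].sum() for subj in conn])

# Association with the CSF ratio, controlling for head motion by residualizing
# both variables on mean framewise displacement.
def residualize(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r, p = stats.pearsonr(residualize(strength, mean_fd), residualize(ptau_ab42, mean_fd))
print(f"high-mwCPM network strength vs p-tau/Abeta42: r = {r:.3f}, p = {p:.4f}")
```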
Results:
The high mwCPM model successfully predicted measured CSF p-tau/Aβ42 ratios (high model: r = .137, p = .0196), controlling for mean framewise displacement. However, the combined network and the low mind-wandering network were not significant predictors (combined model: r = .0731, p = .216; low model: r = -.0027, p = .960). We next examined the association between connectivity strength of the high mwCPM and cognitive functioning in the domains of global cognition, episodic memory, and executive functioning. Connectivity strength in the high mwCPM—functional edges that were associated with high behavioral variability—was negatively associated with all three cognitive composites (global cognition: r = -.239, p < .0001; episodic memory: r = -.208, p < .0001; executive functioning: r = -.178, p < .0001).
Conclusions:
This study provides the first empirical support for a link between a neuromarker of mind-wandering and AD pathophysiology. Moreover, mind-wandering has downstream consequences for key domains of cognitive functioning in older adults. Interventions targeted at reducing mind-wandering, particularly before the onset of AD pathogenesis, may make a significant contribution to the prevention of AD-related cognitive decline.