Obesity is associated with adverse effects on brain health, including increased risk for neurodegenerative diseases. Changes in cerebral metabolism may underlie or precede structural and functional brain changes. While bariatric surgery is known to be effective in inducing weight loss and improving obesity-related medical comorbidities, few studies have examined whether it may be able to improve brain metabolism. In the present study, we examined change in cerebral metabolite concentrations in participants with obesity who underwent bariatric surgery.
Participants and Methods:
35 patients with obesity (BMI > 35 kg/m²) were recruited from a bariatric surgery candidate nutrition class. They completed single-voxel ¹H magnetic resonance spectroscopy at baseline (pre-surgery) and within one year post-surgery. Spectra were obtained from a large medial frontal brain region. Tissue-corrected absolute concentrations for metabolites including choline-containing compounds (Cho), myo-inositol (mI), N-acetylaspartate (NAA), creatine (Cr), and glutamate and glutamine (Glx) were determined using Osprey. Paired t-tests were used to examine within-subject change in metabolite concentrations, and correlations were used to relate these changes to other health-related outcomes, including weight loss and glycemic control.
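The within-subject comparisons described above (paired t-tests with Cohen's d effect sizes) reduce to simple arithmetic on pre/post difference scores. A minimal pure-Python sketch, using hypothetical concentration values rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Paired t statistic and Cohen's d for within-subject change.

    t = mean(diff) / (sd(diff) / sqrt(n)); d = mean(diff) / sd(diff),
    where diff = post - pre for each participant.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    m, sd = mean(diffs), stdev(diffs)
    t = m / (sd / math.sqrt(n))
    d = m / sd
    return t, d

# Hypothetical pre/post metabolite concentrations (illustrative only,
# not the study's data)
pre = [2.1, 2.3, 1.9, 2.4, 2.2, 2.0]
post = [1.8, 2.0, 1.7, 2.1, 2.0, 1.9]
t, d = paired_t_and_d(pre, post)
```

This uses the common convention of standardizing d by the standard deviation of the difference scores; other standardizers (e.g., pooled baseline SD) are also in use.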
Results:
Bariatric surgery was associated with a reduction in cerebral Cho (t(34) = -3.79, p < 0.001, d = -0.64) and mI (t(34) = -2.81, p < 0.01, d = -0.47) concentrations. There were no significant changes in NAA, Glx, or Cr concentrations. Reductions in Cho were associated with greater weight loss (r = 0.40, p < 0.05), and reductions in mI were associated with greater reductions in HbA1c (r = 0.44, p < 0.05).
Conclusions:
Participants who underwent bariatric surgery exhibited reductions in cerebral Cho and mI concentrations, which were associated with improvements in weight loss and glycemic control. Given that elevated levels of Cho and mI have been implicated in neuroinflammation, reduction in these metabolites after bariatric surgery may reflect amelioration of obesity-related neuroinflammatory processes. As such, our results provide evidence that bariatric surgery may improve brain health and metabolism in individuals with obesity.
This study evaluated the relation between five-factor model (FFM) personality traits and intra-individual variability (IIV) in executive functioning (EF) using both subjective self-report and objective measures of EF.
Participants and Methods:
165 university participants (M=19 years old, SD=1.3; 55.2% White, 35.2% African American, 72.7% female) completed the Barkley Deficits in Executive Functioning Scale-Long Form (BDEFS), IPIP-NEO Personality Inventory, Trail-Making Test (TMT) Parts A and B, and the Neuropsychological Assessment Battery (NAB) EF module. A participant’s IIV was calculated as the standard deviation around their own mean performance. Objective EF IIV was computed from T-scores for performance on Trails A, Trails B, and the NAB EF module. Subjective EF IIV was computed from T-scores for performance across BDEFS domains.
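The IIV metric described above is just the standard deviation of a participant's own T-scores around their own mean performance. A minimal Python sketch, with illustrative T-scores rather than study data:

```python
from statistics import stdev

def intra_individual_variability(t_scores):
    """IIV: the standard deviation of one participant's T-scores
    across measures, i.e., dispersion around their own mean."""
    return stdev(t_scores)  # stdev is computed around the sample mean

# Hypothetical T-scores for one participant on Trails A, Trails B,
# and the NAB EF module (illustrative values, not study data)
objective_ef = [48, 55, 41]
iiv = intra_individual_variability(objective_ef)
```

The same computation applies to the subjective side, substituting T-scores across BDEFS domains for the performance-based scores.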
Results:
Pearson r correlations were used to evaluate the relation between subjective and objective IIV and FFM traits of personality. Subjective EF IIV was positively correlated with FFM neuroticism [r=.48; p<.001] and negatively correlated with FFM conscientiousness [r=-.43; p<.001], extraversion [r=-.18; p=.023] and agreeableness [r=-.22; p=.004]. There were no significant associations between FFM traits and objective EF IIV performance. There was additionally no significant relation between subjective EF IIV performance and objective EF IIV.
Conclusions:
Personality traits were associated with individual variability on a self-reported measure of EF but not on performance-based EF measures. These results suggest that IIV for the BDEFS was influenced by personality traits, particularly neuroticism and conscientiousness, and may reflect method variance. It was notable that IIV was not correlated between subjective and objective EF measures.
While attention-deficit/hyperactivity disorder (ADHD) symptoms, including inattention, hyperactivity, and impulsivity, are normally distributed within the population, features of ADHD have been associated with poor functional outcomes across various domains of life, such as academic achievement and occupational status. However, some individuals with even strong ADHD features show normal or above-average success within these functional domains. Executive dysfunction and emotion regulation abilities are associated with educational attainment and occupational status and may therefore explain some of the heterogeneity in functional outcomes in individuals with mild, moderate, and high levels of ADHD symptoms. In this study, we investigated whether emotion regulation strategy use (i.e., emotion suppression or cognitive reappraisal) and executive function abilities moderate the relationship between ADHD symptoms and occupational status and education attainment in adults.
Participants and Methods:
Data were collected from 109 adults aged 18–85 (M = 38.08, SD = 15.54; 70.6% female) from the Nathan Kline Institute Rockland Sample. All participants completed measures of ADHD symptoms (Conners Adult ADHD Rating Scale), emotion regulation strategy use (Emotion Regulation Questionnaire), and executive functioning (composite scores of inhibition, shifting and fluency from the standardized Delis-Kaplan Executive Function System). In this study, executive function abilities and emotion regulation strategy use were tested as potential moderators of the relationship between ADHD symptoms and functional outcomes using hierarchical regression models.
Results:
Several two- and three-way interactions predicting occupational status and educational attainment were observed. Educational attainment was predicted by hyperactivity and reappraisal (β = -0.26, p = .006); inattention, shifting, and reappraisal (β = -0.52, p = .029); inattention, shifting, and suppression (β = -0.40, p = .049); inattention, fluency, and reappraisal (β = 0.24, p = .038); hyperactivity, fluency, and reappraisal (β = 0.27, p = .034); and impulsivity, fluency, and reappraisal (β = 0.44, p = .004). Occupational status was predicted by inattention and reappraisal (β = -0.27, p = .032); hyperactivity and reappraisal (β = -0.26, p = .004); and impulsivity, fluency, and reappraisal (β = 0.35, p = .031). Fluency was positively associated with educational attainment when controlling for inattention and impulsivity.
Conclusions:
Consistent with the hypothesis, the associations between ADHD symptoms and both occupational status and educational attainment were moderated by the interaction between emotion regulation strategy use and executive function abilities. The observed interactions suggest that both occupational status and educational attainment may depend heavily on one’s intrinsic abilities and traits. Contrary to previous literature, we found no evidence that ADHD symptoms or emotion regulation strategies were independently associated with either educational attainment or occupational status, but this should be validated in a sample with greater representation of adults with clinically significant ADHD.
Mild cognitive impairment (MCI) in Parkinson’s disease (PD) is a critical state to consider. In fact, PD patients with MCI are more likely to develop dementia than the general population. Thus, identifying the risk factors for developing MCI in patients with PD could help with disease prevention. We aim to use the Cox regression model to identify the variables involved in the development of MCI in healthy controls (HC) and in a PD cohort.
Participants and Methods:
The Parkinson’s Progression Markers Initiative (PPMI) database was used to analyze data from 166 HC and 365 patients with PD. They were analyzed longitudinally, at baseline and at 3-year follow-up. Both HC and PD were further divided into two groups based on the presence or absence of MCI. Conversion to MCI was defined as the first detection of MCI. For all participants, we extracted (1) neuropsychiatric symptoms (anxiety, impulsive-compulsive disorders, and sleep impairment), (2) 3T MRI-based data (cortical and subcortical brain volumes based on the Desikan atlas, using FreeSurfer 7.1.1), and (3) genetic markers (MAPT and APOE ε4). We used Python 3.9 to perform three Cox proportional hazards models (PD-HC, HC only, and PD only) and to model the risk of conversion to MCI attributable to neuropsychiatric symptoms and cortical brain parameters. We included as covariates age, sex, education, and disease duration (for the PD group). Hazard ratios (HRs) along with their 95% confidence intervals (CIs) are reported.
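Cox models report hazard ratios by exponentiating the fitted coefficients. A small sketch of how an HR and its 95% CI follow from a coefficient and its standard error (the coefficient below is a made-up illustration, not a PPMI result):

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox model coefficient.

    HR = exp(beta); CI bounds = exp(beta ± z * SE), with z = 1.96
    for a two-sided 95% interval.
    """
    hr = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return hr, (lo, hi)

# Hypothetical coefficient for an anxiety predictor (not a study value)
hr, (lo, hi) = hazard_ratio_ci(beta=0.35, se=0.12)
```

An HR above 1 (CI excluding 1) indicates the predictor is associated with greater risk of conversion; an HR below 1, as reported for MAPT carrier status in HC, indicates reduced risk.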
Results:
When including both HC and PD in the model, Cox regression analyses showed that age of onset, diagnosis, the State-Trait Anxiety Inventory (STAI), and sleep impairment were associated with a greater risk of conversion to MCI (p<.005). For HC, only the STAI and the genetic marker MAPT were significantly associated with risk of cognitive decline (p<.05). These results further indicated that a greater anxiety score on the STAI leads to a greater chance of developing MCI, whereas being a carrier of the MAPT gene reduces the risk of MCI. Regarding the analysis of PD, results revealed that the STAI and the cortical volumes of the dorsolateral frontal and temporal regions were associated with a greater risk of developing MCI (p<.05).
Conclusions:
These analyses show that the neuropsychiatric symptom of anxiety seems to play an important role in the development of MCI (significant in all three analyses). For patients with PD, cortical volumes of the dorsolateral frontal and temporal regions are significantly related to risk of MCI. This study highlights the importance of considering neuropsychiatric symptoms as well as cerebral volumes as key factors in the development of MCI in PD.
Primary progressive aphasia (PPA) is a dementia syndrome characterized by initial development of progressive language deficits in the absence of impairment in other cognitive domains. It has historically been difficult to assess the presence or nature of true memory deficits in this population due to interference from language disturbance on task performance. The Three Words Three Shapes test (3W3S) is a relatively easy memory task that evaluates both verbal and nonverbal memory within the same modality and assesses different aspects of memory, including incidental encoding, effortful encoding, delayed recall, and recognition. Persons with PPA show a material-specific dissociation in performance on 3W3S; specifically, deficits in incidental encoding and recall are limited to verbal, not nonverbal material, in PPA, with preserved recognition of both types of information. However, it is unknown whether this pattern persists over time as the disease progresses.
Participants and Methods:
Participants were 73 individuals enrolled in an observational PPA research study at the Mesulam Center for Cognitive Neurology and Alzheimer’s Disease (Mage = 66.75 years, SD = 6.77; Meducation = 16.11 years, SD = 2.38; 51% female). Participants were subtyped as semantic (n = 15), logopenic (n = 27), or agrammatic PPA (n = 31) based on Gorno-Tempini et al., 2011, using 3W3S and other neuropsychological measures as described previously. Participants were followed at 2-year intervals and tests were administered longitudinally. All participants in the current study had 3W3S scores from at least two research visits collected between September 2012 and September 2022.
Results:
There were no significant baseline group differences on 3W3S performance, except for better incidental encoding in the logopenic than the semantic group for shapes (p = .040) and words (p = .043). We then conducted mixed-measures ANOVAs to determine baseline within-person comparisons between words vs. shapes. Within individuals, performance on incidental encoding, effortful encoding, and recognition was worse for words than shapes (ps < .01). There was an interaction between material and group for delayed recall (p < .001) such that there was a significantly larger discrepancy between word and shape recall in the semantic (Mdiff = -9.14) compared to logopenic (Mdiff = -3.07) and agrammatic groups (Mdiff = -2.13). Repeated-measures ANOVAs determined changes in scores over time collapsed across PPA subtypes. Incidental encoding (ps < .01), effortful encoding (ps < .05), and delayed recall (ps < .01) declined for both words and shapes over time. Copy and recognition of words (ps < .05), but not shapes, declined over time.
Conclusions:
The current results are consistent with prior findings of relative preservation of memory for nonverbal compared to verbal material in PPA as measured by 3W3S, especially in the semantic subtype. Learning and recall of words and shapes declined over time in all groups, whereas there was selective decline in copy and recognition of words compared to shapes. These results provide evidence of differential patterns of decline in certain aspects of memory over time in PPA and highlight the relative preservation of memory in this language-focused dementia even over time.
Sleep is a restorative function that supports various aspects of well-being, including cognitive function. College students, especially females, report getting less sleep than recommended and report more irregular sleep patterns than their male counterparts. Inadequate and irregular sleep are associated with neuropsychological deficits including more impulsive responding in lab-based tasks. Although many lab-based experiments ask participants to report their sleep patterns, few studies have analyzed how potential changes in sleep affect their findings. Utilizing data from a previously collected study, this study aims to investigate relations between sleep (i.e., sleep duration and changes in sleep duration) and performance-based measures of inhibition among female college students.
Participants and Methods:
Participants (n = 39) were majority first-year students (Mage = 19.27) and Caucasian (51%). Participants were recruited to participate in a larger study exploring how food commercials affect inhibitory control. Participants were randomized to each study condition (watching a food or non-food commercial) over two visits to the lab (T1 and T2). During both visits, they completed questionnaires asking about 1) their sleep duration the night before and 2) their “typical” sleep duration, to capture changes in sleep duration. They also completed a computer-based stop signal task (SST) which required them to correctly identify healthy food images (stop signal accuracy [SSA] healthy) and unhealthy food images (SSA unhealthy) while inhibiting their response during a stop signal delay (SSD), which became increasingly difficult (or delayed) as they successfully progressed. Since the main aim of the study was to explore the impact of sleep, analyses controlled for study condition. Analyses involving changes in sleep also accounted for sleep duration the night before the study visit.
Results:
On average, students reported being under-slept the night before the lab visit, getting 38 minutes less sleep than their “typical” sleep (7 hrs 3 min). Hierarchical regression analyses demonstrated that sleep duration the night before the lab visit was not associated with inhibition (i.e., SSA unhealthy, SSA healthy, SSD). In contrast, a greater change in sleep, or getting less sleep than “typical,” was associated with worsened inhibition across inhibition variables (SSA healthy, SSA unhealthy, SSD) above and beyond sleep duration at T1. At T2, only one analysis remained significant, such that getting less sleep than “typical” was associated with lower accuracy in appropriately identifying unhealthy images (SSA unhealthy), whereas the other analyses only approached statistical significance.
Conclusions:
These findings suggest that changes in sleep, or getting less sleep than typical, may impact inhibition performance measured in a lab, even when accounting for how much sleep they got the night before. Specifically, getting less sleep than typical was associated with reduced accuracy in selecting unhealthy images, a finding that was consistent across two visits to the lab. These preliminary findings offer opportunities for lab-based experiments to investigate the role of sleep when measuring inhibition performance. Further, clinicians conducting neuropsychological assessments in clinical settings may benefit from assessing sleep the night before the appointment and determine if this represents a change from their typical sleep pattern.
Functional connectivity of the default mode network (DMN) during rest has been shown to be different among adults with Mild Cognitive Impairment (MCI) relative to aged-matched individuals without MCI and is predictive of transition to dementia. Post-traumatic stress disorder (PTSD) is also associated with aberrant connectivity of the DMN. Prior work from this group has demonstrated a higher rate of MCI and PTSD among World Trade Center (WTC) responders relative to the general population. The current study sought to investigate the main and interactive effects of MCI and PTSD on DMN functioning. Based on prior work, we hypothesized that MCI, but not PTSD, would predict aberrant connectivity in the DMN.
Participants and Methods:
99 WTC responders aged 44–65, stratified by MCI status (yes/no) and PTSD status (yes/no) and matched for age in years, sex (male vs. female), race (white, black, and other), educational attainment (high school or less, some college/technical school, and university degree), and occupation on September 11, 2001 (law enforcement vs. other), underwent fMRI using a 3T Siemens Biograph MR scanner. A single 10-minute continuous functional MR sequence was acquired while participants were at rest with their eyes open. Group-level analyses were conducted using SPM-12, with correction for multiple comparisons using AFNI's 3dClustSim. Based on this threshold, the number of comparisons in our imaging volume, and the smoothness of our imaging data as measured by 3dFWHMx-acf, a minimum cluster size of 1134 voxels was required to achieve a corrected p < .05 with 2-sided thresholding. Spherical 3 mm seeds were placed in the dorsal (4, -50, 26) and ventral (4, -60, 46) posterior cingulate cortex (PCC).
Results:
Individuals with PTSD demonstrated significantly less connectivity of the dorsal posterior cingulate cortex (PCC) with medial insula (T = 5.21), subthalamic nucleus (T = 4.66), and postcentral gyrus (T = 3.81). There was no difference found in this study for connectivity between groups stratified by MCI status. There were no significant results for the ventral PCC seed.
Conclusions:
Contrary to hypotheses that were driven by a study of cortical thickness in WTC responders, the impact of PTSD appears to outweigh the impact of MCI on dorsal DMN connectivity among WTC responders stratified by PTSD and MCI status. This study is limited by several issues, including low number of female and minority participants, relatively small group cell sizes (n = 23–27 per cell), a brief resting state sequence (10 minutes), and lack of a non-WTC control group. Importantly, responders are a unique population so generalizability to other populations may be limited. Individuals in the current study are now being followed longitudinally to relate baseline resting state functional connectivity with cognitive changes and changes in connectivity over a four-year period.
The current study investigated whether older adults’ cognitive test scores at the time of long-term care nursing home admission are associated with psychological well-being over the first six months. We analyzed the link between Mattis Dementia Rating Scale (DRS-2) subscale scores and anxiety, depression, quality of life, and positive/negative affect.
Participants and Methods:
Participants were recently admitted long-term care residents from 13 nursing homes in the Louisville, KY area. Sixty-two older adults were administered the DRS-2 shortly after nursing home admission. Using a DRS-2 scaled-score cutoff of less than 6, 52% of participants scored as cognitively impaired. Self-report measures of anxiety (RAID), depression (PHQ-9), quality of life (QoL-AD), and positive/negative affect (Philadelphia Geriatric Center Affect Rating Scale) were collected at time of admission, and 3 and 6 months later.
Results:
The DRS-2 attention subscale significantly correlated with baseline depression symptoms. No other DRS-2 subscale or the DRS-2 total score correlated with anxiety, depression, quality of life, or affect ratings at admission. Baseline DRS-2 attention, initiation/perseveration, and memory had significant correlations with self-report measures at 3 and 6 months; these DRS-2 scores were selected for further analysis. Mixed ANOVAs found a significant main effect of group (impaired vs. not-impaired) for the initiation/perseveration subscale, memory subscale, and DRS-2 total score on negative affect; impairment in any of these domains was associated with lower reported negative affect at all three time points. There was no significant effect of cognitive scores on any other self-report measure. There was a significant, positive linear trend in quality of life over time. There was a significant quadratic trend in depression symptoms, with decreased depression reported at 3 months and an increase at 6 months.
Conclusions:
Impaired performance on the DRS-2 was associated with lower negative affect over time. Cognitive impairment was not associated with anxiety, depression, quality of life, or positive affect. There appear to be reliable trends in some psychological factors regardless of cognitive scores, with an increase in quality of life over time and a temporary decrease in reported depression captured at 3 months. The relationship between cognitive impairment and negative affect should be interpreted with caution, as only 22 residents completed the affect self-report at all three time points. Overall, we found limited evidence of an association between cognitive scores at time of admission and self-reported psychological factors at 3 and 6 months.
As neuropsychologists aim to collect valid data, maximize the utility of assessments, make effective use of time, and best serve patient populations, measurement of performance validity is considered a critical issue for the field. As effort may vary across an evaluation, including performance validity tests (PVTs) throughout the assessment is important. Incorporating embedded PVTs in addition to freestanding PVTs can be particularly useful in this regard. COWAT and animal naming are commonly administered verbal fluency measures. While there have been past investigations into their potential for detecting invalid performance, they are limited, and more research is needed. Perhaps most promising, Sugarman and Axelrod (2015) described a logistic regression derived formula utilizing the combined raw scores of COWAT and animal naming. The current study aimed to investigate the use of embedded PVTs within COWAT and animal naming to provide further support for the use of embedded PVTs in these measures.
Participants and Methods:
All subjects were from a mixed clinical sample comprising military veterans from two VA Medical Centers in the northeast U.S., who were referred for neuropsychological evaluation. Subjects deemed credible had zero PVT failures. Subjects were considered non-credible performers if they failed at least two out of a possible eight PVTs administered. Subjects who failed one PVT were excluded from the study (n = 53). The final sample consisted of 116 individuals with credible performance (Mean Age = 35.5, SD = 8.8; Mean Edu = 13.6, SD = 2; Mean Est. IQ = 106, SD = 7.9) and 94 individuals with psychometrically determined non-credible performance (Mean Age = 38.5, SD = 9.4; Mean Edu = 113, SD = 2.1; Mean Est. IQ = 101, SD = 8.7). Performance of COWAT and animals in detecting non-credible performances was evaluated through calculation of classification accuracy statistics and use of the logistic regression formulas reported in Sugarman and Axelrod (2015).
Results:
For COWAT, the optimal cutoff was a raw score of <27 (specificity = 89%; sensitivity = 31%), and a T-score of <35 (specificity = 92%; sensitivity = 31%). For animal naming, optimal cutoffs were <16 for raw score (specificity = 92%, sensitivity = 38%) and <37 for T-score (specificity = 91%; sensitivity = 33%). The logistic regression formula based on raw scores for both COWAT and animal naming was inadequately sensitive at the recommended cutoff in this sample, but a coefficient of > .28 was revealed to be optimal (91% specificity; 42% sensitivity). When the formula for T-scores was used, a coefficient of > .38 was optimal (91% specificity; 28% sensitivity).
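The logistic-regression approach above combines the two fluency scores into a single predicted probability, which is then thresholded and evaluated for specificity and sensitivity. A sketch with placeholder coefficients and fabricated scores for illustration only (the published Sugarman and Axelrod coefficients are not reproduced here):

```python
import math

def logistic_probability(cowat_raw, animals_raw, b0, b1, b2):
    """Predicted probability of non-credible performance from a
    logistic regression combining COWAT and animal-naming raw scores."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * cowat_raw + b2 * animals_raw)))

def classification_accuracy(scores, labels, cutoff):
    """Specificity and sensitivity for a 'flag if score > cutoff' rule.

    labels: True = non-credible (positive class), False = credible.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y and s > cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if not y and s <= cutoff)
    pos = sum(labels)
    neg = len(labels) - pos
    return tn / neg, tp / pos  # (specificity, sensitivity)

# Placeholder coefficients and fabricated score pairs -- illustrative
# only, not the published formula or the study's data.
pairs = [(40, 22), (18, 11), (38, 20), (20, 12)]  # (COWAT, animals)
labels = [False, True, False, True]
scores = [logistic_probability(c, a, b0=2.0, b1=-0.05, b2=-0.08)
          for c, a in pairs]
spec, sens = classification_accuracy(scores, labels, cutoff=0.5)
```

Lowering the cutoff trades specificity for sensitivity; PVT research typically fixes specificity at or above ~90% and reports the sensitivity achieved at that point, as in the results above.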
Conclusions:
Results of the current research suggest that PVTs embedded within the commonly administered COWAT and animal naming verbal fluency tests can effectively detect low effort, in concordance with generally accepted standards. A logistic regression formula using raw scores in particular appears to be most effective, consistent with findings reported by Sugarman and Axelrod (2015).
Responsive neurostimulation (RNS) is a surgical intervention to reduce the frequency of seizures as an adjunctive therapy for patients with drug-resistant epilepsy (DRE). Presurgical neuropsychological evaluations capture symptoms of anxiety and depression, which occur at higher rates within the epilepsy population than in the general population; however, the effects of mood are commonly overlooked or underappreciated in the conceptualization of cognitive functioning and overall quality of life. Previous studies have shown the effects of attentional control and executive functioning on engagement in meditative states. The present study examines pre- and post-meditation self-reported anxiety symptoms and the electrophysiological changes captured intracranially during meditation sessions in patients implanted with an RNS device. This study seeks to utilize presurgical neuropsychological evaluations to explore relationships between cognitive profiles, meditative state changes, and reductions in anxiety.
Participants and Methods:
This study presents a series of 10 patients who underwent RNS device implantation for the treatment of DRE at Mount Sinai Hospital. All patients had at least one contact in the basolateral amygdala. Prior to surgical implantation of the RNS device, all patients completed a comprehensive neuropsychological evaluation based on the NIH Common Data Elements Battery for Epilepsy Patients. Patients in this study completed a 17- and 22-minute meditation protocol based on loving-kindness and Focal Awareness (FA) meditation. Control points and mind-wandering phases were utilized to distinguish the meditative portion of the study during intracranial recordings. All patients completed a pre- and post-meditation questionnaire adapted from the PROMIS Anxiety Short Form as well as self-ratings on meditation depth and satisfaction.
Results:
Presurgical neuropsychological evaluation of patients showed elevated levels of anxiety on the BAI (M = 18.14, SD = 12.03) and depression on the BDI-II (M = 15.57, SD = 6.92). Neuropsychological findings localized to frontal or frontotemporal deficits in 80% of the patients captured in this study. Regarding lateralization, 50% of patients presented with bilateral weakness on neuropsychological evaluation, with the rest showing unilateral profiles. A negative correlation was observed between patient responses on pre-meditation anxiety measures and self-reported depth of engagement in meditation, r = -0.65, p = .043. When all meditation sessions were evaluated, patients displayed a reduction in anxiety levels from pre- to post-meditation, t = 2.3, p = .03.
Conclusions:
Present findings suggest a reduction in anxiety symptoms following completion of a meditation paradigm. Additionally, a relationship between anxiety and depth of engagement in meditation was identified. During each meditation session, electrocorticography data was collected and analyzed. Given the high comorbidities of anxiety and depression as well as cognitive symptoms common for individuals with epilepsy, a systems-based approach may enhance conceptualization of neuropsychological and neuropsychiatric evaluations, which may have a significant clinical impact. Evaluation of neuropsychological profiles, meditation effects, and anxiety in this population may support cross-discipline understanding of cognitive and psychiatric profiles to better inform treatment recommendations.
Sleep deprivation and depressive symptoms have been shown to negatively impact cognitive function within older adult populations (Gilley, 2022; Donovan et al., 2016). However, there is minimal research on interactions between sleep disturbance and depressive symptoms in relation to their shared impact on cognitive impairment. The purpose of this study is to examine possible interactions between sleep disorders and depression and their relationship with cognition among relatively good functioning and healthy older adults.
Participants and Methods:
The sample was obtained from the Memory and Aging Project (Rush Alzheimer's Disease Center, Rush University, 2019) and consisted of 3,345 community-dwelling older adults. The study analyzed data from 2,552 women (76.3%) and 793 men (23.7%). The average age of participants was 80 years and ranged from 45 to 98 years old. Measures used included the Berlin Questionnaire (risk for sleep apnea), the Center for Epidemiological Studies Depression Scale (CES-D; depression), and a neuropsychological battery (visuospatial ability/perceptual reasoning, processing speed, and semantic memory).
Results:
ANOVA analyses exhibited a significant main effect of depression on visuospatial ability/perceptual reasoning (p < .001), processing speed (p < .001), and semantic memory (p < .001). No significant main effect was found for sleep apnea on these cognitive domains. However, when sleep apnea was analyzed between those with any depressive symptoms versus those without, significant interactions were found for visuospatial ability/perceptual reasoning (p = .027), processing speed (p < .001), and semantic memory (p = .016). Sleep apnea symptoms had a greater detrimental effect on visuospatial skills and perceptual reasoning (F = 4.90; p = .027) only when any depressive symptom was present. In contrast, there was a steeper decline in processing speed when only depressive symptoms were present, apart from sleep apnea symptoms (F = 10.34; p = .001). Similarly, depressive symptoms had a greater negative effect on semantic memory for older adults who reported no sleep apnea symptoms compared to those who did (F = 5.83, p = .016).
Conclusions:
The current study indicated that while sleep apnea was negatively related to several cognitive domains, the impact on visuospatial skills and perceptual reasoning became greater in the presence of depression among older adults. However, the detrimental impact of sleep apnea was somewhat less in the presence of depression for processing speed and semantic memory. This may be due to likely higher endorsements of depressive symptoms compared to sleep apnea symptoms within the study sample. These findings suggest that there are differential interactive effects of sleep impairment and depressive symptoms on cognitive domains among older adults. Considering the relationship that exists between depression and increased disease burden among older adults, it is crucial for clinicians to also take sleep behaviors into account when examining and treating their patients. Clinicians should be mindful of their older patients' sleep health and depression measures when cognitive declines are suspected. These findings also suggest that cognitive performance may be improved by treating any symptoms of sleep apnea and depression in older adults.
Evidence regarding cognitive impairment following concussion/mild traumatic brain injury (mTBI) has been conflicting. Criticism has focused on what is being measured, how it is being measured, and who is being measured (Pertab et al., 2009; Iverson, 2010). However, the literature suggests that clinicians and researchers should examine how individuals complete a task rather than what they achieve (Geary et al., 2011). Studies examining the drawing process used to complete the Rey-Osterrieth Complex Figure Task (RCF) have been inconclusive and methodologically weak. The current study addressed several of these criticisms and limitations by examining whether observing the RCF drawing process, including a novel Strategy construct, could support a diagnosis of persisting post-concussive symptoms.
Participants and Methods:
Sixteen individuals with a history of concussion/mTBI and sixteen matched controls (age, sex, IQ) were included in multiple regression analyses examining whether RCF drawing constructs predict post-concussive symptoms (mean age 43.59 years; 22 female). At least 3 months had passed since the concussive/mTBI event. Post-concussive symptoms were assessed with the Rivermead Post-Concussive Symptoms Questionnaire (RPCSQ) and the Mental Fatigue Scale (MFS). Separate regression analyses were conducted for each scale. Predictor variables were statistically selected from a catalogue of 4 RCF drawing process constructs (Wholeness, Order, Continuation and Strategy), 15 traditional measures of cognitive function, and 3 psychological state measures. Seventeen variables were included in the model for the RPCSQ, including Order and Strategy; 18 variables were included for the MFS, including Order, Continuation and Strategy.
Results:
Order scores were found to be among the strongest predictors of RPCSQ scores (B = -2.06, β = 0.20) and MFS scores (B = -1.54, β = 0.26). Individuals drawing fewer core elements at the start of the drawing process reported more post-concussive symptoms. Participants who adhered to a stronger temporal-spatial strategy heuristic, as measured by the Strategy construct, reported more symptoms, particularly mental fatigue (RPCSQ: B = 0.49, β = 0.09; MFS: B = 0.58, β = 0.19). Continuation was also found to be predictive of MFS scores (B = -0.24, β = -0.14), such that the fewer continuation points observed, the greater the MFS score.
Conclusions:
Two constructs of RCF drawing process - Order and Strategy - were found to predict persisting post-concussive symptoms generally, and mental fatigue specifically. Continuation was also found to predict mental fatigue. Such findings provide a cognitive explanation for patient reports of mental fatigue following concussion - recognised as the most common and persistent symptom. Strict adherence to a temporal-spatial strategy may indicate cognitive inflexibility - a theory supported by the inclusion and influence of other cognitive tasks in the regression models that rely on cognitive flexibility. Individuals exert more effort to shift between perceptual planes and to override global bias, thereby expending cognitive resources more quickly and to a greater extent. These findings provide a credible explanation for the lack of evidence of cognitive impairments in previous research, where neuropsychological tasks focus on attainment rather than process. These findings highlight the clinical importance of assessing cognitive dysregulation, specific cognitive processes and cognitive deficits post-concussion/mTBI.
The objective of this study was to explore the impact of the period of mandatory preventive social isolation (ASPO) on the mental health of caregivers of people with dementia, and to identify which factors predicted caregiver overload.
Participants and Methods:
During the first 3 months of the ASPO (June 2020 to September 2020), a sample of 112 caregivers (75.89% female; age 58.65 ± 14.30) of patients with dementia from a Memory Center answered, remotely (online or by telephone), a survey comprising the following questionnaires: the Zarit Caregiver Burden Scale (ZBI); the weekly hourly load dedicated to the care of the patient with dementia; the use of time in unpaid activities, recorded through an activity diary provided by the Argentine National Institute of Statistics and Census (INDEC); the Caregiver Activities Survey (CAS); and the Depression, Anxiety and Stress Scale (DASS-21). These questionnaires evaluate the conditions and characteristics of caregiving tasks and their impact on the caregiver in the context of ASPO. Additionally, it was recorded whether the person with dementia, the caregiver, or persons living with them had had COVID-19.
Results:
Descriptively, a disparity was observed in the gender of caregivers of persons with dementia: caregiving was inequitably distributed between men (24.11%) and women (75.89%). This difference hinders direct comparison between men and women. A regularized L2 (ridge) regression was performed to identify predictors of caregiver overload, identifying the number of caregiving hours (β=0.090), DASS depression (β=0.085), DASS anxiety (β=0.099), DASS stress (β=0.164), fear of COVID-19 (β=0.141), and lower patient cognitive performance on the MMSE (β=-0.41), and to a lesser extent sex, as the greatest contributors to caregiver overload. Additionally, a mediation analysis was performed in which the number of caregiving hours (CAS; r= 0.254, r= 0.292, r= 0.252, r= 0.252, r= -0.37), being a primary caregiver, and fear of COVID-19 (r= 0.335, r= 0.432, r= 0.402, r= -0.496) were found to mediate the effect between anxiety, depression, and stress (DASS) and overload (ZBI).
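The regularized L2 (ridge) regression used above to rank predictors of overload can be sketched via its closed-form normal equations. This is a minimal illustration only; the data, predictors, and penalty value below are toy placeholders, not the study's.

```python
def ridge_coefficients(X, y, lam):
    """Solve (X'X + lam*I) beta = X'y for the ridge coefficients.
    X: list of observation rows (already standardized), y: outcomes,
    lam: L2 penalty strength (lam = 0 reduces to ordinary least squares).
    Toy illustration -- not the study's data or model."""
    p = len(X[0])
    # Build the penalized normal-equation system A beta = b
    A = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(row[i] * t for row, t in zip(X, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta
```

Increasing `lam` shrinks the coefficients toward zero, which stabilizes the ranking of predictors when they are correlated, as caregiving-hour and mood variables typically are.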
Conclusions:
Caregivers of patients with dementia have suffered sequelae such as anxiety, stress, depression, and overload (caregiver burden) in the context of the spread of COVID-19 and during mandatory preventive social isolation. Being a primary caregiver, dedicating more hours to caregiving, and fear of COVID-19 are factors that contribute significantly to caregiver burden and mediate between this burden and mood variables. Public policies to support caregivers and information about the disease could modify these variables and reduce caregiver burden.
The current study aimed to evaluate the psychometric properties and diagnostic accuracy of the 32-item version of the Multilingual Naming Test (MINT) in a sample of English and Spanish monolingual and bilingual older adults from two ethnic groups (EA: European American; HA: Hispanic American) with typical and atypical aging. An IRT model was used to identify 24 MINT items assessed across ethnicity and testing-language groups (Spanish and English). We analyzed the discriminant and predictive validity of the 32-item and 24-item scales across diagnostic groups (cognitively normal [CN], mild cognitive impairment [MCI], and dementia [AD]). Diagnostic accuracy was then assessed for both versions using receiver operating characteristic (ROC) curves, reporting the area under the curve (AUC). We expected the MINT to distinguish between the CN and AD groups, but not between CN and MCI or between MCI and AD. We conducted IRT analyses to evaluate the cross-language validity of the items of the 32-item MINT in English and Spanish through Rasch analysis across our two ethnic groups. Finally, we tested the association between MINT scores and MRI volumetric measures of language-related areas in the temporal and frontal lobes of both cerebral hemispheres.
Participants and Methods:
The sample comprised 281 participants (178 female) enrolled in the 1Florida Alzheimer's Disease Research Center (ADRC), with 175 participants self-identified as HA (51 tested in English and 124 in Spanish) and 106 as EA, all of the latter monolingual English speakers. The participants were classified into three diagnostic groups: 1. CN (n = 94); 2. MCI (n = 148); and 3. AD (n = 39). Participants were evaluated yearly with a comprehensive neuropsychological battery, including the MINT, a standard confrontation naming task that requires patients to retrieve words upon presentation of a line drawing.
Results:
We observed a ceiling effect for four items (Butterfly, Glove, Watch, and Candle). Four items were easier in English (Blind, Gauge, Porthole, and Pestle) and four in Spanish (Dustpan, Funnel, Anvil, and Mortar). On the 32-item version of the MINT, EA scored significantly higher than HA, but when those eight items were removed, the ethnic difference was attenuated and no longer statistically significant (controlling for education). The ROC curves showed that both versions of the MINT had poor accuracy in identifying CN participants, acceptable accuracy in identifying dementia participants, and unacceptable accuracy in classifying MCI participants. The 32-item MINT in English and Spanish and the 24-item MINT in Spanish were significantly correlated with bilateral MTG volume. However, the 24-item MINT in English was correlated with this area's volume only in the right hemisphere. The left FG correlated with MINT scores regardless of language and MINT version. We also found some differential correlations depending on the language of administration. The bilateral hippocampi, STG, MTG and FG, and right ITG were significantly correlated only with MINT Spanish scores, while the left ITG was significant only when either version of the MINT was administered in English.
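The item-level language comparisons above rest on the Rasch (one-parameter IRT) model, which expresses each response probability in terms of person ability and item difficulty. A minimal sketch follows; the parameter values in the usage note are illustrative, not the study's estimates.

```python
from math import exp

def rasch_p(theta, b):
    """Rasch (1PL) model: probability that a person with ability theta
    responds correctly to an item of difficulty b.
    Illustrative sketch only; theta and b are on a shared logit scale."""
    return 1.0 / (1.0 + exp(-(theta - b)))
```

An item that is "easier" in one language corresponds to a lower estimated difficulty b for that language group, so at the same ability level theta the probability of a correct response is higher, which is how differential item functioning of the kind removed from the 24-item scale shows up.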
Conclusions:
Our results highlight the importance of analyzing cross-cultural samples when implementing neuropsychological tests.
Stroke results in various cognitive and motor impairments. The most frequent cognitive problems are deficits of spatial and non-spatial attention, typically caused by a unilateral brain lesion. Attention is typically assessed with several different paper-and-pencil tests, which have long been criticized for their lack of theoretical basis, their limited ecological validity with respect to deficits experienced in daily life, and their lack of measurement sensitivity (Appelros et al., 2004; Azouvi, 2017). Here, our overall aim was to develop an innovative, integrative serious game in an immersive environment. The REASmash combines the evaluation of spatial attention, non-spatial attention, and motor performance. We present the results of the spatial and non-spatial attention evaluation.
Participants and Methods:
Eighteen individuals with a first stroke and 40 age-matched healthy controls were assessed on the REASmash. They were instructed to find a target mole presented amongst distractor moles. The stimulus array consisted of a grid of 6 columns and 4 rows of molehills, from which the target and 11, 17 or 23 distractor moles could randomly appear, in two search conditions (single-feature condition and saliency condition). Responses were made with the ipsilesional hand by individuals with stroke and with the dominant hand by the healthy controls. Participants were also evaluated with two standardized clinical tests of attention: the hearts cancellation task of the Oxford Cognitive Screen and the visual scanning subtest of the Test for Attentional Performance.
Results:
Validation results showed significant and strong correlations between the REASmash and the two reference tests, with the REASmash showing high sensitivity and specificity (i.e., correct identification of post-stroke vs. control individuals). The REASmash also showed significant and strong test-retest reliability. We additionally evaluated user experience using the User Experience Questionnaire (UEQ); results showed excellent attractiveness and novelty, and good stimulation and efficiency.
Conclusions:
In conclusion, the REASmash is a novel immersive virtual-environment serious game that is valid, sensitive, and usable. It provides a new diagnostic measure of spatial and non-spatial attention impairment.
Spinocerebellar ataxia type one (SCA1) is an autosomal dominant neurodegenerative disease caused by an expanded CAG repeat that encodes glutamine (polyQ) in the affected ATXN1 gene. SCA1 pathology is commonly characterized by degeneration of the cerebellar Purkinje cells (PC) and brainstem. Symptoms include motor dysfunction, cognitive impairments, bulbar dysfunction, and premature death. Atxn1175Q/2Q knock-in mice were previously developed to model SCA1 by inserting 175 expanded CAG repeats into one allele of the Atxn1 gene, producing mice that express expanded ATXN1 throughout the brain and display SCA1 symptoms. Previous research has implicated localization of the ATXN1 protein to the nucleus in pathology. Therefore, the Atxn1175QK772T/2Q mouse model was created by disrupting the nuclear localization sequence (NLS) in the expanded Atxn1175Q/2Q mice, replacing lysine with threonine at position 772 of the NLS. Since this amino acid change had previously blocked PC disease in another mouse model, the Atxn1175QK772T/2Q mice were created to examine how the NLS mutation affects neuronal cells. RNA sequencing analysis previously identified differentially expressed genes (DEGs), with Atxn1175Q/2Q downregulated compared to Atxn1175QK772T/2Q and Atxn12Q/2Q in the cerebellum, medulla, cortex, hippocampus, and striatum. The aim was to analyze these brain regions to validate the RNAseq differential gene expression at the protein level.
Participants and Methods:
Western blots were performed on the following mouse models (n = 12): wild-type mice (Atxn12Q/2Q), mice with the nuclear localization sequence mutation (Atxn1175QK772T/2Q), and mice with 175 expanded CAG repeats (Atxn1175Q/2Q). Based on the RNAseq data, the cerebellum was probed for ion channel proteins (Cav3.1, Kcnma1, and Trpc3) and the striatum for a protein found in medium spiny neurons (DARPP-32).
Results:
In the cerebellum, Atxn1175Q/2Q was significantly downregulated compared to Atxn1175QK772T/2Q for Cav3.1, Trpc3, and Kcnma1. Atxn1175Q/2Q was significantly downregulated compared to Atxn12Q/2Q for Trpc3 and Kcnma1. Atxn1175QK772T/2Q was significantly downregulated compared to Atxn12Q/2Q for Trpc3. In the striatum, DARPP-32 expression was significantly reduced between Atxn12Q/2Q and Atxn1175QK772T/2Q, between Atxn12Q/2Q and Atxn1175Q/2Q, and between Atxn1175Q/2Q and Atxn1175QK772T/2Q.
Conclusions:
The significantly reduced expression at the protein level in the cerebellum and striatum therefore validates the RNAseq differentially expressed genes. Additionally, the downregulation of both Atxn1175Q/2Q and Atxn1175QK772T/2Q compared to Atxn12Q/2Q in the striatum is consistent with the lack of learning by those mouse models on the rotarod, suggesting that the nuclear localization mutation does not rescue learning. Interestingly, the downregulation of Atxn1175Q/2Q compared to Atxn1175QK772T/2Q likely supports the age-related motor decline rescue on the rotarod seen in Atxn1175QK772T/2Q but not Atxn1175Q/2Q.
Recent consensus guidelines have advocated for the use of multivariate performance validity assessment on ability-based measures such as those used in neuropsychological assessment. Further, previous research has demonstrated that aggregating performance validity indicators may produce superior classification accuracy. The present study builds upon this research by aggregating data from three of the most commonly used performance validity measures (Test of Memory Malingering [TOMM], Rey Fifteen Item Test with recognition trial [FIT plus recognition], and Reliable Digit Span [RDS]) to create a composite performance validity measure in a veteran mild traumatic brain injury (mTBI) population.
Participants and Methods:
Data of patients evaluated at a VA hospital who had completed the RDS, FIT plus recognition, and TOMM as part of their clinical neuropsychological evaluation were analyzed (n = 20). Two composite performance validity indexes were created: a Single Cutoff Performance Validity Index (SC-PVI), which measures the quantity of failures across performance validity measures (PVMs) by summing the total number of PVM failures, and a Multiple Cutoff Performance Validity Index (MC-PVI), which measures the number of failures as well as the degree of failure(s) across measures of performance validity (e.g., a participant would attain a score of 3 on a measure if their performance failed to reach the conservative cut point, and a score of 1 if they met the conservative cut point yet failed to reach the liberal cut point).
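The two indexes described above can be summarized algorithmically. The sketch below is a toy illustration: the measure names, scores, and cut points are hypothetical placeholders, not the study's actual criteria, and "failure" is taken to mean scoring below the cut.

```python
# Hypothetical placeholder cut points -- not the study's actual criteria.

def sc_pvi(scores, standard_cuts):
    """Single Cutoff PVI: number of PVMs failed at the standard cut point."""
    return sum(1 for m, s in scores.items() if s < standard_cuts[m])

def mc_pvi(scores, liberal_cuts, conservative_cuts):
    """Multiple Cutoff PVI: 3 points for each PVM failed at the
    conservative cut point, 1 point for each PVM that meets the
    conservative cut but fails the liberal one (range 0-9 over
    three measures)."""
    total = 0
    for m, s in scores.items():
        if s < conservative_cuts[m]:
            total += 3
        elif s < liberal_cuts[m]:
            total += 1
    return total
```

Under this scheme a score of 0 on the MC-PVI means every liberal cut point was passed, a score of 9 means every conservative cut point was failed, and intermediate scores capture the "grey area" profiles discussed in the conclusions.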
Results:
Only one participant (5%) attained a score of 0 on the SC-PVI (i.e., passed all PVMs using standard cutoffs) and on the MC-PVI (i.e., passed the most liberal cut points on all three PVMs). Conversely, eight participants (40%) attained a score of 3 on the SC-PVI (i.e., failed all three PVMs), and four participants (20%) attained a score of 9 (i.e., failed the most conservative cut points on all three PVMs). Results showed a significant (p < .001) ordinal association between the two indices (G = .984); however, there was no significant agreement between the SC-PVI and MC-PVI models (κ = -.087; p = .127).
Conclusions:
Data revealed discordant findings across the three PVMs utilized. The majority of participants (75%) scored between 2 and 8 on the MC-PVI, meaning that they neither passed all liberal cut points nor failed all conservative cut points. These "grey area" scores suggest an indeterminate range of performance validity that cannot be captured by a solitary cut point or neatly classified as pass or fail. The utility of multiple cutoff performance validity models (i.e., aggregating PVMs to consider both the severity and the number of failures) is that they capture the nuance of these data when determining and discussing the credibility of a profile. Multiple cut point data also highlight how the choice of cutoff influences the outcome of performance validity research and clinical decision making. As such, future research on the classification accuracy of this MC-PVI is needed.
Compensatory strategy training has been identified as a useful mechanism to improve everyday cognitive function among older adults with Mild Cognitive Impairment (MCI). Despite this, few studies have looked at cognitive factors that support adherence and engagement in these programs, which are key to maximizing benefit. The present study aimed to evaluate the relationship between cognition, adherence, and engagement during a group-based compensatory strategy training for people with MCI. We hypothesized individuals with better memory and executive function performance would show better adherence and higher engagement scores in cognitive training classes.
Participants and Methods:
Twenty-five participants enrolled in Emory University's Charles and Harriet Schaffer Cognitive Empowerment Program (CEP) completed an 11-week compensatory strategy training group (CEP-CT). CEP-CT is adapted from Ecologically Oriented Neurorehabilitation to be suitable for people with MCI. Enrolled participants were on average 74.3 years old (SD = 5.4), 52% male, primarily Caucasian (80%; 16% African American), and college-educated (M = 16.5 years; SD = 2.7). All participants received clinical diagnoses of MCI prior to enrollment in the program. Participants completed multiple cognitive measures, including the Montreal Cognitive Assessment (MoCA), Hopkins Verbal Learning Test (HVLT), Trail Making Test A & B (TMT), Number Span Forward (NSF) and verbal fluency (S-words and Animals). For all group sessions, class attendance (present vs. not present) was recorded for each participant and their care partner, and engagement ratings for participants were recorded by the facilitator on a 1 to 5 scale (higher scores indicate better engagement). Outcomes included adherence to cognitive training (percentage of sessions attended; M = 82% class attendance, SD = 18%) as well as the average engagement rating across the 11 weeks (M = 3.25, SD = .40).
Results:
Bivariate Pearson correlations revealed that individuals who attended more classes also demonstrated better engagement in class (r = .44, p = .03). Class attendance was significantly related to performance on measures of memory and executive function (HVLT: r = -.42, p = .04; TMT-B: r = .69, p = .04), such that participants who performed worse on these measures attended more CEP-CT classes. Average engagement ratings were unrelated to cognitive performance.
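The bivariate Pearson correlations reported here can be computed directly from paired observations. A minimal sketch follows; the values in the test pairings are illustrative, not study data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples.
    Illustrative sketch; inputs are plain lists of paired observations."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation denominators
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

A positive r (as for attendance and engagement) means the two variables rise together; a negative r (as for HVLT and attendance) means higher scores on one accompany lower scores on the other.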
Conclusions:
Results did not support the initial hypotheses, and instead indicate that individuals with poorer performance on measures of memory and executive function had better adherence to CEP-CT classes, as measured by attendance. These results may indicate that individuals experiencing cognitive difficulties are more likely to attend cognitive training classes. Subjective engagement ratings were unrelated to cognition; however, individuals who attended more sessions were more engaged in cognitive training classes. Future areas of research include objective measurement of class engagement as well as the incorporation of more nuanced adherence metrics to further elucidate the relationship between these factors and cognition in MCI.
Differences in adaptive functioning present early in development for many children with monogenic (Down syndrome, Fragile X) and neurodevelopmental disorders. At this time, it is unclear whether children with agenesis of the corpus callosum (ACC) present with early adaptive delays, or whether difficulties emerge later as functional tasks become more complex. While potential delays in motor development are frequently reported, other domains such as communication, social and daily living skills are rarely described. We used a prospective, longitudinal design to examine adaptive behavior from 6 to 24 months in children with ACC and compared their trajectories to those of children with monogenic and neurodevelopmental conditions.
Participants and Methods:
Our sample included children with primary ACC (n = 27-47 depending on time point) whose caregivers completed the Vineland Adaptive Behavior Scales-Interview, 3rd Edition, via phone at 6, 12, 18 and 24 months. Comparison samples (using the Vineland-2) included children with Down syndrome (DS; n = 15-56), Fragile X (FX; n = 15-20), children at high familial likelihood for autism (HL-; n = 192-280), and children at low likelihood (LL; no family history of autism and no developmental/behavioral diagnosis; n = 111-196). A subset of the HL children received an autism diagnosis (HL+; n = 48-74). The DS group did not have an 18-month Vineland.
Results:
A series of linear mixed model analyses (using maximum likelihood) for repeated measures was used to compare groups on three Vineland domains at the 6-, 12-, 18- and 24-month timepoints. All fixed factors (diagnostic group, timepoint, and group x timepoint interaction) accounted for significant variance in all Vineland domains (p < .001). Post hoc comparisons with Bonferroni correction examined ACC Vineland scores relative to the other diagnostic groups at each timepoint. At 6 months, parent ratings indicated the ACC group had significantly weaker skills than the LL group in the Communication and Motor domains. At 12, 18 and 24 months, ratings revealed weaker Communication, Daily Living and Motor skills in the ACC group compared to both the LL and HL- groups. Compared to the other clinical groups, the ACC group had stronger Socialization and Motor skills than Fragile X at 6 months, and at 24 months had stronger Communication and Socialization skills than both the DS and FX groups, as well as stronger Socialization than the HL+ group.
Conclusions:
Compared to children with low likelihood of ASD, children with primary ACC reportedly have weaker Communication and Motor skills from 6 to 24 months, with weakness in Daily Living Skills appearing at 12 months and all differences increasing with age. Compared to Fragile X, the ACC group exhibited relative strengths in Socialization and Motor skills starting at 6 months. By 24 months, the ACC group was outperforming the monogenic groups on Socialization and Communication. In general, ACC scores were consistent with the HL+ sample, except that the ACC group had stronger Socialization skills at 18 and 24 months. These results clearly indicate the need for early intervention in the domains of motor and language skills. Additionally, as children with ACC are at increased risk for social difficulties, research is needed both using more fine-grained social-communication tools and following children from infancy through middle childhood.
Routine cognitive screening in older adults may facilitate earlier diagnosis of neurodegenerative diseases and access to care and resources for patients and families. However, despite growing rates of Alzheimer's disease and related disorders (ADRD), the availability and implementation of cognitive screening for older adults in the US remains quite limited. Remote cognitive assessment via smartphone app may reduce several barriers to more widespread screening. We examined the validity of a remote app-based cognitive screening protocol in healthy older adults by examining remote task convergence with standard in-person assessments and with cerebral amyloid (Aβ) status as an AD biomarker.
Participants and Methods:
Participants (N = 117) were cognitively unimpaired adults aged 60-80 years (67.5% female, 88% White, 75% with education > 16 years). A portion had Aβ PET imaging results available from prior research participation (Aβ positive [Aβ+] n = 26; Aβ negative [Aβ-] n = 44). A modified Telephone Interview for Cognitive Status (TICSm) cutoff score of >34 was used to establish unimpaired cognition. Participants completed 8 consecutive assessment days using Mobile Monitoring of Cognitive Change (M2C2), a smartphone app-based testing platform developed as part of the National Institute on Aging's Mobile Toolbox initiative. Brief (i.e., 3-4 minute) M2C2 sessions were assigned daily within morning, afternoon, and evening time windows. Tasks included measures of visual working memory (WM), processing speed (PS), and episodic memory (EM) (see Thompson et al., 2022). Participants then completed a battery of standard neuropsychological assessments in person at a follow-up visit.
Results:
Participants completed 22.6 (SD = 2.6) of 24 assigned sessions (3 sessions x 8 days) on average. Performance on all M2C2 tasks decreased significantly with age. Women performed significantly better than men on the WM and EM tasks. There were no significant differences in performance by race or education. Shorter mean reaction time on M2C2 PS trials predicted faster Trails A and B completion (β = .26, p < .01, 95% CI [3.8, 23.3] and β = .20, p < .05, 95% CI [.23, 6.8], respectively). Greater mean M2C2 WM accuracy predicted longer maximum backward digit span (β = .24, p = .01, 95% CI [.02, .16]). Greater mean M2C2 EM accuracy predicted stronger Logical Memory delayed recall (β = .33, p < .001, 95% CI [.004, .012]) and total immediate recall on the Free and Cued Selective Reminding Test (β = .19, p < .05, 95% CI [.000, .006]). Moreover, EM significantly distinguished Aβ- and Aβ+ individuals (t(68) = 3.0, p < .01) with fair accuracy (AUC = .72).
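An AUC such as the .72 reported here can be read as the probability that a randomly chosen pair of individuals from the two groups is ranked correctly by the score. A minimal sketch of the rank-based (Mann-Whitney) computation follows; the score lists are toy values, not study data.

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the proportion of (positive, negative) score pairs ranked
    correctly, with ties counted as half. Toy sketch, not study data."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of .5 corresponds to chance-level discrimination and 1.0 to perfect separation, so .72 sits in the conventional "fair" range the abstract describes.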
Conclusions:
Mean performance across 8 days on each M2C2 task predicted same-domain cognitive task performance on a standard assessment battery, with medium effect sizes. Performance on the EM task was also sensitive to cerebral Aβ status, consistent with the subtle memory changes implicated in the preclinical stage of AD. These findings support the validity of this remote testing protocol in healthy older adults, with implications for future efforts to facilitate accessible and sensitive cognitive screening for early detection of ADRD. Limitations include the restricted generalizability of this primarily White and college-educated sample.