Intraindividual variability (IIV) is defined as fluctuations in an individual’s cognitive performance over time [1]. IIV has been identified as a marker of neurobiological disturbance, making it a useful metric for detecting changes in cognition among cognitively healthy individuals as well as those with prodromal syndromes [2]. IIV on laboratory-based computerized tasks has been linked with cognitive decline and conversion to mild cognitive impairment (MCI) and/or dementia (Haynes et al., 2017). Associations between IIV and Alzheimer’s disease (AD) risk factors, including apolipoprotein E (APOE) ε4 carrier status, neurodegeneration on brain imaging, and amyloid (Aβ) positron emission tomography (PET) scan status, have also been observed [1]. Recent studies have demonstrated that evaluating IIV on smartphone-based digital cognitive assessments is feasible, can differentiate between cognitively normal (CN) and MCI individuals, and may reduce barriers to cognitive assessment [3]. This study sought to evaluate whether such differences could be detected in CN participants with and without elevated AD risk.
Participants and Methods:
Participants (n=57) were cognitively normal older adults who had previously received an Aβ PET scan through the Butler Hospital Memory and Aging Program. The sample was predominantly non-Hispanic (n=49, 86.0%), White (n=52, 91.2%), college-educated (M=16.65 years of education), and female (n=39, 68.4%), with a mean age of 68 years. Approximately 42% of the sample (n=24) had received a positive PET scan result. Participants completed brief cognitive assessments (3-4 minutes each) three times per day for eight days (i.e., 24 sessions) using the Mobile Monitoring of Cognitive Change (M2C2) application, a mobile app-based cognitive testing platform developed as part of the National Institute on Aging’s Mobile Toolbox initiative (Sliwinski et al., 2018). Participants completed visual working memory, processing speed, and episodic memory tasks on the M2C2 platform. Intraindividual standard deviations (ISDs) across trials were computed for each person at each time point (Hultsch et al., 2000); higher ISD values indicate more variable performance. Linear mixed effects models were used to examine whether differences in IIV existed based on PET scan status, controlling for age, sex at birth, and years of education.
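For illustration only, the following is a minimal sketch (not the study’s actual code) of how per-session ISDs and a mixed model of this kind might be computed in Python with pandas and statsmodels; the file name and column names (subject, session, rt, pet_status, age, sex, educ) are hypothetical stand-ins for the study’s variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format trial data: one row per trial per session
trials = pd.read_csv("m2c2_processing_speed_trials.csv")

# Intraindividual standard deviation (ISD) of trial-level response times,
# computed for each person at each session (cf. Hultsch et al., 2000)
isd = (
    trials.groupby(["subject", "session"])["rt"]
    .std()
    .reset_index(name="isd")
)

# Person-level covariates (one row per participant)
covars = trials.drop_duplicates("subject")[
    ["subject", "pet_status", "age", "sex", "educ"]
]
df = isd.merge(covars, on="subject")

# Linear mixed effects model: PET status x time interaction on ISD,
# adjusting for age, sex at birth, and education, with a random
# intercept for each participant
model = smf.mixedlm(
    "isd ~ pet_status * session + age + sex + educ",
    data=df,
    groups=df["subject"],
)
result = model.fit()
print(result.summary())
```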
Results:
An interaction between PET status and time was observed on the processing speed task, such that Aβ- individuals were less variable over the eight assessment days than Aβ+ individuals (B = -5.79, SE = 2.67, p = .04). No main effects or interactions were observed on the visual working memory or episodic memory tasks.
Conclusions:
Our finding that Aβ- individuals demonstrate less variability over time on a measure of processing speed is consistent with prior work. No associations were found between PET status and IIV in the other cognitive domains. As noted by Allaire and Marsiske (2005), IIV is not a consistent phenomenon across cognitive domains; identifying which tests are most sensitive to early change is therefore crucial. Additional studies in larger, more diverse samples are needed before widespread clinical use for early detection of AD.
The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is a well-validated and reliable clinical assessment tool for characterizing cognitive function in older adults. The RBANS has been shown to reliably discriminate among Alzheimer’s disease (AD), mild cognitive impairment (MCI), and cognitively healthy (CH) individuals. While the RBANS has traditionally been administered face to face, administration is also feasible via telehealth. Due to the COVID-19 pandemic, cognitive assessments were unexpectedly moved to telehealth formats. Given this, the current study assessed whether differences emerged between face-to-face and telehealth RBANS scores in both CH individuals and those with MCI.
Participants and Methods:
A total of 61 individuals (CH: N = 27; MCI: N = 34) completed baseline and 1-year follow-up visits in the current study. The sample was predominantly female (N = 43, 70.5%), White (N = 57, 93.4%), and well educated (M = 15.93 years of education). Participants completed RBANS Form B at an in-person baseline visit and Form C at a one-year follow-up visit; higher RBANS scores indicate better overall cognitive performance. As expected, CH individuals performed better than those with MCI on immediate memory, language, attention, delayed memory, and total score; no significant differences were found for the visuospatial index. Repeated measures ANOVAs were conducted to assess whether differences in RBANS performance existed based on test administration method.
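As an illustration only, a hypothetical sketch of a mixed (2 visit x 2 follow-up format) repeated measures analysis of this kind using the pingouin package; the file name and column names are invented stand-ins for the study’s variables, and one RBANS index is shown as the dependent variable.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per visit, with
# columns subject, group (CH/MCI), visit (baseline/followup),
# format (in_person/telehealth at follow-up), and one column per RBANS index
rbans = pd.read_csv("rbans_long.csv")

# 2 (visit: within-subject) x 2 (follow-up format: between-subject) mixed
# ANOVA, run separately within the CH and MCI groups
for group in ["CH", "MCI"]:
    sub = rbans[rbans["group"] == group]
    aov = pg.mixed_anova(
        data=sub,
        dv="immediate_memory",   # repeat for language, total score, etc.
        within="visit",
        between="format",
        subject="subject",
    )
    print(group)
    print(aov.round(3))
```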
Results:
Group differences between testing formats were observed in CH individuals on immediate memory [F(1, 37) = 9.10, p < .01], language [F(1, 37) = 9.41, p < .01], and total score [F(1, 37) = 6.56, p < .05], with higher performance among those who completed the follow-up session in person. There were no differences in baseline performance on any RBANS index between those who received an in-person versus telehealth follow-up (ps > .05). No differences between formats were observed in the MCI group. There were no significant differences between the CH and MCI groups on demographic factors.
Conclusions:
Results from the current study suggest that CH individuals showed a greater difference in scores between testing formats, whereas individuals with MCI did not. The lack of difference in the MCI group may reflect less room for variability over time given their already low scores. These results suggest that while telehealth has been shown to be a viable option for RBANS administration in some samples, further work is needed to establish the equivalence of in-person and telehealth formats. This study is not without limitations. The small MCI group was segmented into in-person and telehealth subgroups, further reducing power to detect statistically significant effects. The sample was also homogeneous, consisting largely of highly educated White women. Future research should assess a larger, more diverse sample to determine whether the RBANS alone is a reliable measure for assessing cognitive change over time via telehealth in MCI.
Objectives:
To show the enhanced psychometric properties and clinical utility of the Modified Mini-Mental State Examination (3MS) compared to the Mini-Mental State Examination (MMSE) in mild cognitive impairment (MCI).
Design:
Psychometric and clinical comparison of the 3MS and MMSE.
Setting:
Neuropsychological clinic in the northeastern USA.
Participants:
Older adults referred for cognitive concerns, 87 of whom were cognitively intact (CI) and 206 of whom were diagnosed with MCI.
Measurements:
The MMSE, the 3MS, and comprehensive neuropsychological evaluations.
Results:
Both instruments were significant predictors of diagnostic outcome (CI or MCI), with comparable odds ratios, but the 3MS explained more variance and showed better classification accuracy than the MMSE. The 3MS also demonstrated a greater receiver operating characteristic area under the curve (AUC = 0.85, SE = 0.02) than the MMSE (AUC = 0.74, SE = 0.03). Scores below 95/100 on the 3MS and below 28/30 on the MMSE were suggestive of MCI. Additionally, compared to the MMSE, the 3MS shared more variance with neuropsychological composite scores in the Language and Memory domains, but not in the Attention, Visuospatial, and Executive domains. Finally, 65.5% of MCI patients were classified as impaired (scoring ≤1 SD below the mean) using 3MS normative data, compared to only 11.7% of patients classified as impaired using MMSE normative data.
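For readers who want to reproduce this kind of ROC comparison, the following is a rough sketch using scikit-learn on synthetic placeholder data: the group sizes match the abstract, but the score distributions and the resulting numbers are invented purely for illustration and are not the study’s data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic placeholder data: diagnosis coded 1 = MCI, 0 = cognitively intact
n_ci, n_mci = 87, 206
diagnosis = np.concatenate([np.zeros(n_ci), np.ones(n_mci)]).astype(int)
threems = np.concatenate(
    [rng.normal(96, 3, n_ci), rng.normal(91, 5, n_mci)]
).clip(0, 100)
mmse = np.concatenate(
    [rng.normal(29, 1, n_ci), rng.normal(27, 2, n_mci)]
).clip(0, 30)

# Lower screening scores indicate greater impairment, so negate the scores
# before computing the area under the ROC curve for predicting MCI
auc_3ms = roc_auc_score(diagnosis, -threems)
auc_mmse = roc_auc_score(diagnosis, -mmse)

# Sensitivity at the cutoffs suggested above: 3MS < 95, MMSE < 28
sens_3ms = (threems[diagnosis == 1] < 95).mean()
sens_mmse = (mmse[diagnosis == 1] < 28).mean()

print(f"AUC: 3MS={auc_3ms:.2f}, MMSE={auc_mmse:.2f}")
print(f"Sensitivity at cutoff: 3MS={sens_3ms:.1%}, MMSE={sens_mmse:.1%}")
```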
Conclusions:
Broadly speaking, our data strongly favor replacing the MMSE with the 3MS in older adults with concerns for cognitive decline.