Episodic memory functioning is distributed across two brain circuits, one of which courses through the dorsal anterior cingulate cortex (dACC). Thus, delivering non-invasive neuromodulation technology to the dACC may improve episodic memory functioning in patients with memory problems such as in amnestic mild cognitive impairment (aMCI). This preliminary study is a randomized, double-blinded, sham-controlled clinical trial to examine if high definition transcranial direct current stimulation (HD-tDCS) can be a viable treatment in aMCI.
Participants and Methods:
Eleven aMCI participants, of whom 9 had multidomain deficits, were randomized to receive 1 mA HD-tDCS (N=7) or sham (N=4) stimulation. HD-tDCS was applied over ten 20-minute sessions targeting the dACC. Neuropsychological measures of episodic memory, verbal fluency, and executive function were completed at baseline and after the last HD-tDCS session. Changes in composite scores for memory and language/executive function tests were compared between groups (one-tailed t-tests with α = 0.10 for significance). Clinically significant change, defined as > 1 SD improvement on at least one test in the memory and non-memory domains, was compared between active and sham stimulation based on the frequency of participants in each group.
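As an illustration of the comparison just described (not the authors' actual code), the sketch below runs a one-tailed Welch t-test on composite change scores and tallies participants meeting the >1 SD criterion; the group sizes mirror the abstract, but the change scores, responder flags, and variable names are hypothetical.

```python
# Illustrative run of the group comparison described above; the change scores and
# responder flags below are hypothetical, not the study data.
import numpy as np
from scipy import stats

# Composite change scores (post minus baseline) for each participant.
active_change = np.array([4.0, 21.0, -8.0, 30.0, 2.0, -10.0, -8.0])   # HD-tDCS group (N=7)
sham_change = np.array([-5.0, 3.0, -9.0, 9.0])                         # sham group (N=4)

# One-tailed Welch t-test at alpha = 0.10, testing whether active improves more than sham.
t, p_two = stats.ttest_ind(active_change, sham_change, equal_var=False)
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"t = {t:.2f}, one-tailed p = {p_one:.3f}")

# Clinically significant change: > 1 SD improvement on at least one test in a domain,
# coded here as one boolean flag per participant.
active_responders = np.array([True, True, False, True, True, False, False])
sham_responders = np.array([False, False, False, False])
print("Active responders:", active_responders.sum(), "| Sham responders:", sham_responders.sum())
```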
Results:
No statistically or clinically significant change (N-1 χ²; p = 0.62) was seen in episodic memory for the active HD-tDCS (MDiff = 4.4; SD = 17.1) or sham groups (MDiff = -0.5; SD = 9.7). However, the language and executive function composite showed statistically significant improvement (p = 0.04; MDiff = -15.3; SD = 18.4) for the active HD-tDCS group only (sham MDiff = -5.8; SD = 10.7). Multiple participants (N=4) in the active group had clinically significant enhancement on language and executive functioning tests, while no one in the sham group did (p = 0.04).
Conclusions:
HD-tDCS targeting the dACC had no direct benefit for episodic memory deficits in aMCI based on preliminary findings for this ongoing clinical trial. However, significant improvement in language and executive function skills occurred in response to HD-tDCS, suggesting HD-tDCS in this configuration has promising potential as an intervention for language and executive function deficits in MCI.
Nationally and internationally, life expectancies continue to increase, and so does the prevalence of age-related cognitive impairment (Stough et al., 2015). As a result, the consumer dietary supplement market has grown significantly in the last 20 years as older adults are inundated with media related to improving cognitive function. However, research on these nootropics and natural supplements reveals mixed efficacy (Brownie, 2009; Stough et al., 2015). At the same time, the COVID-19 pandemic has underscored the need for greater health equity and screening internationally across health service disciplines (Jensen et al., 2021; Wells & Dumbrell, 2006). This poster has two primary aims: (1) to highlight common international dietary supplements marketed to older adults, and (2) to develop evidence-based best practices for the effective use of nutritional information while maintaining a neuropsychologist’s scope of care.
Participants and Methods:
A literature review was conducted of peer-reviewed articles from 2006 to 2022 in the following databases: PubMed and Google Scholar. Recommendations were constructed by identifying and analyzing emerging themes across the identified articles. Keywords included neuropsychology, nootropics, natural supplement use, aging, Vitamin E, Vitamin D, Phosphatidylserine, feedback, cognition, older adults, and international.
Results:
Although supplement use and regulations may differ by country, current research suggests increased supplement use and increased inquiries to neuropsychologists (Armstrong & Postal, 2013; Aysin et al., 2021). The literature on the benefit of natural dietary supplements for older adults across the spectrum of cognitive decline is variable (Brownie, 2009; Haider et al., 2020). Commonly explored supplements such as Vitamin E, Vitamin D, Vitamin B12, and Phosphatidylserine have shown benefits in cognitive domains such as attention and memory for those experiencing mild cognitive impairment (Kang et al., 2022; La Fata et al., 2014; Richter et al., 2013; van der Schaft et al., 2013). Therefore, one alternative for defining the utility of supplement use may be a preventive lens for those with mild or emerging cognitive concerns (Health Quality Ontario, 2013; Joshi & Praticò, 2012). At the same time, older populations are at risk for malnutrition, which can negatively impact cognition (Wells & Dumbrell, 2006).
Conclusions:
While recognizing their clinical scope, informed neuropsychologists must stay up to date on the emerging literature on the efficacy of these supplements. Neuropsychologists should consider the following general guidelines when discussing recommendations with older adult clients with varying degrees of cognitive impairment. For example, neuropsychologists should approach alternative treatments as an exploration of the possible risks, costs, and benefits in light of evidence-based research while balancing the client’s need for hope (Armstrong & Postal, 2013). Neuropsychologists should also have increased awareness of malnutrition screening in this population (Gestuvo & Hung, 2012; Wells & Dumbrell, 2006). Other practices should include ongoing consultation and referral to a nutritionist, or follow-up with the client’s primary care physician for further assistance. With these guidelines, neuropsychologists can be better equipped to provide ethical recommendations that help clients become informed consumers.
Older adults and individuals with mild cognitive impairment (MCI) experience changes in the ability to self-monitor errors. Difficulties with accurate self-monitoring of errors can negatively impact everyday functioning. Without proper error recognition, individuals will continue to make mistakes and will not implement compensatory strategies to prevent future errors. A modified Sustained Attention to Response Task (SART; Robertson et al., 1997) has previously been used to assess self-monitoring via the number of errors individuals were able to recognize. The current study examined the relationship of this laboratory-based error-awareness task with everyday functional abilities as assessed by informants and with real-world error-monitoring. We hypothesized that self-monitoring would be significantly related to real-world error-monitoring and everyday functional abilities.
Participants and Methods:
135 community-dwelling participants (110 healthy older adults [HOA] and 25 individuals with MCI) were included from a larger parent study (mean age = 67.73, SD = 8.89). A modified SART was used to measure error-monitoring and to create a self-monitoring variable by dividing accurately recognized errors by the total number of errors. Participants also completed simple and complex everyday tasks of daily living (e.g., making lemonade, cooking oatmeal, cleaning, filling a medication pillbox) in a university campus apartment. Examiners coded both the number of errors committed and the self-corrections made during task completion. To examine real-world error awareness, total self-corrected errors were divided by the total number of errors. Knowledgeable informants (KI) completed the Everyday Cognition (ECog) scale, rating the participant on domains of memory, language, spatial abilities, planning, organization, and divided attention to capture changes in everyday function. Pearson correlations were used to examine the relationships between SART self-monitoring, real-world error-monitoring, and changes in everyday function as rated by informants.
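A minimal sketch of the ratio scores and correlations just described appears below; the counts and ECog values are invented stand-ins for the actual participant data.

```python
# Minimal sketch of the ratio scores and correlations described above; values are invented.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "sart_errors_recognized":  [8, 3, 12, 5, 9],
    "sart_errors_total":       [10, 9, 15, 11, 12],
    "rw_errors_selfcorrected": [4, 1, 6, 2, 5],
    "rw_errors_total":         [7, 6, 8, 9, 7],
    "ecog_total":              [1.2, 2.1, 1.0, 1.8, 1.3],   # higher = more functional change
})

# Self-monitoring = accurately recognized errors / total errors (SART and real-world versions).
df["sart_self_monitoring"] = df["sart_errors_recognized"] / df["sart_errors_total"]
df["rw_error_awareness"] = df["rw_errors_selfcorrected"] / df["rw_errors_total"]

# Pearson correlations, as in the abstract.
for x, y in [("sart_self_monitoring", "rw_error_awareness"),
             ("sart_self_monitoring", "ecog_total")]:
    r, p = stats.pearsonr(df[x], df[y])
    print(f"{x} vs {y}: r = {r:.2f}, p = {p:.3f}")
```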
Results:
As self-monitoring scores on the SART increased, so too did real-world error awareness scores, r(133) = .18, p = .04. Higher self-monitoring scores on the SART were also significantly associated with better everyday functioning on the ECog total (r(96) = -.24, p = .02; higher ECog scores reflect greater functional change). Further, higher self-monitoring on the SART was related to better functioning within the ECog domains of everyday memory (r(96) = -.23, p = .02), everyday language (r(96) = -.24, p = .02), everyday spatial abilities (r(96) = -.23, p = .02), and everyday planning (r(96) = -.21, p = .04). SART self-monitoring was not significantly related to the everyday organization or divided attention domains.
Conclusions:
The findings revealed that better error-monitoring performance on a laboratory-based task was related to better error-monitoring when completing real-world activities, and less overall impairment in everyday function as reported by informants. Results support the ecological validity of the SART error-monitoring score and suggest that error-monitoring performance on the modified SART may have important clinical implications in predicting real-world error-monitoring and everyday function. Future research should consider how SART error-monitoring may predict everyday functioning, over and above other clinical measures.
In the wake of the national controversy over demographically corrected normative comparisons used in neuropsychological assessment, the field finds itself in need of adopting better practices and providing stronger instruction in norm selection and application when assessing underrepresented populations. Neuropsychologists must employ critical thinking within their clinical decision-making that takes into account patient demographics, analysis of the measures themselves, normative samples, and statistical adjustments employed in normative studies. Not doing so may result in erroneous diagnostic conclusions, exposing underserved patient populations to poor or harmful clinical care and even misdiagnosis. The following case series presents several demographic considerations illustrating how selection and application of different (at times, ill-fitting) normative reference groups can affect treatment outcomes in the Latinx community. We examined the performance of various published norms when applied to monolingual and bilingual Spanish speakers.
Participants and Methods:
This study samples three demographically diverse (i.e., education, age, and sex) clinical cases and applies regression-based and stratified norms to raw scores to demonstrate the possible differential outcomes when using different reference groups. One example is Ms. Congeniality, a 69-year-old, Spanish and English bilingual woman with 12 years of education who presented for a third reevaluation at our clinic due to progressive memory loss. Her prior Spanish-language profiles demonstrated impaired confrontation naming and steadily decreasing letter fluency over the past 10 years.
Results:
Her performance on semantic fluency (i.e., animal naming) showed relative stability based on her raw scores (10 in 2012, 11 in 2016, and 12 in 2022). Using the Neuropsi A&M norms, which stratify performance across nine age ranges between ages 6-85 and three education ranges between 0-10+ years, her performance over the past 10 years ranged from below the 1st percentile to the 9th percentile (1%, 1%, and 9%, respectively). However, using the NP-NUMBRS norms, which use regression-based continuous age (19-60) and education (0-20) predictors of test performance, her scores corresponded to steadily improving performance (8%, 28%, and 86%). This qualitative comparison thus demonstrates a likely overcorrection for individuals of advanced age when using norms based on samples that are a poor fit because they lack representation of older adults, as in NP-NUMBRS, and a possible undercorrection when using norms with overly broad education stratifications (e.g., 10-22 years, as in Neuropsi).
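The difference between the two norming approaches can be illustrated schematically. In the sketch below, the lookup table and regression coefficients are invented for demonstration only; they are not the published Neuropsi A&M or NP-NUMBRS parameters.

```python
# Schematic contrast between stratified and regression-based norming. The lookup table and
# regression coefficients below are invented for demonstration; they are NOT the published
# Neuropsi A&M or NP-NUMBRS parameters.
from scipy import stats

def stratified_percentile(raw, norm_table, age, education):
    """Look up the mean/SD of the matching age-by-education stratum, then convert to a percentile."""
    for (age_rng, edu_rng), (mean, sd) in norm_table.items():
        if age_rng[0] <= age <= age_rng[1] and edu_rng[0] <= education <= edu_rng[1]:
            return 100 * stats.norm.cdf((raw - mean) / sd)
    raise ValueError("No matching stratum")

def regression_percentile(raw, age, education, b0, b_age, b_edu, resid_sd):
    """Regression-based norm: percentile of the residual from a demographic prediction."""
    predicted = b0 + b_age * age + b_edu * education
    return 100 * stats.norm.cdf((raw - predicted) / resid_sd)

# Hypothetical stratum covering ages 65-69 and 10+ years of education.
norm_table = {((65, 69), (10, 22)): (16.0, 4.0)}
print(stratified_percentile(12, norm_table, age=69, education=12))
print(regression_percentile(12, age=69, education=12,
                            b0=2.0, b_age=0.10, b_edu=0.35, resid_sd=3.5))
```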
Conclusions:
Application of ill-fitting normative standards can have far-reaching implications for interpretation of neuropsychological test results. Moreover, this case series exemplifies the need for higher-order instruction in norm selection, specifically for underserved communities who run the risk of being misdiagnosed. Through case examples, this study underscores the importance of understanding the unique effects of different demographic corrections in the context of limited available normative reference groups. This abstract is the first illustration in a series of papers aimed at facilitating the decision-making process within the framework of socially responsible neuropsychological practice.
It is well established that capturing how an individual draws the Rey Complex Figure Task (RCF) is as important as assessing what is drawn (Rey, 1941; Osterrieth, 1944). Despite the development of multiple systems designed to measure these qualitative characteristics, there is still no systematic means of measuring adherence to the temporal-spatial heuristic that represents typical drawing practice in healthy, neurotypical adults (Visser, 1973; Hamby et al., 1993). This study sought to develop a system for scoring temporal-spatial adherence when drawing the figure, providing objective, continuous data.
Participants and Methods:
Fifty-three English-speaking adults (mean age 44.61 years, SD 12.48; 44 female) were recruited. Exclusion criteria included vision or hearing impairment not corrected by aids; neurodivergence; neurological or psychiatric diagnosis; and a history of cancer or brain injury. Participants completed the RCF copy phase as part of an extended neuropsychological battery. The RCF drawing process was recorded via video and a ball-point pen that digitally recorded the drawing. Order data for the 18 RCF elements (Osterrieth, 1944; Taylor, 1959) were recorded by two scorers and analysed via Principal Component Analysis (PCA) with an equimax rotation to identify elements typically drawn together by healthy, neurotypical adults. Using scoring methodology adapted from Geary et al. (2011), the extent to which participants consecutively drew the member elements of each factor, or 'strategy cluster', was calculated and recorded. Strategy Cluster Scores across the sample were examined to understand normative performance.
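A rough sketch of the factoring and cluster-scoring steps is given below, assuming the Python factor_analyzer package; the random matrix stands in for the 18-element drawing-order data, and the consecutiveness score is a placeholder rather than the exact Geary-adapted method.

```python
# Sketch of the order-data factoring step, assuming the factor_analyzer package; the random
# matrix stands in for the 18-element drawing-order data and the scoring rule is a placeholder.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
# Rows = participants; columns = drawing-order position (1-18) of each RCF element.
order_data = pd.DataFrame(rng.integers(1, 19, size=(53, 18)),
                          columns=[f"element_{i + 1}" for i in range(18)])

# Principal-components extraction with an equamax rotation, retaining four factors.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="equamax")
fa.fit(order_data)
print(pd.DataFrame(fa.loadings_, index=order_data.columns).round(2))

def strategy_cluster_score(order_row, cluster_elements):
    """Placeholder consecutiveness score: higher when a cluster's elements occupy
    adjacent drawing-order positions, lower when they are spread out."""
    positions = sorted(order_row[e] for e in cluster_elements)
    spread = positions[-1] - positions[0]
    gaps = max(1, 1 + spread - (len(cluster_elements) - 1))   # 1 when drawn fully consecutively
    return len(cluster_elements) / gaps

core_structure = ["element_1", "element_2", "element_3", "element_4"]   # hypothetical cluster
print(strategy_cluster_score(order_data.iloc[0], core_structure))
```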
Results:
Order data were examined for interrater reliability via Pearson's correlation coefficient, which was considered good (r² = 0.78, p < 0.001). PCA identified four factors, or 'strategy clusters', that were statistically robust and accounted for 67.34% of total variation. The strategy clusters were Core Structure (rectangle, diagonal, horizontal, vertical); Triangular Structure (triangle, horizontal in triangle, vertical in triangle, diamond); Internal Left-Hand Side (four horizontal lines, smaller rectangle, horizontal in top-left quad); and Internal Right-Hand Side (five lines, circle, vertical top-right quad, small triangle). The mean RCF Strategy Cluster Score was 6.23 (SD 1.94; possible range: 2.75 to 10). The spread of the data indicated that healthy neurotypical adults only partially observed a temporal-spatial heuristic rather than adhering to it strictly.
Conclusions:
Four strategy clusters were identified in which cluster members were typically drawn consecutively. RCF Strategy Cluster scoring was shown to measure the temporal-spatial heuristic objectively, providing continuous data that lends itself to clinical standardisation. Further, the study demonstrated that while healthy, neurotypical adults copy the RCF using a temporal-spatial heuristic, it is only partially adhered to. Traditionally, deviation from strict adherence to the four strategy clusters during drawing has been deemed indicative of cognitive dysregulation; however, our findings demonstrate a normal distribution of typical population performance. These findings have important implications for interpreting how RCF drawing strategy informs clinical assessment and diagnosis, as both very strict and very weak adherence to a temporal-spatial heuristic can be indicative of atypical function. The study supports this novel scoring system as a fast and reliable means of systematically measuring RCF Strategy Cluster adherence that, with further validation, could be adopted within clinical practice.
Prior studies have presented demographic adjustment as beneficial because it helps equalize, across demographic groups, the percentage of participants (recruited from the general population without prior diagnosis) who fell beneath the test impairment cutoff (e.g., Smith et al., 2008). This methodology ignores the possibility that group differences in those falling beneath an impairment cutoff could reflect cognitive impairment prevalence differences between demographic groups in the undiagnosed general population. Demographic group differences in cognitive test scores reflect a mixture of two categories of influences: measurement bias (item/test/examiner bias, language/cultural bias, stereotype threat, etc.) and factors which differentially increase the number of low scores in one group by increasing relative risk (RR) for cognitive impairment (biological aging processes, cognitive reserve, social determinants of health [SDoH], etc.). The current simulation study examined how the effect of demographic adjustment on the diagnostic accuracy of a hypothetical test (operationalized as the area under the curve [AUC] in an ROC analysis) varied as the mixture of influences which caused demographic differences in scores was varied.
Participants and Methods:
215,040 samples were randomly generated. Each sample consisted of two demographic groups, with Group 0 always representing the lower-scoring group. Across samples, Group 1's baseline risk of impairment and Group 0's relative risk were varied, and these determined the prevalence of cognitive impairment in the groups. Three facets of measurement bias were varied in the simulation: how much lower Group 0's average score was than Group 1's, the degree of non-homogeneity of variance between groups, and how much less reliable the measure was for Group 0. Additional parameters were included and varied to ensure the robustness of findings across a variety of situations. Samples reflected all possible combinations of all varied parameters. For each sample, a baseline AUC was calculated when impairment was regressed on the unadjusted test score. Test scores were then adjusted for demographic group, and the difference between adjusted and unadjusted AUC was calculated. This adjusted/unadjusted AUC difference was then regressed on the simulation parameters to quantify their relative influence.
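A minimal single-sample sketch of this simulation logic is shown below; all parameter values are arbitrary, and the within-group mean centering is a simplified stand-in for the study's demographic adjustment.

```python
# Minimal single-sample sketch of the simulation; parameter values are arbitrary and the
# within-group centering is a simplified stand-in for the demographic adjustment.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)                   # 0 = lower-scoring group, 1 = comparison

# Impairment status: Group 1 baseline risk, Group 0 risk scaled by the relative risk.
base_risk, relative_risk = 0.10, 1.5
impaired = rng.random(n) < np.where(group == 1, base_risk, base_risk * relative_risk)

# Test score: impairment lowers scores; measurement bias further lowers Group 0 scores.
bias_shift = 0.5                                     # mean reduction due to measurement bias
score = rng.normal(0, 1, n) - 1.0 * impaired - bias_shift * (group == 0)

# Diagnostic accuracy before and after demographic adjustment (lower score = more impaired).
auc_unadjusted = roc_auc_score(impaired, -score)
adj_score = score.copy()
for g in (0, 1):
    adj_score[group == g] -= score[group == g].mean()
auc_adjusted = roc_auc_score(impaired, -adj_score)
print(f"AUC unadjusted = {auc_unadjusted:.3f}, adjusted = {auc_adjusted:.3f}")
```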
Results:
The more Group 0's average score was reduced by measurement bias, the more improvement in AUC was seen after adjustment (β = 1.76). Trivial but significant main effects of variance non-homogeneity (β = .09), increased relative risk (β = -.08), and reduced reliability (β = .02) were also found, but more importantly, each of these predictors significantly interacted with Group 0 mean score reduction, such that higher relative risks (β = -1.22), lower reliability (β = .36), and higher variance (β = -.15) in Group 0 compared to Group 1 each reduced the association between Group 0 mean score reduction and improvement in AUC.
Conclusions:
Demographic adjustment only improves AUC when the mean reduction in scores due to measurement bias is sufficiently high while risk for impairment, test reliability and test score variances are sufficiently equivalent among the demographic groups. When this is not the case, demographic adjustment can be counter-productive, reducing the AUC of the test. We conclude by proposing a novel method for adjusting test scores.
The explosion of digital technology in the past decade has led to unprecedented possibilities for improving cognitive assessment and understanding brain health. Digital technology encompasses a multitude of devices, such as laptops and smartphones, used to collect health-related data. Settings can vary to include in-clinic, remote/virtual, or hybrid models of data collection. Data can be collected at a single time point or over a continued period of time. Furthermore, the particular combination of devices, settings, and methods of collecting digital data becomes even more distinctive against the backdrop of the 'purpose' for conducting the digital study. This symposium, consisting of four abstracts, brings together digital studies with distinct devices, methodologies, settings, and purposes. The topics range from how smartphone-based assessments can be applied to understand the interaction between day-to-day variability in sleep and cognition, to the use of computerized testing to investigate the associations between cognitive performance and markers of brain pathology (e.g., amyloid and tau status), to understanding cognition from an open-source smartphone application that passively and continuously captures sensor data including global positioning system trajectories, to the development and validation of an online simulated money-management credit card task, and to determining the effects of cognitive rehabilitation delivered via digital technology on cognition, neuropsychiatric symptoms, and memory strategies.
Compliance with safety precautions plays a significant role in containing a pandemic. On a personal level, one critical precaution is disclosing one's sickness status to the people one comes into direct contact with. Yet the factors governing this personal decision remain uncertain. This study examined age-related differences across adulthood in (i) the likelihood of disclosing symptoms of sickness (LDSS) during the COVID-19 pandemic, (ii) the level of COVID-19-associated anxiety (CAA), and (iii) the relationship between LDSS and CAA.
Participants and Methods:
Data were obtained from a large-scale survey, “Measuring Worldwide COVID-19 Attitudes and Beliefs” (Fetzer et al., 2020). Retained data included sociodemographic characteristics, number of chronic conditions, and self-rated quality of health for the USA sample (n=11,445), which we stratified by age into five groups (18-29 years old, n=2065; 30-39, n=3765; 40-49, n=2463; 50-59, n=1760; 60+, n=1392). Disclosure of sickness was measured with the statement “in the past week if I had exhibited symptoms of sickness, I would have immediately informed the people around me”, which participants self-rated on a scale from 0 (“does not apply at all to me”) to 100 (“applies very much to me”). We computed the LDSS score with thresholds: ≤50, unlikely/uncertain; >50, likely; 100, certain to disclose. CAA symptoms were measured with the following statements, which participants self-rated on a scale from 1 (“does not apply at all”) to 5 (“strongly applies”): I am nervous when I think about current circumstances; I am calm and relaxed; I am worried about my health; I am worried about the health of my family members; I am stressed about leaving my house. ANOVA with Bonferroni post-hoc tests compared LDSS and CAA between the age groups. Multivariate regression (accounting for gender, education, self-rated health, and number of chronic conditions) examined the LDSS-CAA relationship.
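A hypothetical sketch of the LDSS threshold coding and the age-group comparison follows; the data are simulated and the Bonferroni post-hoc tests are omitted.

```python
# Hypothetical sketch of the LDSS threshold coding and age-group ANOVA; data are simulated.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "age_group": rng.choice(["18-29", "30-39", "40-49", "50-59", "60+"], size=500),
    "ldss_raw": rng.integers(0, 101, size=500),       # 0-100 disclosure rating
})

# Threshold coding: <=50 unlikely/uncertain, 51-99 likely, 100 certain to disclose.
df["ldss_cat"] = pd.cut(df["ldss_raw"], bins=[-1, 50, 99, 100],
                        labels=["unlikely/uncertain", "likely", "certain"])

# One-way ANOVA on the raw rating across age groups (Bonferroni post-hocs omitted).
groups = [g["ldss_raw"].to_numpy() for _, g in df.groupby("age_group")]
F, p = stats.f_oneway(*groups)
print(f"F = {F:.2f}, p = {p:.3f}")

# Proportion of each disclosure category within each age group.
print(pd.crosstab(df["age_group"], df["ldss_cat"], normalize="index").round(2))
```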
Results:
Age groups were comparable in gender (∼40% males), education (∼17 years of education), and relationship status (∼65% married/cohabitating). Most participants rated their own health as good and reported one chronic condition. LDSS increased with age, F(df=4)=35.552 (p<0.001), with 72% of the youngest vs. 85% of the oldest adults indicating certainty about disclosing sickness status. Anxiety about one's own health increased with age, F(df=4)=7.319 (p<0.001), while anxiety about the health of family members decreased with age, F(df=4)=25.398 (p<0.001). Middle-aged adults showed the highest anxiety related to thinking about the current circumstances, F(df=4)=10.476 (p<0.001), and feeling stressed about leaving their own house, F(df=4)=6.368 (p<0.001). LDSS was positively related to anxiety about the health of family members and/or feeling stressed about leaving one's own house in young and middle-aged adults (B=0.042, p=0.001, CI95%=0.017-0.068), but was not related to any CAA symptoms in adults aged 60+.
Conclusions:
This study suggests that people may become more likely to disclose their sickness status as they age and may be prone to different CAA symptoms across life stages. The results further indicate that distinct CAA symptoms can play a role in LDSS in young and middle adulthood but may lose significance in older age. Acknowledgement of these diverse mechanisms can inform clinical practice dedicated to individuals with illness anxiety and can help develop age-targeted campaigns that promote compliance with safety precautions.
Neuromelanin imaging is an emerging biomarker for Parkinson's disease (PD), as it captures degeneration of the midbrain, a process associated with the motor symptoms of the disease. Currently, it is unknown whether this degeneration also contributes to cognitive dysfunction in PD beyond the dysfunction associated with fronto-subcortical systems, because quantitative examination of substantia nigra (SN) degeneration has only recently become possible.
In the current study, we examine whether neuromelanin signal is associated with broader cognitive dysfunction in PD patients with varying degrees of cognitive impairment: PD with normal cognition (PD-NC), PD with mild cognitive impairment (PD-MCI), and healthy controls (HC).
Participants and Methods:
11 PD-NC, 16 PD-MCI, and 14 age- and sex-matched healthy controls (HC) participated in the study. PD participants were diagnosed with MCI based on the Movement Disorders Society Task Force Level II (comprehensive) assessment. In addition, all participants underwent an MRI scan that included a T1-weighted sequence and a neuromelanin-sensitive (NM-MRI) sequence. The contrast-to-noise ratio of the substantia nigra pars compacta (SNc) was calculated, and a distribution-corrected z-score was used to identify the number of extrema voxels for each individual, reflecting the number of voxels exhibiting significant degeneration (extrema_count). An analysis of covariance (ANCOVA) was used to evaluate group differences between HC, PD-NC, and PD-MCI in the extrema_count, accounting for age, sex, and education. A multiple regression with extrema_count as the dependent variable, adjusting for age, sex, and education, was conducted for each cognitive variable.
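A rough sketch of the contrast-to-noise and extrema-count idea is given below; the reference-region approach, robust z-scoring, and threshold are illustrative assumptions, not the exact published NM-MRI pipeline.

```python
# Rough sketch of the contrast-to-noise and extrema-count idea; the reference region,
# robust z-scoring, and threshold here are illustrative assumptions, not the published pipeline.
import numpy as np

rng = np.random.default_rng(3)
snc_voxels = rng.normal(1.05, 0.10, size=400)        # NM-MRI signal in SNc voxels
reference = rng.normal(1.00, 0.08, size=2000)        # signal in a reference region

# Contrast-to-noise ratio of each SNc voxel relative to the reference region.
cnr = (snc_voxels - reference.mean()) / reference.std()

# Distribution-corrected z-scores via a robust (median/MAD) standardization,
# then count low-signal voxels as "extrema" (a proxy for degenerated voxels).
mad = np.median(np.abs(cnr - np.median(cnr)))
z = (cnr - np.median(cnr)) / (1.4826 * mad)
extrema_count = int(np.sum(z < -1.96))
print("extrema_count =", extrema_count)
```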
Results:
A significant main effect of group (F(2, 33) = 33.548; p < 0.001) indicated that PD-NC (21.55 ± 12.57) and PD-MCI (43.64 ± 32.84) patients exhibited significantly greater extrema_counts relative to HC (3.36 ± 3.61; both p < 0.001). Regression results indicated that higher extrema_counts were associated with worse cognitive performance across cognitive domains, including working memory (Digit Span Backward; R² = .357, F(1,20) = 5.295, p = .032) and verbal learning (Hopkins Verbal Learning Test - Revised, Trials 1 to 3; R² = .432, F(1,20) = 5.819, p = .026).
Conclusions:
PD patients (PD-NC and PD-MCI) exhibited decreased neuromelanin in the SNc relative to healthy controls, confirming the ability of the NM-MRI sequence to differentiate PD from HC. There was no significant difference in SNc neuromelanin levels between PD-NC and PD-MCI patients; however, this may be due to the small sample size. In addition, significant SNc degeneration was associated with worse cognitive performance on tasks associated with working memory and executive functioning. These results warrant further examination of the role of the SN in PD patients with differing levels of cognitive impairment.
We previously reported the impact of hormonal changes during menopause on ADHD and associated symptoms. Here we provide findings from an expanded sample limited to those 46 and older.
Participants and Methods:
Information was obtained from a reader survey sponsored by ADDitude Magazine. Responses were received from 3117 women, of whom 2653 were 46 or older. Analyses were limited to this older group, since the mean age of perimenopause onset is around 47 in the general population. The final sample ranged in age from 46 to 94 (mean=53), and 85% had been diagnosed with ADHD. Respondents were asked to indicate their age at diagnosis and the impact of 11 different symptoms or associated problems of ADHD at each of 5 time intervals: 0-9 years, 10-19 years, 20-39 years, 40-59 years, and 60+ years. Co-morbidities were also considered.
Results:
Changes in ADHD Symptoms: Sixty-one percent reported that ADHD had the greatest impact on their daily lives between 40 and 59 years of age. The largest group of respondents (43%) were first diagnosed between ages 41 and 50. The reported prevalence of inattention, disorganization, poor time-management, emotional dysregulation, procrastination, impulsivity, and poor memory/brain fog increased over the life span. More than half indicated that a sense of overwhelm, brain fog and memory issues, procrastination, poor time-management, inattention/distractibility, and disorganization had a 'life altering impact' during the critical menopausal/perimenopausal window. By contrast, complaints about significant hyperactivity, impulsivity, social struggles, and perfectionism remained fairly constant over the lifespan and were not among the most common complaints (i.e., endorsed by only 25% to 35% of the sample). Interestingly, while 61% reported that ADHD had its greatest impact on daily life between ages 40 and 59, only 3% reported the same for age 60 and above. Thus, in this expanded sample the first diagnosis of ADHD was most common in adulthood and peaked in the perimenopausal years. ADHD was also again most disruptive during the perimenopausal/menopausal window. This shift was most pronounced for symptoms of poor memory/brain fog and 'feeling overwhelmed.' Symptoms either diminished or respondents adjusted as they moved out of the transition years.
Comorbid Symptoms: Anxiety and depression were most common (73% and 63%, respectively) consistent with the literature. Also elevated, but much less frequent here, were learning, eating and sensory processing disorders (i.e., 10%-13% each). Thus, depression and anxiety may be the most frequent correlates of an ADHD diagnosis, irrespective of age of onset.
Conclusions:
Hormonal change during the climacteric is often associated with a worsening of cognitive complaints. Such increased complaints can lead to a first diagnosis of ADHD during this period, as well as a worsening of symptoms in those previously diagnosed. Moreover, this hormonal shift may underlie the diagnosis in a subset of the individuals currently characterized as having adult-onset ADHD. The lessening of complaints in those ages 60 and above raises questions regarding the underlying mechanisms for this change (e.g., physiologic adaptation, compensation, or decreased life demands).
Depression is a common problem among older adults and is further exacerbated by poor treatment response. The vascular depression hypothesis suggests that white matter hyperintensities (WMH) and executive dysfunction are main contributors to treatment non-response in older adults. While a previous meta-analysis has demonstrated the effects of executive dysfunction on treatment response, similar techniques have not been used to address the relationship between WMH and treatment response. Multiple commonly cited studies demonstrate a relationship between WMH and treatment response; however, the literature on the predictive nature of this relationship is quite inconsistent. Additionally, many studies supporting this relationship are not randomized controlled studies. Critically examining data from well-controlled treatment-outcome studies using meta-analytic methods allows for an aggregate evaluation of the relationship between WMH burden and treatment response.
Participants and Methods:
A MEDLINE search was conducted to identify regimented antidepressant treatment trials contrasting white matter hyperintensity burden between remitters and non-remitters. Only regimented treatment trials for depressed outpatients aged 50 and older that had a pre-treatment measure of WMH burden and a remitter/non-remitter comparison were included. Hedges' g was calculated for each trial's treatment effect. A Bayesian meta-analysis was used to estimate an aggregate effect size.
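The per-trial effect-size step can be sketched as below, using the standard Hedges' g small-sample correction; the trial summary statistics are invented, and the simple inverse-variance average at the end is only a stand-in for the Bayesian aggregation actually used.

```python
# Per-trial Hedges' g with small-sample correction; trial statistics are invented.
# The inverse-variance average at the end is only a stand-in for the Bayesian aggregation.
import numpy as np

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference (group 1 minus group 2) with Hedges' correction."""
    pooled_sd = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    return (1 - 3 / (4 * (n1 + n2) - 9)) * d

def var_hedges_g(g, n1, n2):
    """Approximate sampling variance of Hedges' g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

# Hypothetical WMH-burden summaries (mean, SD, n) for non-remitters vs remitters per trial.
trials = [(5.5, 2.4, 25, 4.2, 2.0, 30),
          (4.9, 2.1, 35, 3.8, 1.8, 40),
          (5.6, 2.5, 20, 5.0, 2.2, 22)]

gs, vs = [], []
for m1, s1, n1, m2, s2, n2 in trials:
    g = hedges_g(m1, s1, n1, m2, s2, n2)
    gs.append(g)
    vs.append(var_hedges_g(g, n1, n2))
w = 1 / np.array(vs)
print("Per-trial g:", np.round(gs, 2), "| weighted mean g:", round(float(np.sum(w * gs) / w.sum()), 2))
```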
Results:
Eight studies met inclusion criteria. The average log odds ratio differed significantly from zero (.25, SE=.12, p=.019), indicating a significant effect of WMH burden on antidepressant remission status.
Conclusions:
The purpose of this meta-analysis was to rigorously evaluate randomized controlled trials to determine the relationship between WMH burden and antidepressant treatment response. Findings revealed that WMH burden predicted antidepressant remission; that is, individuals with high WMH burden are less likely to meet remission criteria compared to individuals with low WMH burden. Results suggest that it may be important to consider vascular depression as a distinct treatment target for alternative interventions.
There is increasing interest in examining a general psychopathology factor (p factor) in children and adolescents. In previous work, the relationship between the p factor and cognition in youth has largely focused on general intelligence (IQ) and executive functions (EF). Another cognitive construct, processing speed (PS), is dissociable from these cognitive constructs, but has received less research attention despite being related to many different mental health symptoms. This study aimed to examine the association between a latent processing speed factor and the p factor in youth.
Participants and Methods:
The present sample included 795 youth, ages 11-16, from the Colorado Learning Disability Research Center (CLDRC) sample. Confirmatory factor analyses tested multiple p factor models, with the primary model being a novel second-order, multi-reporter p factor in which caregivers reported on externalizing symptoms (oppositional defiant disorder and conduct disorder modules from the Diagnostic Interview for Children and Adolescents [DICA]; aggression, delinquency, and attention problems subscales from the Child Behavior Checklist; and inattentive and hyperactive/impulsive subscales from the Disruptive Behavior Rating Scale) and youth self-reported on internalizing symptoms (Child Depression Inventory, generalized anxiety module from the DICA, and withdrawn, anxious/depression, and somatic subscales from the Youth Self Report). We then tested the correlation between the p factor and a latent PS factor. The latent PS factor was composed of WISC Symbol Search, WISC Coding, the Colorado Perceptual Speed Test, and the Identical Pictures Test. Three secondary p factor models were examined for comparison to previous literature: (1) a bifactor, multi-reporter model, (2) a second-order model with caregiver-report only, and (3) a bifactor model with caregiver-report only.
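A heavily simplified sketch of the latent-correlation step is given below, assuming the Python semopy package; the model collapses the second-order, multi-reporter p factor into two composite indicators and uses simulated data, so it illustrates the modeling syntax rather than the study's actual model.

```python
# Simplified latent-correlation sketch, assuming the semopy package; data are simulated.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(4)
n = 795
ps_latent = rng.normal(size=n)                       # latent processing speed
p_latent = -0.4 * ps_latent + rng.normal(size=n)     # latent general psychopathology

df = pd.DataFrame({
    "symbol_search":      ps_latent + rng.normal(scale=0.6, size=n),
    "coding":             ps_latent + rng.normal(scale=0.6, size=n),
    "perceptual_speed":   ps_latent + rng.normal(scale=0.6, size=n),
    "identical_pictures": ps_latent + rng.normal(scale=0.6, size=n),
    "externalizing":      p_latent + rng.normal(scale=0.6, size=n),
    "internalizing":      p_latent + rng.normal(scale=0.6, size=n),
})

# Lavaan-style measurement model: PS and a (collapsed) p factor, with their covariance.
desc = """
PS =~ symbol_search + coding + perceptual_speed + identical_pictures
P  =~ externalizing + internalizing
PS ~~ P
"""
model = Model(desc)
model.fit(df)
print(model.inspect())                               # includes the PS-P covariance estimate
```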
Results:
There was a significant negative correlation between the p factor and PS (r=-0.42, p<.001), indicating that slower processing speed is associated with higher general mental health symptoms. This finding was robust across models that used different raters (youth and caregiver-report vs. caregiver-report only) and modeling approaches (second-order vs. bifactor). This association is stronger than previously reported associations with IQ or EF in the p factor literature. Further, in this sample, we found that the association between PS and the p factor was robust to covarying for general cognition, whereas the correlation between general cognition and the p factor was fully accounted for by PS.
Conclusions:
Our findings indicate that PS is related to general psychopathology symptoms, expanding the existing literature relating PS to specific, distinct disorders by showing that PS is related to what is shared across psychopathology. As cognition and psychopathology both undergo significant development across childhood and adolescence, elucidating neurodevelopmental mechanisms that relate to risk for a broad range of symptoms may be critical to informing early intervention and prevention approaches. This research points to processing speed as an important transdiagnostic construct that warrants further attention and exploration across development.
Advances in technologies continue to offer new opportunities for understanding brain functioning and brain-behavior interactions. The clinical application of these technologies continues to require an understanding of both the benefits and limitations of integrating these novel methodologies. This workshop will provide an overview of several emerging and established technologies in neuropsychological assessment and rehabilitation. This will include discussion of portable brain imaging technologies, neuromodulation technologies, virtual reality simulation, and various brain-computer interface devices. In addition, we will discuss how clinical application of these novel devices offers opportunities for growing knowledge in new areas of analysis (e.g., machine learning analysis) and interdisciplinary collaborations. Upon conclusion of this course, learners will be able to:
1. Identify 3 technologies that are currently employed in neuropsychological research
2. Assess the strengths and weaknesses of novel technologies for brain-behavior interfaces
3. Examine current clinical applications of neuromodulation technologies and portable brain-imaging technologies
Several studies have found a bilingualism advantage on executive functioning abilities such as cognitive flexibility, inhibitory control, switching, and working memory in typically developing populations (Grote et al., 2015; Foy & Mann, 2014). However, some studies have found deficits in inhibitory control and switching for bilingual individuals with Attention-Deficit/Hyperactivity Disorder (ADHD) compared to monolingual individuals and control groups (Bialystok et al., 2017; Mor et al., 2015). These authors suggest that this disadvantage is due to the burden of managing two language systems, which compounds the executive dysfunction seen in ADHD. The current study aims to examine whether there is a bilingualism advantage in other aspects of executive function, including inhibitory control, planning, problem solving, switching, and working memory, among children and adults diagnosed with ADHD.
Participants and Methods:
The medical records of 170 patients evaluated in an outpatient neuropsychology clinic from 2018 to 2022 were reviewed. Sixty participants diagnosed with ADHD, between the ages of 6 and 46 (61.67% male), comprised the final sample. Forty-one were monolingual and 19 were bilingual or multilingual. Language status was based upon patient or parental report. Outcomes on various direct and indirect measures of executive function were examined.
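A hypothetical sketch of the adjusted group comparison is shown below, assuming statsmodels; the outcome variable, data values, and coding are illustrative only.

```python
# Hypothetical sketch of the adjusted comparison, assuming statsmodels; illustrative data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 60
df = pd.DataFrame({
    "digit_span_backward": rng.normal(9, 3, n),      # example working-memory score
    "bilingual": rng.integers(0, 2, n),              # 1 = bilingual/multilingual
    "age": rng.integers(6, 47, n),
    "male": rng.integers(0, 2, n),
})

# Linear regression of the score on language status, adjusting for age and sex.
model = smf.ols("digit_span_backward ~ bilingual + age + male", data=df).fit()
print(model.summary().tables[1])                     # bilingual coefficient = adjusted group difference
```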
Results:
Linear regression models, adjusting for age and sex, revealed a significant bilingual advantage on the following measures: Wechsler Intelligence Scale for Children - Fifth Edition (WISC-V) and Wechsler Adult Intelligence Scale - Fourth Edition (WAIS-IV) Digit Span Backwards and Digit Span Sequencing, WISC-V Picture Span, and Behavior Rating Inventory of Executive Function, 2nd Edition (BRIEF-2) Parent-Report Emotion Regulation Index (ERI). There were no significant differences in scores between monolinguals and bilinguals on the following measures: Delis-Kaplan Executive Function System (D-KEFS) Color-Word Interference Inhibition versus Combined Naming Contrast Score and Inhibition/Switching versus Inhibition Contrast Score, D-KEFS Trail Making Number-Letter Switching versus Combined Number Sequencing and Letter Sequencing Contrast Score, A Developmental Neuropsychological Assessment, 2nd Edition (NEPSY-2) Naming versus Inhibition Contrast Score and Switching versus Inhibition Contrast Score, Wisconsin Card Sort Task Learning to Learn Index, BRIEF-2 Parent-rated Behavioral Regulation Index (BRI), Cognitive Regulation Index (CRI), and Global Executive Composite (GEC), BRIEF-2 Self-rated BRI, ERI, CRI, and GEC, and BRIEF Adult Version BRI, Metacognitive Index, and GEC.
Conclusions:
Bilingual status is associated with stronger auditory and visual working memory among people with ADHD, but not with stronger inhibitory control, switching, planning, or problem solving skills. At the same time, there were no significant differences between monolingual and bilingual ADHD patients on BRIEF parent- or self-rated behavioral or cognitive dysregulation. Our results suggest that bilingualism may confer an advantage in some aspects of executive function among a population with weak attention and executive function skills more broadly. Furthermore, we did not find any type of disadvantage for those who are bilingual. Future studies should examine whether lower parental ratings of emotion dysregulation among ADHD patients who are bilingual are due to bilingual children’s better ability to adapt to different situations or cultural differences in parenting practices.
During the COVID-19 pandemic, the Oral Trail Making Test (O-TMT) was frequently used as a telehealth-compatible substitute for the written version of the Trail Making Test (W-TMT). There is significant debate among neuropsychologists about the degree to which the O-TMT measures the same cognitive abilities as the W-TMT (i.e., processing speed for part A and set-shifting for part B). Given the continued use of the O-TMT, especially for patients with fine-motor or visual impairments, we examined how O-TMT and W-TMT scores were correlated in patients with movement disorders.
Participants and Methods:
Between April 2021 and July 2022, thirty individuals with movement disorders (n=27 idiopathic Parkinson's disease [PD]; n=1 drug-induced PD; n=1 progressive supranuclear palsy [PSP]; n=1 possible PSP) completed in-person neuropsychological evaluations at the Emory Brain Health Center in Atlanta, GA. The patients were on average 71.3 years old (SD=7.5 years), had 16 years of education (SD=2.8 years), and the majority were non-Hispanic White (n=27 White; n=3 African American) and male (n=17). In addition to other neuropsychological measures, these patients completed both the O-TMT and the W-TMT. O-TMT and W-TMT administration was counterbalanced across patients and took place thirty minutes apart. Raw scores (i.e., time in seconds) to complete O-TMT and W-TMT part A and part B, as well as discrepancy scores (part B - part A), were used for statistical analysis; a raw score of 300 seconds was assigned when a participant could not complete that section of the O-TMT or W-TMT. Given the non-normal distribution of the data, Spearman correlations were performed between O-TMT and W-TMT scores.
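A small sketch of the scoring conventions and correlations just described follows, using invented completion times.

```python
# Sketch of the scoring conventions and Spearman correlations described above; times are invented.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "otmt_a": [28, 35, np.nan, 41, 30],  "otmt_b": [90, np.nan, 120, 150, 85],
    "wtmt_a": [40, 52, 61, 47, 38],      "wtmt_b": [110, np.nan, 160, 170, 95],
})

# Assign 300 seconds when a section could not be completed.
df = df.fillna(300)

# Discrepancy scores (part B minus part A), then Spearman correlations between versions.
df["otmt_disc"] = df["otmt_b"] - df["otmt_a"]
df["wtmt_disc"] = df["wtmt_b"] - df["wtmt_a"]
for oral, written in [("otmt_a", "wtmt_a"), ("otmt_b", "wtmt_b"), ("otmt_disc", "wtmt_disc")]:
    rho, p = stats.spearmanr(df[oral], df[written])
    print(f"{oral} vs {written}: rho = {rho:.2f}, p = {p:.3f}")
```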
Results:
Ten patients were unable to perform W-TMT part B. Of these, seven patients could also not perform O-TMT part B. Part A scores on O-TMT and W-TMT were not significantly correlated (rs = 0.27, p = .15). In contrast, part B scores were strongly correlated, such that slower performances on O-TMT part B corresponded with slower performances on W-TMT part B (rs = 0.82, p < .001). Discrepancy scores for the O-TMT and W-TMT were also significantly correlated, such that larger part A and part B discrepancy scores on O-TMT corresponded with larger discrepancy scores on W-TMT (rs = 0.78, p <.001). The pattern of results was replicated when examining these correlations only in patients who could complete all parts of O-TMT and W-TMT (n=19); part A scores of the O-TMT and W-TMT were again not correlated (rs = -0.20, p = .41), whereas the part B scores (rs = 0.54, p = .02) and discrepancy scores (rs = 0.59, p = .008) were significantly correlated.
Conclusions:
Results suggest that an oral version of the Trail Making Test shows promise as an alternative to the written version for assessing set shifting abilities. These findings are limited to patients with movement disorders, and future research with diverse patient populations could help determine whether O-TMT can be generalized to other patient groups. Additionally, future research should examine whether O-TMT scores obtained via virtual testing correspond with W-TMT scores obtained in-person.
Graph products of cyclic groups and Coxeter groups are two families of groups that are defined by labelled graphs. The family of Dyer groups contains both of these families and provides a framework for studying them in a unified way. This paper focuses on the spherical growth series of a Dyer group D with respect to the standard generating set. We give a recursive formula for the spherical growth series of D in terms of the spherical growth series of its standard parabolic subgroups. As an application, we obtain the rationality of the spherical growth series of a Dyer group. Furthermore, we show that the spherical growth series of D is closely related to the Euler characteristic of D.
While Parkinson’s disease (PD) is traditionally known as a movement disorder, cognitive decline is one of its most debilitating and common non-motor symptoms. Cognitive profiles of individuals with PD are notably heterogeneous (Goldman et al., 2018). While this variability may arise from the disease itself, other factors might play a role. Greater anticholinergic medication use has been linked to worse cognition in those with PD (Fox et al., 2011; Shah et al., 2013). However, past studies on this topic had small sample sizes, limited ranges of disease duration, and relied on cognitive screeners. Thus, this study aimed to examine this question within a large clinical sample, using a more comprehensive neuropsychological battery. We hypothesized that higher anticholinergic medication usage would relate to worse cognitive performance, particularly memory.
Participants and Methods:
Participants included 491 nondemented individuals with PD (age m=64.7, SD=9.04 years; education m=15.01, SD=2.79; 71.9% male; 94.3% non-Hispanic White) who underwent a comprehensive neuropsychological assessment at the UF Fixel Institute's movement disorders program. Medications at the time of the neuropsychological evaluation were identified from chart review and scored based on anticholinergic properties using the Magellan Anticholinergic Risk Scale (Rudolph et al., 2008); each medication was scored from 0 (no load) to 3 (high load). The neuropsychological battery included measures across 5 cognitive domains: (1) executive function (Trails B, Stroop Interference, Letter Fluency), (2) verbal delayed memory (WMS-III Logical Memory and Hopkins Verbal Learning Test-Revised delayed recalls), (3) language (Boston Naming Test-II, Animal Fluency), (4) visuospatial skills (Judgment of Line Orientation, Face Recognition Test), and (5) attention/working memory (WAIS-III Digit Span Forward and Backward). The published normative scores for each task were converted into z-scores and averaged into a domain composite. Due to the non-normality of Magellan scores, Spearman correlations examined the relationship between each cognitive domain composite score and Magellan scores.
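An illustrative sketch of the anticholinergic-load scoring and composite/correlation steps is shown below; the per-medication ratings and test scores are invented and are not the Magellan Anticholinergic Risk Scale itself.

```python
# Illustrative sketch of the anticholinergic-load scoring and composite/correlation steps;
# the per-medication ratings and scores below are invented, not the Magellan scale itself.
import pandas as pd
from scipy import stats

ach_ratings = {"med_a": 0, "med_b": 1, "med_c": 3}            # hypothetical 0-3 ratings

def anticholinergic_load(med_list):
    """Sum the anticholinergic rating of each medication a patient is taking."""
    return sum(ach_ratings.get(med, 0) for med in med_list)

patients = pd.DataFrame({
    "meds": [["med_a"], ["med_b", "med_c"], [], ["med_c"], ["med_b"]],
    "lm_delay_z":   [0.3, -1.2, 0.8, -0.5, 0.1],   # WMS-III Logical Memory delayed recall (z)
    "hvlt_delay_z": [0.1, -0.9, 0.5, -1.1, 0.4],   # HVLT-R delayed recall (z)
})
patients["acb_score"] = patients["meds"].apply(anticholinergic_load)
patients["memory_composite"] = patients[["lm_delay_z", "hvlt_delay_z"]].mean(axis=1)

rho, p = stats.spearmanr(patients["acb_score"], patients["memory_composite"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```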
Results:
As predicted, higher Magellan scores were significantly associated with worse memory (r=-0.11, p=0.016), with a small effect size. There were no significant relationships between Magellan scores and the remaining cognitive domains (EF, language, visuospatial, attention).
Conclusions:
We found that greater anticholinergic burden was associated with worse performance on memory, but not other neuropsychological domains, in a large cohort of nondemented individuals with PD who underwent comprehensive assessment. This finding corresponds to previous literature in smaller PD cohorts. Though the effect size was low, this finding highlights the importance of monitoring anticholinergic burden in PD patients in order to minimize detrimental effects of medications on memory function. Future work should examine whether greater anticholinergic burden predicts future progression of memory decline.
Acknowledgement: Supported in part by the NIH, T32-NS082168
Despite associations between hypoglycemia and cognitive performance in cross-sectional and experimental studies (e.g., insulin clamp studies), few studies have evaluated this relationship in a naturalistic setting. This pilot study uses an ecological momentary assessment (EMA) design in adults with type 1 diabetes (T1D) to examine the impact of hypoglycemia and hyperglycemia, measured using continuous glucose monitoring (CGM), on cognitive performance, measured via ambulatory assessment.
Participants and Methods:
Twenty adults with T1D (mean age 38.9 years, range 26-67; 55% female; 55% bachelor's degree or higher; mean HbA1c = 8.3%, range 5.4%-12.5%) were recruited from the Joslin Diabetes Center at SUNY Upstate Medical University. A blinded Dexcom G6 CGM was worn during everyday activities while participants completed 3-6 daily EMAs using their personal smartphones. EMAs were delivered between 9 am and 9 pm for 15 days. EMAs included 3 brief cognitive tests developed by testmybrain.org and validated for brief mobile administration (Gradual Onset CPT d-prime, Digit Symbol Matching median reaction time, Multiple Object Tracking percent accuracy) and self-reported momentary negative affect. Day-level average scores were calculated for the cognitive and negative affect measures. Hypoglycemia and hyperglycemia were defined as the percentage of time spent with a sensor glucose value <70 mg/dL or >180 mg/dL, respectively. Daytime (8 am to 9 pm) and nighttime (9 pm to 8 am) glycemic excursions were calculated separately. Multilevel models estimated the between- and within-person associations between time spent in hypoglycemia or hyperglycemia (the night prior to, or the same day as, assessment) and cognitive performance (each cognitive test was modeled separately). To evaluate the effect of between-person differences, person-level variables were calculated as the mean across the study and grand-mean centered. To evaluate the effect of within-person fluctuations, day-level variables were calculated as deviations from these person-level means.
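A sketch of the within-/between-person decomposition and the multilevel model follows, assuming statsmodels MixedLM; the simulated data, variable names, and single predictor are placeholders for the study's fuller models.

```python
# Sketch of the person-mean centering and multilevel model, assuming statsmodels MixedLM;
# data, names, and the single predictor are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
rows = []
for pid in range(20):                                # 20 participants
    typical_hypo = rng.uniform(0, 10)                # person's typical % time <70 mg/dL
    for day in range(15):                            # 15 study days
        night_hypo = max(0.0, rng.normal(typical_hypo, 3))
        rt = 900 + 5 * night_hypo + rng.normal(0, 50)   # Digit Symbol Matching median RT
        rows.append({"pid": pid, "night_hypo": night_hypo, "dsm_rt": rt})
df = pd.DataFrame(rows)

# Between-person component: person mean of nighttime hypoglycemia, grand-mean centered.
person_mean = df.groupby("pid")["night_hypo"].transform("mean")
df["hypo_between"] = person_mean - person_mean.mean()
# Within-person component: each day's deviation from the person's own mean.
df["hypo_within"] = df["night_hypo"] - person_mean

model = smf.mixedlm("dsm_rt ~ hypo_within + hypo_between", df, groups=df["pid"]).fit()
print(model.summary())
```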
Results:
Within-person fluctuations in nighttime hypoglycemia were associated with daytime processing speed. Specifically, participants who spent a higher percentage of time in hypoglycemia than their average percentage the night prior to assessment performed slower than their average performance on the processing speed test (Digit Symbol Matching median reaction time, b = 94.16, p = 0.042), while same day variation in hypoglycemia was not associated with variation in Digit Symbol Matching performance. This association remained significant (b = 97.46, p = 0.037) after controlling for within-person and between-person effects of negative affect. There were no significant within-person associations between time spent in hyperglycemia and Digit Symbol Matching, nor day/night hypoglycemia or hyperglycemia and Gradual Onset CPT or Multiple Object Tracking.
Conclusions:
Our findings from this EMA study suggest that when individuals with T1D experience more time in hypoglycemia at night (compared to their average), they have slower processing speed the following day, while same-day hypoglycemia and hyperglycemia do not similarly impact processing speed performance. These results showcase the power of intensive longitudinal designs using ambulatory cognitive assessment to uncover novel determinants of cognitive variation in real-world settings, with direct clinical applications for optimizing cognitive performance. Future research with larger samples is needed to replicate these findings.
Mobile, valid, and engaging cognitive assessments are essential for detecting and tracking change in research participants and patients at risk for Alzheimer’s Disease and Related Dementias (ADRDs). This pilot study aims to determine the feasibility and performance of app-based memory and executive functioning tasks included in the mobile cognitive app performance platform (mCAPP), to remotely detect cognitive changes associated with aging and preclinical Alzheimer’s Disease (AD).
Participants and Methods:
The mCAPP includes three gamified tasks: (1) a memory task that involves learning and matching hidden card pairs and incorporates increasing memory load, pattern separation features (lure vs. non-lure), and spatial memory; (2) a Stroop-like task ("brick drop") with speeded word and color identification and response inhibition components; and (3) a digit-symbol coding-like task ("space imposters") with increasing pairs and incidental learning components. The cohort completed the NACC UDS3 neuropsychological battery, selected NIH Toolbox tasks, and additional cognitive testing sensitive to preclinical AD within six months of the mCAPP testing. Participants included thirty-seven older adults (60% female; age=72±4.4, years of education=17±2.5; 67% Caucasian, 30% Black/AA, 3% Multiracial) with normal cognition who are enrolled in the Penn Alzheimer's Disease Research Center (ADRC) cohort. Participants completed one in-person session and two weeks of at-home testing, with eight scheduled sessions, four in the morning and four in the afternoon. Participants also completed questionnaires and an interview about technology use, wore activity trackers to collect daily step and sleep data, and answered questions about mood, anxiety, and fatigue throughout the two weeks of at-home data collection.
Results:
The participants completed an average of 11 at-home sessions, with the majority choosing to play extra sessions. Participants reported high usability ratings for all tasks, and the majority rated the task difficulty as acceptable. On all mCAPP tasks, participant performance declined in accuracy and speed with increasing memory load and task complexity. mCAPP tasks correlated significantly with paper-and-pencil measures and several NIH Toolbox tasks (p<0.05). Examination of performance trends over multiple sessions indicates stabilization of performance within 4-6 sessions on memory mCAPP measures and 5-7 sessions on executive functioning mCAPP measures. Preliminary analyses suggest relationships between mCAPP measures and imaging biomarkers.
Conclusions:
Participants were willing and able to complete at-home cognitive testing and most chose to complete more than the assigned sessions. Remote data collection is feasible and well-tolerated. We show preliminary construct validity with the UDS3 and NIH Toolbox and test-retest reliability following a period of task learning and performance improvement and stabilization. This work will help to advance remote detection and monitoring of early cognitive changes associated with preclinical AD. Future directions will include further evaluation of the relationships between mCAPP performance, behavioral states, and neuroimaging biomarkers as well as the utility of detection of practice effects in identifying longitudinal change and risk for ADRD-related cognitive decline.
Preclinical Alzheimer disease (AD) has been associated with subtle deficits in memory, attention, and spatial navigation (Allison et al., 2019; Aschenbrenner et al., 2015; Hedden et al., 2013). There is a need for a widely distributable screening measure for detecting preclinical AD. The goal of this study was to examine whether self- and informant-reported change in the relevant cognitive domains, measured by the Everyday Cognition Scale (ECog; Farias et al., 2008), could represent robust clinical tools sensitive to preclinical AD.
Participants and Methods:
Clinically normal adults aged 56-93 (n=371) and their informants (n=366) completed the memory, divided attention, and visuospatial abilities (which assesses spatial navigation) subsections of the ECog. Reliability and validity of these subsections were examined using Cronbach's alpha and confirmatory factor analyses (CFA). The hypothesized CFA assumed a three-factor structure with each subsection representing a separate latent construct. Receiver operating characteristic (ROC) and area under the curve (AUC) analyses were used to determine the diagnostic accuracy of the ECog subsections in detecting preclinical AD, defined either by a cerebrospinal fluid (CSF) p-tau181/Aβ42 ratio >0.0198 or by hippocampal volume in the bottom tertile of the sample. Hierarchical linear regression was used to examine whether ECog subsections predicted continuous AD biomarker burden when controlling for depressive symptomatology, which has previously been associated with subjective cognition (Zlatar et al., 2018). Lastly, we compared the diagnostic accuracy of ECog subsections and neuropsychological composites assessing the same or similar cognitive domains (memory, executive function, and visuospatial ability) in identifying preclinical AD.
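A brief sketch of the reliability and diagnostic-accuracy steps appears below, computing Cronbach's alpha by hand and an AUC with scikit-learn; the item ratings and biomarker values are simulated, with only the 0.0198 cutoff taken from the text.

```python
# Sketch of the reliability and diagnostic-accuracy steps; simulated data, cutoff from the text.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a participants-by-items matrix (columns = items)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(7)
n = 371
latent = rng.normal(size=(n, 1))                                  # shared "memory change" signal
memory_items = pd.DataFrame(2.0 + latent + rng.normal(scale=0.5, size=(n, 8)))   # ECog-like items
ptau_ab_ratio = rng.normal(0.02, 0.005, size=n)                   # CSF p-tau181/Abeta42 ratio

print("alpha =", round(cronbach_alpha(memory_items), 2))

# Preclinical AD defined by the 0.0198 CSF ratio cutoff; AUC of the subsection mean score.
preclinical = (ptau_ab_ratio > 0.0198).astype(int)
print("AUC =", round(roc_auc_score(preclinical, memory_items.mean(axis=1)), 3))
```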
Results:
All self- and informant-reported subsections demonstrated appropriate reliability (α range = .71-.89). The three-factor CFA models were an adequate fit to the data and were significantly better than one-factor models (self-report χ²(3)=129.511, p<.001; informant-report χ²(3)=145.347, p<.001), suggesting that the subsections measured distinct constructs. Self-reported memory (AUC=.582, p=.007) and attention (AUC=.564, p=.036) were significant predictors of preclinical AD defined by the CSF p-tau181/Aβ42 ratio. Self-reported spatial navigation (AUC=.592, p=.022) was a significant predictor of preclinical AD defined by hippocampal volume. Additionally, self-reported attention was a significant predictor of the CSF p-tau181/Aβ42 ratio (p<.001) and self-reported memory was a significant predictor of hippocampal volume (p=.024) when controlling for depressive symptoms. Informant-reports were not significant predictors of preclinical AD (all ps>.074).
There was a nonsignificant trend for the AUC of objectively measured executive function to be higher than that of self-reported attention in detecting preclinical AD defined by the CSF p-tau181/Aβ42 ratio (p=.084), and it was significantly higher than self-reported attention in detecting preclinical AD defined by hippocampal volume (p<.001). For the memory and spatial navigation/visuospatial domains, the AUCs for self-reported and objective measures did not differ in detecting preclinical AD defined by either the CSF p-tau181/Aβ42 ratio or hippocampal volume (ps>.129).
Conclusions:
Although the self-reported subsections produced significant AUCs, these were not high enough to indicate clinical utility based on existing recommendations (all AUCs<.60; Mandrekar, 2010). Nonetheless, there was evidence that self-reported cognitive change has promise as a screening tool for preclinical AD but there is a need to develop questionnaires with greater sensitivity to subtle cognitive change associated with preclinical AD.