People whose parents had dementia or memory impairment are at higher risk for later-life cognitive impairment themselves. One goal of our research is to identify factors over the life course that either amplify or buffer the cognitive risk conferred by a family history of dementia. External locus of control has been associated with lower cognitive function in middle-aged and older adults. Previous findings have shown that adults racialized as Black have relatively high levels of external locus of control due to inequity and racism. We hypothesized that lower parental memory would be associated with lower offspring memory among Non-Latinx Black and Non-Latinx White (hereafter Black and White, respectively) adults, and that associations would be stronger among participants with higher levels of external locus of control.
Participants and Methods:
Participants comprised 594 adults racialized as Black or White (60.3% Black; 62% women; mean age 56.1 ± 10.4 years; 15.3 ± 2.7 years of education) from the Offspring Study who are the adult children of participants in the Washington Heights Inwood Columbia Aging Project (WHICAP). Parental memory was residualized for age (74.3 ± 6.0) and education (13.7 ± 3.1). Self-reported external locus of control was assessed using 8 items from the perceived control questionnaire. Memory was assessed with the Selective Reminding Test, and a composite of total and delayed recall scores was computed. Linear regression quantified the interaction between parental memory and external locus of control on memory in models stratified by race and adjusted for age, sex/gender, and number of chronic health conditions.
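The race-stratified moderation model described above can be sketched as follows. This is a minimal illustration with simulated stand-in data; all variable names and values are hypothetical, not the Offspring Study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; every name and value here is hypothetical
rng = np.random.default_rng(0)
n = 594
df = pd.DataFrame({
    "memory": rng.normal(size=n),            # offspring memory composite
    "parent_memory": rng.normal(size=n),     # parental memory, residualized
    "ext_loc": rng.normal(size=n),           # external locus of control
    "age": rng.normal(56.1, 10.4, size=n),
    "female": rng.integers(0, 2, size=n),
    "chronic": rng.integers(0, 5, size=n),   # chronic condition count
    "race": rng.choice(["Black", "White"], size=n, p=[0.603, 0.397]),
})

# One model per racial group, with a parental memory x locus-of-control
# interaction term plus the covariates named in the abstract
models = {
    race: smf.ols("memory ~ parent_memory * ext_loc + age + female + chronic",
                  data=sub).fit()
    for race, sub in df.groupby("race")
}
```

The `parent_memory:ext_loc` coefficient in each stratified model carries the moderation test; simple slopes at high versus low `ext_loc` would then probe the interaction.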
Results:
Among Black participants (n=358), there were no main effects of parental memory or locus of control on offspring memory. However, lower parental memory was associated with lower offspring memory among Black participants with high levels of external locus of control (standardized estimate=0.36, p=0.02, 95%CI [0.05, 0.67]). Associations were attenuated and non-significant at lower levels of control. Among White participants (n=236), there was a main effect of parental memory on offspring memory, and this association did not vary by levels of external locus of control.
Conclusions:
Poor parental memory, which reflects risk for later-life cognitive impairment and dementia, was associated with lower memory performance among White middle-aged participants. Among Black participants, this association was observed among those with high levels of external locus of control only. Economic and social constraints shape levels of external locus of control and are disproportionately experienced by Black adults. In the face of greater external locus of control, a cascade of psychological and biological stress-related processes may be triggered and make Black adults’ memory function more vulnerable to the detrimental impact of parent-related dementia risk. Longitudinal analyses are needed to clarify temporal associations. Nonetheless, these findings suggest that reducing social and economic inequities disproportionately experienced by Black adults may dampen the effect of intergenerational transmission of dementia risk on cognition.
Cognitive reserve and health-related fitness are associated with favorable cognitive aging, but Black/African American older adults are underrepresented in extant research. Our objective was to explore the relative contributions and predictive value of cognitive reserve and health-related fitness metrics on cognitive performance at baseline and cognitive status at a 4-year follow up in a large sample of Black/African American older adults.
Participants and Methods:
Participants aged 65 years and older from the Health and Retirement Study (HRS) who identified as Black/African American and completed baseline and follow-up interviews (including physical, health, and cognitive assessments) were included in the study. The final sample included 321 Black/African American older adults (mean age = 72.8; sd = 4.8; mean years of education = 12.3; sd = 2.9; mean body mass index (BMI) = 29.1; sd = 5.2; 60.4% identified as female). A cross-sectional analysis of relative importance – a measure of partitioned variance controlling for collinearity and model order – was first used to explore predictor variables and inform the hierarchical model order. Next, hierarchical multiple regression was used to examine cross-sectional relationships between cognitive reserve (years of education), health-related fitness variables (grip strength, lung capacity, gait speed, BMI), and global cognition. Multiple logistic regression was used to examine prospective relationships between predictors and longitudinal cognitive status (maintainers versus decliners). Control variables in all models included age, gender identity, and a chronic disease index score.
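The hierarchical step for gait speed can be illustrated as a ΔR² comparison of nested OLS models. The simulated data and variable names below are hypothetical stand-ins, not the HRS data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data mirroring the variables described above
rng = np.random.default_rng(1)
n = 321
df = pd.DataFrame({
    "cognition": rng.normal(size=n),
    "age": rng.normal(72.8, 4.8, size=n),
    "female": rng.integers(0, 2, size=n),
    "chronic_index": rng.integers(0, 6, size=n),
    "education": rng.normal(12.3, 2.9, size=n),
    "gait_speed": rng.normal(0.9, 0.2, size=n),
})

# Step 1: covariates plus cognitive reserve (education)
base = smf.ols("cognition ~ age + female + chronic_index + education",
               data=df).fit()
# Step 2: add the fitness metric of interest
full = smf.ols("cognition ~ age + female + chronic_index + education + gait_speed",
               data=df).fit()

delta_r2 = full.rsquared - base.rsquared  # variance added by gait speed
```

An F-test on the nested models (e.g., `statsmodels`' ANOVA comparison) would give the significance of the ΔR² step.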
Results:
Cross-sectional relative importance analyses identified years of education and gait speed as important predictors of global cognition. The cross-sectional hierarchical regression model explained 33% of the variance in baseline global cognition. Education was the strongest predictor of cognitive performance (β = 0.48, p < 0.001). Holding all other variables constant, gait speed was significantly associated with baseline cognitive performance and accounted for a significant additional amount of explained variance (ΔR² = 0.01, p = 0.032). In a prospective analysis dividing the sample into cognitive maintainers and decliners, each additional year of formal education increased the odds of being classified as a cognitive maintainer (OR = 1.30, 95% CI = 1.17-1.45). There were no significant relationships between rate of change in health-related fitness and rate of change in cognition.
Conclusions:
Education, a proxy for cognitive reserve, was a robust predictor of global cognition at baseline and was associated with increased odds of maintaining cognitive ability at 4-year follow up in Black/African American older adults. Of the physical performance metrics, gait speed was associated with cognitive performance at baseline. The lack of observed association between other fitness variables and cognition may be attributable to the brief assessment procedures implemented in this large-scale study.
Executive functions (EF) are a primary mediator of both typical and atypical functioning, influencing the progression of psychopathology through their role in supporting self-monitoring/regulation and top-down control of cognitive processes. According to recent models, EF impairments may contribute to the functional decline of patients with substance use disorder (SUD), exacerbating secondary affective and social symptoms. Despite these potential implications, the tools now commonly used to outline neurocognitive, and specifically EF, impairments in patients with addiction are not tailored to this clinical population, having been developed to assess cognitive or dysexecutive deficits in neurological or geriatric patients. Because of their different clinical focus, such tools are frequently unable to fully delineate the dysfunctional EF profile of addiction patients. Here we present the development and validation of a novel screening battery specific to executive disorders in addiction: the Battery for Executive Functions in Addiction (BFE-A).
Participants and Methods:
151 SUD patients and 55 control participants were recruited for the validation of the BFE-A battery. The battery consists of two computerized neurocognitive tasks (Stroop and Go/No-go) and five digitalized neuropsychological tests targeting short- and long-term memory, working memory, focused attention, and verbal/non-verbal cognitive flexibility. The tests are designed to assess executive control, inhibition mechanisms, and attention bias toward drugs of abuse.
Results:
In tests of verbal memory, focused attention, and cognitive flexibility, as well as in the computerized tasks, inferential statistical analyses revealed lower performance in SUD patients than in control participants, indicating deficient inhibitory processes and dysfunctional management of cognitive resources. Analysis of Cohen’s d values revealed that inhibitory control, verbal/nonverbal fluency, and short/long-term memory were the most impaired domains.
Conclusions:
Although the evaluation of EF dysfunction associated with addiction is currently an underrepresented component of the diagnostic procedure in drug assistance/treatment programs, it is an essential step for both patient profiling and the design of rehabilitation protocols. Clinical interviews should be complemented by early assessment of cognitive weaknesses and preserved EF skills in order to establish a personalized therapy strategy and perhaps organize a concurrent phase of cognitive rehabilitation.
Poor cardiovascular health occurs with age and is associated with increased dementia risk, yet its impact on frontotemporal lobar degeneration (FTLD) and autosomal dominant neurodegenerative disease has not been well established. Examining cardiovascular risk in a population with high genetic vulnerability provides an opportunity to assess the impact of lifestyle factors on brain health outcomes. In the current study, we examined whether systemic vascular burden associates with accelerated cognitive and brain aging outcomes in genetic FTLD.
Participants and Methods:
166 adults with autosomal dominant FTLD (C9orf72 n= 97; GRN n= 34; MAPT n= 35; 54% female; Mage = 47.9; Meducation = 15.6 years) enrolled in the Advancing Research and Treatment for Frontotemporal Lobar Degeneration (ARTFL) and Longitudinal Evaluation of Familial Frontotemporal Dementia (LEFFTDS) studies, now combined as the ALLFTD study, were included. Participants completed neuroimaging and were screened for cardiovascular risk and functional impairment during a comprehensive neurobehavioral and medical interview. A vascular burden score (VBS) was created by summing vascular risk factors (VRS) [diabetes, hypertension, hyperlipidemia, and sleep apnea] and vascular diseases (VDS) [cerebrovascular disease (e.g., TIA, CVA), cardiac arrhythmia (e.g., atrial fibrillation, pacemaker, defibrillator), coronary artery disease (e.g., myocardial infarction, cardiac bypass, stent), and congestive heart failure] following a previously developed composite (range 0 to 8). We examined the interaction between each vascular health metric (VBS, VDS, VRS) and age (vascular health*age) on clinical severity (CDR plus NACC FTLD-SB) and white matter hyperintensity (WMH) volume outcomes, adjusting for age and sex. Vascular risk, disease, and overall burden scores were examined in separate models.
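The composite described above is a simple sum of binary indicators. A minimal sketch, with hypothetical field names standing in for the study's actual coding:

```python
# Hypothetical field names; presence of each condition is coded 0/1
RISK_FACTORS = ["diabetes", "hypertension", "hyperlipidemia", "sleep_apnea"]
DISEASES = ["cerebrovascular_disease", "cardiac_arrhythmia",
            "coronary_artery_disease", "congestive_heart_failure"]

def vascular_scores(record: dict) -> dict:
    """Return VRS, VDS, and total VBS (range 0-8) for one participant."""
    vrs = sum(int(bool(record.get(k, 0))) for k in RISK_FACTORS)
    vds = sum(int(bool(record.get(k, 0))) for k in DISEASES)
    return {"VRS": vrs, "VDS": vds, "VBS": vrs + vds}
```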
Results:
There was a statistically significant interaction between total VBS and age on both clinical severity (ß=0.20, p=0.044) and WMH burden (ß=0.20, p=0.032). Mutation carriers with higher vascular burden evidenced worse clinical and WMH outcomes for their age. When breaking the vascular burden score down into separate vascular risk (VRS) and vascular disease (VDS) scores, the interaction between age and VRS remained significant only for WMH (ß=0.26, p=0.009), not clinical severity (ß=0.04, p=0.685). Conversely, the interaction between VDS and age remained significant only for clinical severity (ß=0.20, p=0.041), not WMH (ß=0.17, p=0.066).
Conclusions:
Our results demonstrate that systemic vascular burden is associated with an “accelerated aging” pattern on clinical and white matter outcomes in autosomal dominant FTLD. Specifically, mutation carriers with greater vascular burden show poorer neurobehavioral outcomes for their chronological age. When separating vascular risk from disease, risk was associated with higher age-related WMH burden, whereas disease was associated with poorer age-related clinical severity of mutation carriers. This pattern suggests preferential brain-related effects of vascular risk factors, while the functional impact of such factors may be more closely aligned with fulminant vascular disease. Our results suggest cardiovascular health may be an important, potentially modifiable risk factor to help mitigate the cognitive and behavioral disturbances associated with having a pathogenic variant of autosomal dominant FTLD. Future studies should continue to examine the neuropathological processes underlying the impact of cardiovascular risk in FTLD to inform more precise recommendations, particularly as it relates to lifestyle interventions.
Therapeutics targeting frontotemporal dementia (FTD) are entering clinical trials. There are challenges to conducting these studies, including the relative rarity of the disease. Remote assessment tools could increase access to clinical research and pave the way for decentralized clinical trials. We developed the ALLFTD Mobile App, a smartphone application that includes assessments of cognition, speech/language, and motor functioning. The objectives were to determine the feasibility and acceptability of collecting remote smartphone data in a multicenter FTD research study and evaluate the reliability and validity of the smartphone cognitive and motor measures.
Participants and Methods:
A diagnostically mixed sample of 207 participants with FTD or from familial FTD kindreds (CDR®+NACC-FTLD=0 [n=91]; CDR®+NACC-FTLD=0.5 [n=39]; CDR®+NACC-FTLD≥1 [n=39]; unknown [n=38]) were asked to remotely complete a battery of tests on their smartphones three times over two weeks. Measures included five executive functioning (EF) tests, an adaptive memory test, and participant experience surveys. A subset completed smartphone tests of balance at home (n=31) and a finger tapping test (FTT) in the clinic (n=11). We analyzed adherence (percentage of available measures that were completed) and user experience. We evaluated Spearman-Brown split-half reliability (100 iterations) using the first available assessment for each participant. We assessed test-retest reliability across all available assessments by estimating intraclass correlation coefficients (ICC). To investigate construct validity, we fit regression models testing the association of the smartphone measures with gold-standard neuropsychological outcomes (UDS3-EF composite [Staffaroni et al., 2021], CVLT3-Brief Form [CVLT3-BF] Immediate Recall, mechanical FTT), measures of disease severity (CDR®+NACC-FTLD Box Score & Progressive Supranuclear Palsy Rating Scale [PSPRS]), and regional gray matter volumes (cognitive tests only).
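The Spearman-Brown split-half procedure (random item splits, corrected half-score correlations, averaged over iterations) can be sketched as below; the function and the simulated item matrix in the test are hypothetical stand-ins, not the ALLFTD Mobile App scoring code.

```python
import numpy as np

def split_half_reliability(items: np.ndarray, n_iter: int = 100,
                           seed: int = 0) -> float:
    """Mean Spearman-Brown-corrected correlation over random item splits.

    items: participants x items score matrix.
    """
    rng = np.random.default_rng(seed)
    n_items = items.shape[1]
    estimates = []
    for _ in range(n_iter):
        perm = rng.permutation(n_items)
        half_a = items[:, perm[: n_items // 2]].sum(axis=1)
        half_b = items[:, perm[n_items // 2:]].sum(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        estimates.append(2 * r / (1 + r))  # Spearman-Brown correction
    return float(np.mean(estimates))
```

Averaging over many random splits avoids the arbitrariness of a single odd/even split, which is presumably why the abstract reports 100 iterations.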
Results:
Participants completed 70% of tasks. Most reported that the instructions were understandable (93%), considered the time commitment acceptable (97%), and were willing to complete additional assessments (98%). Split-half reliability was excellent for the executive functioning tests (r’s=0.93-0.99) and good for the memory test (r=0.78). Test-retest reliabilities ranged from acceptable to excellent for the cognitive tasks (ICC: 0.70-0.96) and were excellent for the balance test (ICC=0.97) and good for FTT (ICC=0.89). Smartphone EF measures were strongly associated with the UDS3-EF composite (ß's=0.6-0.8, all p<.001), and the memory test was strongly correlated with total immediate recall on the CVLT3-BF (ß=0.7, p<.001). Smartphone FTT was associated with mechanical FTT (ß=0.9, p=.02), and greater acceleration on the balance test was associated with more motor features (ß=0.6, p=0.02). Worse performance on all cognitive tests was associated with greater disease severity (ß's=0.5-0.7, all p<.001). Poorer performance on the smartphone EF tasks was associated with smaller frontoparietal/subcortical volume (ß's=0.4-0.6, all p<.015), and worse memory scores were associated with smaller hippocampal volume (ß=0.5, p<.001).
Conclusions:
These results suggest remote digital data collection of cognitive and motor functioning in FTD research is feasible and acceptable. These findings also support the reliability and validity of unsupervised ALLFTD Mobile App cognitive tests and provide preliminary support for the motor measures, although further study in larger samples is required.
Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder commonly associated with relative impairments on processing speed, working memory, and/or executive functioning. Anxiety commonly co-occurs with ADHD and may also adversely affect these cognitive functions. Additionally, language status (i.e., monolingualism vs bilingualism) has been shown to affect select cognitive domains across an individual’s lifespan. Yet, few studies have examined the potential effects of the interaction between anxiety and language status on various cognitive domains among people with ADHD. Thus, the current study investigated the effects of the interaction of anxiety and language status on processing speed, working memory, and executive functioning among monolingual and bilingual individuals with ADHD.
Participants and Methods:
The sample comprised 407 consecutive adult patients diagnosed with ADHD. When asked about their language status, 67% reported being monolingual (English). Mean age was 27.93 years (SD = 6.83) and mean education 15.8 years (SD = 2.10); 60% were female, and the sample was racially diverse: 49% Non-Hispanic White, 22% Non-Hispanic Black, 13% Hispanic/Latinx, 9% Asian/Pacific Islander, and 6% other race/ethnicity. Processing speed, working memory, and executive function were measured via the Wechsler Adult Intelligence Scale-Fourth Edition Processing Speed Index, Working Memory Index, and Trail Making Test B, respectively. Anxiety was measured via the Beck Anxiety Inventory (BAI). Three separate linear regression models examined the interaction between anxiety (moderator) and language status on each cognitive outcome (processing speed, working memory, and executive function). Models included sex/gender and education as covariates when the Processing Speed Index and Working Memory Index were the outcomes; age, sex/gender, and education were covariates when Trail Making Test B was the outcome.
Results:
Monolingual and bilingual patients differed in mean age (p < .05) but did not differ in level of anxiety, education, or sex/gender. Overall, anxiety was not associated with processing speed, working memory, and executive function. However, the interaction between anxiety and language status was significantly associated with processing speed (ß = -0.37, p < .05), and executive functioning (ß = 0.82, p < .05). No associations were found when anxiety was added as a moderator for the associations between language and working memory.
Conclusions:
This study found that anxiety moderated the relationship between language status and select cognitive domains (i.e., processing speed and executive functioning) among individuals with ADHD. Specifically, anxiety showed a stronger association with processing speed and executive functioning performance in bilinguals than in monolinguals. Future detailed studies are needed to better understand how anxiety modifies the relationship between language and cognitive performance outcomes over time in a linguistically diverse sample.
The neuropsychology of babies, toddlers, and young children is a rapidly evolving frontier within our discipline. While there is an inaccurate perception among referral sources that neuropsychological services are not useful before school-age, pediatric neuropsychologists are especially well-suited to identify delay or dysfunction in the years before school entry (Baron and Anderson, 2012). Patterns of neurodevelopmental strengths and weaknesses can be detected very early on in development and used to make inferences about brain-behavior relationships integral for guiding treatment across a number of medical and neurodevelopmental diagnoses. As such, there is a need to foster ongoing clinical interest and expertise and promote the utility of neuropsychological services within this age range. The INS BabIes, ToddlerS, and Young children (BITSY) SIG was recently developed to bring together scientists and clinicians from across the world who conduct research and provide neuropsychological services within this age range to foster collaboration and learning. A priority of the BITSY SIG is not only to promote awareness of the novel needs of this age range, but to consider historical and ongoing disparities in service access, representation in research, and neuropsychological practice. For this inaugural BITSY SIG symposium, four members of the SIG will discuss innovations in infant, toddler, and young child neuropsychological models of care. This topic was developed in direct response to survey results from the first BITSY SIG meeting held during INS 2022, indicating the need for the development and refinement of clinical approaches that incorporate diverse perspectives as well as training opportunities in models of care for very young children. 
As such, speakers will cover innovations in neuropsychological service models from the prenatal period to formative early years that are inclusive of diverse neurological and neurodevelopmental populations commonly served by neuropsychologists including spina bifida, prematurity, hypoxic-ischemic encephalopathy (HIE), congenital heart disease (CHD), autism (ASD) and attention-deficit/hyperactivity disorder (ADHD). The first talk will highlight the unique role of the neuropsychologist in prenatal and infant consultation, whereas the second talk will focus on the state of the field with regard to the utility of neuroimaging in neonatal populations and the integration of this tool in neuropsychological care. The third talk will discuss early screening and assessment models in a diverse range of conditions within an interdisciplinary setting. The final talk will illustrate a novel neuropsychological intervention designed with and for the empowerment of caregivers for young children impacted by neurological and neurodevelopmental conditions. The unifying theme across the talks is how unplanned discoveries and acute observations of children and families during the critical early years have led to these inclusive care models that prioritize family preferences, values, and culture. Upon conclusion of this course, learners will be able to:
1. Summarize several novel models of neuropsychological care for infants, toddlers, and young children.
2. Recognize ways in which neuropsychologists work within interdisciplinary teams to serve infants, toddlers, and young children and their families.
3. Apply these models of care to your conceptualization of the scope of neuropsychological services available for infants, toddlers, and young children.
In the field of neurocognitive disorders, the perspective offered by new disease-modifying therapies increases the importance of etiological diagnosis. The prescription of cerebrospinal fluid (CSF) analysis and imaging biomarkers is common practice in the clinic but is often driven more by personal expertise and the local availability of diagnostic tools than by evidence of efficacy and cost-effectiveness. This leads to widely heterogeneous dementia care across Europe. Therefore, a European initiative is currently underway to establish a consensus for biomarker-based diagnosis of patients with mild cognitive impairment (MCI) and mild dementia.
Participants and Methods:
Since November 2020, a European multidisciplinary task force of 22 experts from 11 scientific societies has been defining a diagnostic workflow for the efficient use of biomarkers. A Delphi consensus procedure was used to bridge gaps in the scientific evidence on biomarker prioritization. The project proceeded in two phases. During Phase 1, we conducted a literature review on the accuracy of imaging, CSF, neurophysiological, and blood biomarkers in predicting clinical progression or in defining the underlying etiology of the main neurocognitive disorders. This evidence was provided to support the panelists’ decisions. In Phase 2, a modified Delphi procedure was implemented, and consensus was reached at a threshold of 70% agreement, or 50%+1 when a question required rediscussion.
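The Phase 2 agreement rule is simple to state in code. A minimal sketch, assuming panelists' answers are recorded as strings (the function name and data shape are hypothetical):

```python
from collections import Counter

def consensus_reached(votes: list[str], threshold: float = 0.70) -> bool:
    """True if the most common answer meets the agreement threshold."""
    if not votes:
        return False
    top_count = Counter(votes).most_common(1)[0][1]
    return top_count / len(votes) >= threshold
```

For example, with 22 panelists, 16 concordant answers (72.7%) reach the 70% threshold, while 15 (68.2%) do not, which under the procedure described above would trigger rediscussion under the 50%+1 rule.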
Results:
In Phase 1, 167 of 2,200 screened papers provided validated measures of biomarker diagnostic accuracy against a gold standard or in predicting progression or conversion of MCI to the dementia stage (i.e., MRI, CSF, FDG-PET, DaT imaging, amyloid-PET, tau-PET, myocardial MIBG scintigraphy, and EEG). During Phase 2, panelists agreed on the clinical workspace of the workflow, the stage of application, and the patient age window. The workflow is patient-centered and features three levels of assessment (W): W1 defines eleven clinical profiles based on integrated results of neuropsychology, MRI atrophy patterns, and blood tests; W2 describes the first-line biomarkers according to the W1 profile and the clinical suspicion; and W3 suggests the second-line biomarkers when the results of first-line biomarkers are inconsistent with the diagnostic hypothesis, uninformative, or inconclusive. CSF biomarkers are first-line when Alzheimer’s disease (AD) is suspected and when inconsistent neuropsychological and MRI findings hinder a clear diagnostic hypothesis; dopamine SPECT/PET is first-line for profiles suggesting the Lewy body spectrum. FDG-PET is first-line for clinical profiles suggesting frontotemporal lobar degeneration and motor tauopathies and is followed by CSF biomarkers in the case of atypical metabolic patterns, when an underlying AD etiology is conceivable.
Conclusions:
The workflow will promote consistency in diagnosing neurocognitive disorders across countries and rational use of resources. The initiative has some limitations, mainly linked to the Delphi procedure (e.g., kickoff questions were driven by the moderators, answers are driven by the Delphi panel composition, a subtle phrasing of the questions may drive answers, and 70% threshold for convergence is conventional). However, the diagnostic workflow will be able to help clinicians achieve an early and sustainable etiological diagnosis and enable the use of disease-modifying drugs as soon as they become available.
Cognitive changes following adjuvant treatment for breast cancer (BC) are well documented, particularly following chemotherapy. However, few studies have examined cognitive and/or language functions in chemotherapy-naive women with BC taking tamoxifen (TAM). While there is some compelling evidence that TAM affects cognitive and language domains, language has not been studied beyond semantics (i.e., the content of language), which is just one aspect of language. Using ambulatory cognitive assessment, we investigated the trajectory of cognitive and language changes during the early period of adjuvant endocrine treatment (tamoxifen) in women with BC at two time points (pre-treatment and two months after treatment begins).
Participants and Methods:
Four women with BC (mean age = 62.25 years, SD = 8.38) and 18 cognitively healthy age-matched controls (mean age = 59.77, SD = 7.45) completed three cognitive tasks on smartphones during a short measurement burst (5 days), repeated at the two time points. Symbol search, dot memory, and color dots tasks measured the cognitive constructs of processing speed and working memory. Response times were recorded in milliseconds. To assess language ability, language samples were collected at the two time points: participants described two stories from two wordless picture books, and the samples were assessed using core lexicon analyses.
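The paired comparison described above can be run with SciPy's Wilcoxon signed-rank test. The response times below are hypothetical toy values, not study data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired median response times (ms) for one task,
# pre-treatment vs. two months after treatment begins
pre = np.array([812.0, 790.0, 845.0, 901.0, 776.0])
post = np.array([830.0, 805.0, 839.0, 915.0, 798.0])

stat, p = wilcoxon(pre, post)  # two-sided by default
```

A rank-based paired test is a sensible choice here given the very small BC group (n = 4), where normality assumptions for a paired t-test cannot be checked.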
Results:
Wilcoxon signed-rank tests were computed to identify cognitive and linguistic changes during the early period of TAM administration in women with BC across the two time points. No significant within-group or between-group differences were seen on the cognitive and language tasks; however, a trend toward declining performance was seen in some BC participants across different tasks.
Conclusions:
This is the first study to our knowledge to use an ambulatory cognitive assessment method and to study discourse-level language function during this early period (pre-treatment and 2 months post-TAM). Findings from the current study advance our understanding of the trajectories of cognitive and language changes during the initial course of adjuvant endocrine treatment for women with BC with ER+ tumors. Using a measurement-burst design and ambulatory cognitive assessment, we were able to apply more precise measurement to identify distinct cognitive constructs affected by adjuvant endocrine treatment. In addition, insight into changes in discourse ability is impactful for two reasons: (1) it provides a better understanding of how adjuvant endocrine therapy affects communication, and (2) it offers discernment of language domains that may require early behavioral intervention.
22q11.2 Deletion Syndrome (22q11DS) is a multi-systemic disorder with great clinical heterogeneity. It is the most common microdeletion syndrome and one of the most common genetic causes of developmental delays (e.g., motor/speech). 22q11DS is estimated to occur in 1 in 2,000-4,000 live births. However, the diverse clinical presentation of 22q11DS, together with health inequities that affect ethnically, racially, linguistically, and economically marginalized groups, makes early identification, diagnosis, and access to beneficial early interventions (e.g., speech/behavioral therapy) even more challenging. Therefore, the true prevalence of 22q11DS may be larger than documented. Challenges associated with diagnosis, as well as the neurocognitive, psychiatric, and medical co-morbidities associated with 22q11DS, have been reported to affect the quality of life and well-being of people living with 22q11DS and their families. Yet, there is limited longitudinal data on lifelong functional outcomes in this population and the social factors that may shape them. This study aimed to 1) review the extant literature on adaptive functioning across the lifespan in 22q11DS and 2) report on relevant social and structural variables considered in the literature to contextualize adaptive functioning.
Participants and Methods:
A scoping review was conducted between January and June 2022 across six electronic databases: PubMed, Scopus, PsycINFO, Ovid MEDLINE, EBSCO, and Embase. The ‘building block’ method was used to design a comprehensive search strategy for scanning publications’ titles, keywords, and abstracts. A citation-mining strategy was used to identify additional relevant studies. Studies met the following inclusion criteria: 1) empirical studies conducted in humans, 2) participants with a confirmed diagnosis of 22q11DS, 3) evaluation of adaptive functioning, 4) use of at least one standardized measure of adaptive functioning, and 5) written in or translated into English or Spanish.
Results:
Eighty-four records were initially identified. After deduplication, abstract screening, and full record reviews, a total of twenty-two studies met inclusion criteria for this review. Only eight publications explored adaptive skills as a primary outcome. Clinically significant symptoms of anxiety, withdrawal, anhedonia, and flat affect were associated with worse functional outcomes. Fifteen studies reported between one and three demographic variables (e.g., race/ethnicity, years of education), and only two studies documented mental health treatment status/history. Most studies reported lower adaptive abilities in participants with 22q11DS independent of their cognitive abilities, although the majority of participants scored between the below-average and exceptionally low ranges on measures of intellectual functioning. Nonetheless, information on contextual variables (e.g., educational/occupational opportunities) that might help interpret these findings was lacking.
Conclusions:
Methodological differences (e.g., in the definition and measurement of adaptive functioning), recruitment bias (small, clinic-identified samples), and a lack of information regarding contextual-level factors may be limiting our understanding of the neurocognitive and neuropsychiatric trajectories of people with 22q11DS. It is vital to increase representative samples in epidemiological/clinical studies, as well as research examining the social and structural factors (e.g., access to healthcare, socioeconomic position) that shape functional outcomes in this population, to promote public health policies that can improve brain health across the lifespan.
There are numerous adverse health outcomes associated with dementia caregiving, including increased stress and depression. Caregivers often face time-related, socioeconomic, geographic, and pandemic-related barriers to treatment. Thus, implementing mobile health (mHealth) interventions is one way of increasing caregivers’ access to supportive care. The objective of the current study was to collect data from a 3-month feasibility trial of a multicomponent mHealth intervention for dementia caregivers.
Participants and Methods:
Forty community-dwelling dementia caregivers were randomized to receive either the CARE-Well (Caregiver Assessment, Resources, and Education) App or internet links to caregiver education, support, and resources. Caregivers were encouraged to use the App or links at least 4 times per week for 3 months. The App consisted of self-assessments, caregiver and stress-reduction education, behavior problem management, calendar reminders, and online social support. Caregivers completed measures of burden, depression, and desire to institutionalize at baseline and post-intervention. Feasibility data included App usage, retention and adherence rates, and treatment satisfaction. Data were analyzed via descriptive statistics.
Results:
Caregivers were mostly white (95%), female (68%), in their mid-60s (M = 66.38, SD = 10.64), and well-educated (M = 15.52 years, SD = 2.26). Caregivers were mainly spouses (68%) or adult children (30%). Care recipients were diagnosed with mild (60%) or moderate (40%) dementia, with 80% diagnosed with Alzheimer's disease. Overall, the study had an 85% retention rate (80% for the App group; 90% for the links group). Fifty-eight percent of caregivers in the App group were considered high users, using the App more than 120 minutes over the course of 3 months (M = 362.42, SD = 432.68) and on an average of 16.44 days (SD = 15.51). Fifteen percent of the sample was non-adherent due to time constraints, disinterest, and/or technology issues. Most participants (75%) using the App were mostly or very satisfied, about 87% would be likely or very likely to seek similar programs in the future, and 93% found the App mostly or very understandable. Groups did not significantly differ on clinical outcomes, although the study was not powered for an efficacy analysis. Within-group analyses revealed significant increases in depressive symptoms at post-treatment for caregivers in both groups.
Conclusions:
This study demonstrated initial feasibility of the CARE-Well App for dementia caregivers. App use was lower than expected; however, participants endorsed high satisfaction, ease of use, and willingness to use similar programs in the future. Some caregivers did not complete the intervention due to caregiving responsibilities, general disinterest, and/or technology issues. Although the study was not designed to assess clinical outcomes, we found that both groups reported higher depressive symptoms at post-treatment. This finding was unexpected and might reflect pandemic-related stress, which has been shown to particularly impact dementia caregivers. Future studies should address the efficacy of multicomponent mHealth interventions for dementia caregivers and the effects of increased dose on clinical outcomes. mHealth interventions should be refined to accommodate varying levels of technology literacy among caregivers, and further research should aim to better integrate interventions into caregivers' routines to enhance treatment engagement.
Inflicted traumatic brain injury (TBI) is one of the leading causes of childhood injury and death. Studies have consistently demonstrated worse outcomes for children with inflicted TBIs compared to accidental TBIs. Out-of-home placement, a known developmental risk factor, is a frequent occurrence after inflicted TBI and may also contribute to worse outcomes for children. Little is known about which injury, child, and family factors predict out-of-home versus in-home placement. We hypothesized that injury severity and child and family risk factors would predict out-of-home placement after hospital discharge from an inflicted TBI.
Participants and Methods:
Participants included 175 children with inflicted head injuries who received care at a large children's hospital from 2012 to 2021. Eighty-eight percent of children were alive at discharge and were included in the study, for a total sample of 154 children. Ages ranged from 0.2 to 76 months (M = 11.81, SD = 14.50), and 64.9% were male. Race/ethnicity distribution was as follows: 66.9% White, 29.9% Latinx or Hispanic, 4.6% Black, 3.3% American Indian or Alaskan Native, and 22.5% identified as another race or ethnicity or as multiracial. Measures included injury severity (e.g., days spent in the PICU, post-resuscitation GCS), child factors (e.g., race/ethnicity, gender), and family factors (e.g., prior history of domestic violence, type of insurance). Individual logistic regressions were run to assess the effect of each injury severity, child, and family factor on placement after hospital discharge.
Results:
Results indicated that having a caregiver with a history of mental health difficulties and/or a history of substance abuse increased the likelihood of an out-of-home placement for the child after an inflicted TBI. Results also demonstrated that the more caregiver psychosocial concerns reported, the higher the risk of an out-of-home placement for the child after discharge from the hospital. Finally, results indicated that having public insurance significantly increased the risk of an out-of-home placement for the child after discharge from the hospital. Post-hoc analyses were conducted to assess the effect of insurance type on out-of-home placement, while controlling for psychosocial concerns. Results indicated that, even when taking total psychosocial concerns into account, having public insurance significantly increased the risk of an out-of-home placement. Logistic regressions were carried out to assess the effect of injury severity, child, and every other family factor (e.g., prior criminal history) on placement after hospital discharge and the overall models were not significant.
Conclusions:
One explanation for these findings is that families with public insurance have less of a social safety net and, thus, are unable to meet the needs of a child with an inflicted TBI. However, we cannot rule out the effect of bias in child welfare practices. Similarly, caregivers with histories of mental health difficulties and substance abuse are likely to have a harder time meeting their child’s needs and providing a stable household, increasing the likelihood of an out-of-home placement. Despite expectations, child and injury severity factors did not play a role in placement decisions after an inflicted TBI, indicating that placement decisions rely more heavily on caregivers’ abilities to meet the child’s needs rather than the child’s medical complexity or the severity of the inflicted TBI.
Return to driving after moderate-to-severe traumatic brain injury (TBI) is often a key step in recovery to regain independence. Survivors are often eager to resume driving and may do so despite having residual cognitive limitations from their injury. A better understanding is needed of how cognition and self-awareness impact survivors’ driving after injury. This study examined the influence of cognition and self-awareness on driving patterns following moderate-to-severe TBI.
Participants and Methods:
Participants were 350 adults aged 19-87 years (mean age = 46 years; 70% male) with a history of moderate-to-severe TBI who resumed driving and were enrolled in the TBI Model System. Cross-sectional data were obtained from 1 to 30 years post injury, including questions on driving practices, the Brief Test of Adult Cognition by Telephone (BTACT), and the Functional Independence Measure (FIM). Self-awareness of cognitive function was measured via the discrepancy between dichotomized ratings (intact versus impaired) of objective cognitive testing (BTACT) and self-reported cognitive function (FIM Cognition subscale). Driving patterns included frequency (driving 'more than once a week' versus 'once a week or less') and restricted driving behavior (total number of driving situations the survivor described as restricted, ranging from 0 to 15). Regression analyses examined the relationships between cognition, self-awareness, and each driving outcome (frequency and restriction), followed by causal mediation analyses examining the mediating effect of self-awareness. Demographics (age, sex, education), injury characteristics (time since injury, injury severity, history of seizures in the past year), and medical/social factors (family income, motor function, urban-rural classification) were included in the models as covariates.
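The discrepancy-based definition of impaired self-awareness described above lends itself to a one-line sketch. This is a generic illustration of the dichotomized comparison only; the function name and boolean encoding are ours, not the study's:

```python
def impaired_self_awareness(objective_impaired: bool,
                            self_rated_impaired: bool) -> bool:
    """Flags the discrepancy pattern of interest: objective testing
    (e.g., BTACT) falls in the impaired range while the survivor's
    self-report (e.g., FIM Cognition) rates cognition as intact."""
    return objective_impaired and not self_rated_impaired
```

A survivor whose objective testing is impaired but who self-rates as intact would be flagged; all other combinations, including accurate self-ratings in either direction, would not.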
Results:
Thirty-nine percent of survivors had impaired self-awareness, 88% of survivors drove more than once a week, and the average survivor reported restricting driving in 6 of the 15 situations. Cognition was inversely related to impaired self-awareness (OR = 0.03, p < 0.001) and to restricted driving behavior (b = -0.79, p < 0.001). Motor function was positively related to impaired self-awareness (OR = 1.28, p < 0.01). Cognition was not related to driving frequency, and self-awareness did not mediate the relationships between cognition and driving patterns (all p > 0.05).
Conclusions:
Most survivors who drive after their injury drive frequently, but the situations they drive in differ based on their cognitive ability. Impaired self-awareness of cognitive deficits is common after TBI, yet self-awareness of cognitive function did not affect driving patterns in this sample. Future research should focus on how cognition affects nuanced aspects of driving behavior after injury (i.e., the types of situations survivors drive in).
Multiple sclerosis (MS) is associated with cognitive and social cognitive deficits. Social cognition impairments may include difficulty with facial expression and emotion recognition. PwMS (people with MS) may also be unaware of their cognitive challenges, as demonstrated through discrepant objective and subjective assessments. Recent research in demyelinated mouse models demonstrated that metformin, a drug typically used to treat type II diabetes mellitus (DMII), promotes remyelination and reverses existing social cognition impairment by repressing the monoacylglycerol lipase (MGLL) enzyme in the brain. We aim to translate this basic science research and are conducting a pilot study to determine whether metformin improves social cognition in PwMS. This project will compare social cognition in those with MS and comorbid DMII who are treated with metformin and those who are not. For the purposes of this interim data analysis, we collapsed across the MS groups who are, and who are not, treated with metformin. The current objective was to evaluate the relationships among subjective social cognition (i.e., perceived empathy), objective social cognition, and information processing speed (IPS) in PwMS with comorbid diabetes.
Participants and Methods:
Preliminary data on 15 PwMS are included. Participants completed a demographic questionnaire, a cognitive assessment battery, an objective social cognition assessment and self-report questionnaires. These questionnaires assessed subjective social cognition, fatigue, mood, and disability level.
Results:
Preliminary results showed that IPS was positively correlated with the affective empathy domain of social cognition, r = .53, p = .04. Additionally, IPS was positively correlated with objective social cognition, r = .71, p = .003. Follow-up regression analyses demonstrated that IPS predicted objective social cognition, R2 = .71, SE = 3.04, F(1,13) = 13.36, p = .003, and subjective social cognition, R2 = .53, SE = 5.39, F(1,13) = 4.97, p = .04. However, subjective and objective measures of social cognition were not correlated, p > .05, and remained uncorrelated when IPS was controlled for, p > .05.
Conclusions:
A majority of the variance in social perception is explained by IPS, suggesting that how quickly one can think may be a fundamental cognitive process for optimal functioning in social situations. While the reason for the relationship between IPS and subjective social cognition is perhaps less apparent, it may reflect a more global cognitive compromise that impacts both cognitive and social processes. This lends support to the Relative Consequence Model, which suggests IPS deficits are a fundamental cognitive deficit underlying other, more complex cognitive processes. The lack of correlation between subjective perception of empathy and objective social cognition requires further exploration and could be related to some individuals with MS having a diminished ability to judge their own social proficiency. Further analyses with a larger sample will assess group differences in social cognitive outcomes and MGLL levels between metformin and non-metformin groups. If PwMS who take metformin have better social cognition than PwMS who do not, MGLL levels could be used as a biomarker to guide metformin treatment with the goal of improving social cognition.
Cognitive sequelae are reported in 20-25% of patients following SARS-CoV-2 infection. It remains unclear whether post-infection sequelae cluster into a uniform cognitive syndrome. In this cohort study, we characterized post-COVID neuropsychological outcome clusters, identified factors associated with cluster membership, and examined 6-month recovery trajectories by cluster.
Participants and Methods:
The Mayo Clinic Institutional Review Board approved the study protocols, and informed consent was obtained from all participants. Participants (>18 years old) were recruited from a hospital-wide registry of Mayo Clinic Florida patients who tested positive for SARS-CoV-2 infection from July 2020 to February 2022. We abstracted participant health history and COVID-19 disease severity (NIAID score) from the electronic health record and retrieved Area Deprivation Index (ADI) scores as a measure of neighborhood socioeconomic disadvantage. We assessed objective cognitive performance with CNS Vital Signs (CNSVS) and subjective neuropsychological symptoms with the Neuropsych Questionnaire-45 (NPQ-45). Results were used as input features in a K-means clustering analysis to derive neurophenotypes. Chi-square and analysis of variance (ANOVA) tests were used to identify clinical and sociodemographic factors associated with cluster membership. Participants repeated the CNS Vital Signs and NPQ-45, as well as the Medical Outcomes Survey (MOS SF-36) and a posttraumatic stress disorder (PTSD) checklist (PCL-C 17), 6 months following initial testing. Repeated-measures ANOVA was used to assess change in neurocognitive performance over time by cluster. Significance was set at p < 0.05.
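As a concrete illustration of the clustering step described above, here is a minimal pure-Python sketch of K-means (Lloyd's algorithm) together with the within-cluster sum of squares that the elbow method inspects. It is a generic illustration under our own naming, not the authors' pipeline, and the toy data in the usage note are hypothetical:

```python
import random
from math import dist  # Euclidean distance between two points (Python 3.8+)

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm) over a list of feature tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist(p, centers[j]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster
        # (an empty cluster keeps its previous center).
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

def inertia(centers, clusters):
    """Within-cluster sum of squared distances: the quantity plotted
    against k when fitting the number of clusters by the elbow method."""
    return sum(dist(p, c) ** 2 for c, cl in zip(centers, clusters) for p in cl)
```

Computing `inertia` for k = 1, 2, 3, ... and choosing the k where the curve bends is the elbow-method fitting the abstract refers to; the study's actual features were the CNSVS and NPQ-45 scores.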
Results:
Our cohort consisted of 205 participants (171 ambulatory, 34 hospitalized) who completed the post-acute outcome assessment a mean of 5.7 (± 3.8) weeks after testing positive for SARS-CoV-2. K-means clustering with elbow-method fitting identified three subgroups (see figure). The first cluster (N = 31) was characterized by executive dysfunction, greater socioeconomic disadvantage, and higher rates of obesity. The second cluster (N = 32) was characterized by memory and speed impairment, higher COVID-19 severity, prevalent anosmia (70%), and greater severity of memory complaints, depression, anxiety, and fatigue. The third and largest cluster (N = 142) showed no cognitive impairment. Approximately 39% of participants (N = 79) completed the 6-month outcome assessment. Regardless of cluster membership, verbal memory, psychomotor speed, and reaction time scores improved over time. Regardless of timepoint, cluster 1 (dysexecutive) showed lower scores on cognitive flexibility and complex attention, and cluster 2 (memory-speed impaired) showed lower scores on verbal memory, psychomotor speed, and reaction time. Modeling of cluster-by-timepoint interactions showed a steeper slope of improvement in complex attention and cognitive flexibility in cluster 1 (dysexecutive). Cluster 3 (normal) showed significant improvement in fatigue, while cluster 2 (memory-speed impaired) continued to report moderate-severe fatigue, worse medical outcomes, and higher PTSD symptom severity scores at six months.
Conclusions:
Most participants were cognitively normal or experienced cognitive recovery following SARS-CoV-2 infection. The 25-30% of participants who showed cognitive impairment clustered into two distinct neurophenotypes. The dysexecutive phenotype was associated with socioeconomic factors and medical comorbidities that are non-specific to COVID-19, while the amnestic phenotype was associated with COVID-19 severity and anosmia. These results suggest that cognitive sequelae following SARS-CoV-2 infection are not uniform. Deficits may be influenced by distinct patient- and disease-specific factors, necessitating differentiated treatment approaches.
A common assumption in clinical neuropsychology is that cerebrovascular risk is adversely associated with executive function, while Alzheimer’s disease (AD) primarily targets episodic memory. The goal of the present study was to determine the cross-sectional and longitudinal validity of these assumptions using validated markers of cerebrovascular and AD burden.
Participants and Methods:
19,271 longitudinally followed participants from the National Alzheimer's Coordinating Center (NACC) database (mean age = 72.25, SD = 10.42; 58% women; 51.6% CDR = 0, 33.7% CDR = 0.5, 14.7% CDR ≥ 1) were included. Cognitive outcomes were a composite memory score and an executive function composite (UDS3-EF; Staffaroni et al., 2020). Baseline presence of cerebrovascular disease was indexed by moderate to severe white matter hyperintensities or lacunar infarct on brain MRI (yes/no), while baseline AD pathology was indexed by a positive amyloid PET scan or elevated CSF AD biomarkers (yes/no). We used linear mixed-effects models to assess the effects of baseline cerebrovascular disease, baseline AD pathology, and their interactions with time in study (years post baseline), controlling for baseline age, sex, education, and baseline MoCA score.
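A generic sketch of the mixed-effects specification described above (our notation, not the authors' exact parameterization; the random-effects structure is not stated in the abstract, so a random intercept and slope is shown as a common choice):

```latex
Y_{ij} = \beta_0 + \beta_1\,\mathrm{CVD}_i + \beta_2\,\mathrm{AD}_i + \beta_3\,t_{ij}
       + \beta_4\,(\mathrm{CVD}_i \times t_{ij}) + \beta_5\,(\mathrm{AD}_i \times t_{ij})
       + \boldsymbol{\gamma}^{\top}\mathbf{x}_i + u_{0i} + u_{1i}\,t_{ij} + \varepsilon_{ij}
```

where Y_ij is the memory or executive composite for participant i at visit j, t_ij is years since baseline, β1 and β2 capture between-person (intercept) effects, β4 and β5 capture group differences in rate of change, x_i holds the baseline covariates (age, sex, education, MoCA), and u_0i, u_1i are person-level random effects.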
Results:
Baseline cerebrovascular disease was significantly associated with a lower intercept for executive functioning (a between-person effect; p < 0.001, 95% CI [-0.37, -0.14]) but not memory, while presence of AD biomarkers was associated with a lower memory intercept (p < 0.001, 95% CI [-0.52, -0.39]) but not executive function. However, only presence of AD pathology at baseline was associated with faster longitudinal decline in both memory and executive functioning over time. Baseline cerebrovascular disease was not independently related to rate of cognitive decline.
Conclusions:
Consistent with widely held assumptions, our between-person analyses showed that MRI evidence of cerebrovascular disease was associated with worse executive functioning but not memory, while biomarker evidence of AD pathology was associated with worse memory but not executive function. Longitudinally, however, AD is the primary driver of decline in both executive and memory function. These results extend our understanding of how pathology impacts cognition in aging cohorts and highlight the importance of using longitudinal models.
Laterality of motor symptom onset in Parkinson's disease (PD) is well documented yet under-appreciated, and it remains unclear whether this laterality influences other symptoms. In particular, REM sleep behavior disorder has been shown to be strongly linked to PD. In this study we analyzed the longitudinal effect of REM symptomatology on brain lateralization in PD.
Participants and Methods:
We used baseline and 3-year visit data from 116 participants (67 without REM symptomatology (PD-non-REM), 49 with REM symptomatology (PD-REM)) aged 37-81 years from the Parkinson's Progression Markers Initiative (PPMI) dataset. Structural 3T MRI measures (cortical thickness, surface area, and folding for the Desikan atlas regions, and volumes of subcortical regions) were obtained via FreeSurfer 7.1.1. Lateralization was computed as (left − right)/(left + right). Mixed ANOVAs were performed for each region of interest.
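The lateralization index above is simple enough to state in code; a one-line sketch (the function name is ours), where positive values indicate leftward asymmetry and negative values rightward asymmetry:

```python
def lateralization_index(left: float, right: float) -> float:
    """Asymmetry index (left - right) / (left + right): ranges from -1
    (fully rightward) through 0 (perfect symmetry) to +1 (fully leftward)."""
    return (left - right) / (left + right)
```

For example, a region measuring 5200 on the left and 4800 on the right yields (5200 − 4800)/10000 = 0.04, a mild leftward asymmetry; the normalization by the total makes indices comparable across regions of different size.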
Results:
Our findings showed an increased right asymmetry of the paracentral lobule area and of the pars orbitalis area and volume in PD-REM. There was a reduced right asymmetry of the inferior parietal volume at baseline in PD-REM, whereas REM symptomatology had a stable effect at the 3-year visit. At baseline, there was an increased left asymmetry of the thickness of the caudal anterior cingulate, pars orbitalis, and pars triangularis regions in PD-REM; after 3 years, there was an increased right asymmetry in those regions. The precentral, superior frontal, and transverse temporal gyri showed the inverse pattern: an increased right asymmetry of thickness at baseline and an increased left asymmetry after 3 years. Finally, REM symptomatology was associated with greater increases in the left asymmetry of superior frontal gyrus volume and in the right asymmetry of supramarginal gyrus volume after 3 years than at baseline.
Conclusions:
These results provide evidence of the modulating effect of the disease progression on the relationship between REM symptoms and brain lateralization in PD.
Probability bias—overestimation of the likelihood that feared social outcomes will occur—is a mechanism targeted for symptom reduction in cognitive behavioral therapy for social anxiety. Safety behaviors (i.e., the conscious and unconscious actions taken to reduce discomfort in feared social situations) are related to cognitive biases and can be manipulated to reduce probability bias. The purpose of this research was to test the hypothesis that scores from a newly developed computer task to measure probability bias, the Outcome Probability Task (OPT; Draheim & Anderson, 2022) would be associated with self-reported safety behaviors during a speech task.
Participants and Methods:
Participants (N = 90) included diverse students from a university in a southern metropolitan area. Individuals reported an average age of 20.74 years (SD = 3.57) and self-identified as 'Woman' (69%), 'Man' (30%), 'Transgender' (1%), or 'Non-binary/Agender' (1%), and as 'African American or Black' (52%), 'Asian or Asian American' (19%), 'White' (16%), 'Multi-racial' (7%), 'Hispanic or Latine' (5%), or 'Middle Eastern' (1%). Participants viewed social images and imagined themselves in the scenarios, then rated the likelihood that they would be negatively evaluated on a 0-100% scale (higher ratings indicate greater probability bias), gave a speech, and completed a standardized self-report measure rating how often they engaged in avoidant safety behaviors during the speech.
Results:
Results from a linear regression indicated that OPT scores (β = .43) were positively associated with self-reported safety behaviors during the speech task, R2 = .19, F(1, 88) = 20.02, p < .001, 95% CI [0.170, 0.443].
Conclusions:
Negatively biased expectations about fear-relevant social situations—measured by a digital imagery task, the OPT—may contribute to increased engagement in avoidant safety behaviors during a speech task among a convenience sample. Outcome probability bias has previously only been measured through self-report, and the OPT is a promising new measure to multi-modally assess this aspect of social cognition. This task could be used along with imaging techniques to better understand the functional brain activity involved in outcome probability bias. Future studies could explore how activity in the orbitofrontal cortex, which is associated with the anticipation of negative outcomes, relates to responses on the OPT. If there is a connection, this brain region could be an indicator of improvement following intervention, such as cognitive behavioral therapy, for probability biases involved in social anxiety.
Differences between monolinguals and bilinguals have been documented in neuropsychological test performance, and various explanations have been offered for why these differences exist. Hispanic-Americans are individuals born and residing in the United States whose family background extends to one of the Spanish-speaking countries of Latin America or to Spain. Hispanic-American children from homes where Spanish is the first language can find themselves at an academic disadvantage because their English vocabulary may be lower than that of English monolinguals. Time perspective (TP) refers to an individual's orientation toward the past, present, or future. The ability to shift one's TP to adapt to changes in cultural context can support optimal psychological well-being. In one study, researchers reported no relationship between ethnicity and TP on cognition. To our knowledge, no study has examined the relationship between language and TP in Hispanic-Americans' speed attention performance. We predicted that monolinguals would outperform bilinguals on speed attention tasks; that monolinguals would report higher future time orientation than bilinguals, while bilinguals would report higher past and present time orientation; and that differences in TP would correlate with speed attention performance across language groups.
Participants and Methods:
The sample consisted of 119 Hispanic-Americans with a mean age of 19.45 years (SD = 1.43). Participants were divided into three groups: English first language monolingual (EFLM), English first language bilingual (EFLB), and English second language bilingual (ESLB). The Comalli Stroop parts A and B, Trail Making Test part A, and the written and oral parts of the Symbol Digit Modalities Test were used to evaluate speed attention, and the Zimbardo Time Perspective Inventory was used to evaluate time orientation in our sample.
Results:
ANOVAs revealed that the EFLM group outperformed the ESLB group on the Comalli Stroop part B, p = .020, ηp2 = .07. We also found that on the written part of the Symbol Digit Modalities Test the EFLM group outperformed both bilingual groups, p = .025, ηp2 = .06. Regarding TP, the EFLB group reported higher past-negative orientation than the EFLM group, p = .033, ηp2 = .06. Additionally, the bilingual groups reported higher present-fatalistic orientation than the EFLM group, p = .023, ηp2 = .06. Pearson's correlations revealed no significant associations between TP and speed attention tasks in any of the language groups.
Conclusions:
As expected, the EFLM group outperformed the ESLB group on several speed attention tasks, but the EFLM group only outperformed the EFLB group on the written part of the Symbol Digit Modalities Test. Additionally, our EFLB sample reported higher orientation toward the past and present compared to monolinguals. Our sample's level of acculturation could have influenced the relationship between TP and speed attention. Future studies using larger representative samples should include measures of acculturation and examine whether TP influences other cognitive domains (e.g., executive function) in Hispanic-American monolingual and bilingual speakers.