The presence of cognitive impairment corresponds with declines in adaptive functioning (Cahn-Weiner, Ready, & Malloy, 2003). Although memory loss is often highlighted as a key deficit in neurodegenerative diseases (Arvanitakis et al., 2018), research indicates that processing speed may be equally important when predicting functional outcomes in atypical cognitive decline (Roye et al., 2022). Additionally, the development of performance-based measures of adaptive functioning offers a quantifiable depiction of functional deficits within a clinical setting. This study investigated the degree to which processing speed explains the relationship between immediate/delayed memory and adaptive functioning in patients diagnosed with mild and major neurocognitive disorders using an objective measure of adaptive functioning.
Participants and Methods:
Participants (N = 115) were selected from a clinical database of neuropsychological evaluations. Included participants were ages 65+ (M = 74.7, SD = 5.15), completed all relevant study measures, and were diagnosed with Mild Neurocognitive Disorder (NCD; N = 69) or Major NCD (N = 46). The sample was majority White (87.8%) and majority women (53.0%). The Texas Functional Living Scale was used as a performance-based measure of adaptive functioning. The Coding subtest from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS-CD) was used to measure information processing speed. Composite memory measures for Immediate Recall and Delayed Recall were created from subtests of the RBANS (List Learning, Story Memory, and Figure Recall) and the Wechsler Memory Scale-IV (Logical Memory and Visual Reproduction). Multiple regressions were conducted to evaluate the importance of memory and information processing speed in understanding adaptive functioning. Age and years of education were added as covariates in regression analyses.
Results:
Significant correlations (p < .001) were found between adaptive functioning and processing speed (PS; r = .52), immediate memory (IM; r = .43), and delayed memory (DM; r = .32). In a regression model with IM and DM predicting daily functioning, only IM significantly explained daily functioning (rsp = .24, p = .009). A multiple regression revealed daily functioning was significantly and uniquely associated with IM (rsp = .28, p < .001) and PS (rsp = .41, p < .001). This was qualified by a significant interaction effect (rsp = -.29, p = .001), revealing that IM was only associated with adaptive functioning at PS scores lower than the RBANS normative 20th percentile.
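The moderated regression described above can be sketched in code. The sketch below is illustrative only: it fits an ordinary-least-squares model with an IM x PS interaction term to made-up data; the variable names and values are hypothetical, not the study's.

```python
# Illustrative sketch of an OLS regression with an interaction term,
# as in the IM x PS moderation analysis described above. All data are toy.

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X b = X'y)."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[col])]
            xty[r] -= f * xty[col]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (xty[i] - sum(xtx[i][j] * b[j] for j in range(i + 1, k))) / xtx[i][i]
    return b

# Toy data generated from: functioning = 10 + 0.5*IM + 0.8*PS - 0.3*IM*PS
rows = [(im, ps) for im in range(1, 6) for ps in range(1, 6)]
X = [[1.0, im, ps, im * ps] for im, ps in rows]
y = [10 + 0.5 * im + 0.8 * ps - 0.3 * im * ps for im, ps in rows]
b0, b_im, b_ps, b_int = ols(X, y)
```

A negative interaction coefficient (b_int here) is the kind of term that, in the study, indicated IM mattered only at lower PS levels.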
Conclusions:
Results suggest that processing speed may be a more sensitive predictor of functional decline than memory among older adults with cognitive disorders. These findings support further investigation into the clinical utility of processing speed tests for predicting functional decline in older adults.
Suicide risk among individuals with psychosis is elevated compared to the general population (e.g., higher rates of suicide attempts [SA] and completions, more severe lethality of means). Importantly, suicidal ideation (SI) seems to be more predictive of near-term and lifetime SAs in people with psychosis than in the general population. Yet, many randomized controlled trials in psychosis have excluded individuals with suicidality. Additionally, research suggests better cognitive and functional abilities are associated with greater suicide risk in psychotic disorders, which is dissimilar to the general population, but studies examining the link between cognition and suicidality are scarce. Because neuropsychological abilities can affect how individuals are able to attend to their environment, solve problems, and inhibit behaviors, further work is needed to consider how they may contribute to suicide risk in people with psychotic disorders. We sought to examine associations between neuropsychological performance and current SI and SA history in a large sample of individuals with psychosis.
Participants and Methods:
176 participants with diagnoses of schizophrenia, schizoaffective disorder, and bipolar disorder with psychotic features completed clinical interviews, a neuropsychological assessment (MATRICS Consensus Cognitive Battery subtests), and psychiatric symptom measures (Positive and Negative Syndrome Scale [PANSS]; Montgomery-Asberg Depression Rating Scale [MADRS]). First, participants were divided into groups based on their endorsement of SI in the past month on the Columbia Suicide Severity Rating Scale (C-SSRS): those with current SI (SI+; n=86) and without current SI (SI-; n=90). We also examined lifetime history of SA (n=114) vs. absence of lifetime SA (n=62). Separate t-tests, chi-square tests, and logistic regressions were used to examine associations between neuropsychological performance and the two dichotomous outcome variables (current SI; history of SA).
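As a sketch of the final analytic step, a logistic regression predicting SI group membership from working memory while controlling for depressive symptoms might look like the following. The data, scales, and hand-rolled gradient-descent fit are invented for illustration and merely show the model form, not the study's actual procedure.

```python
import math

# Hypothetical sketch: logistic regression of current SI (1/0) on working
# memory (WM) and depressive symptoms, fit by plain gradient descent.

def fit_logistic(X, y, lr=0.5, steps=2000):
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Toy standardized scores: lower WM and higher depression track with SI (y=1).
data = [(-1.0, 0.5, 1), (-0.8, 0.3, 1), (-0.5, 0.7, 1), (0.5, -0.5, 0),
        (0.8, -0.3, 0), (1.0, -0.7, 0), (-0.6, 0.4, 1), (0.7, -0.6, 0)]
X = [[1.0, wm, dep] for wm, dep, _ in data]
y = [label for _, _, label in data]
w_intercept, w_wm, w_dep = fit_logistic(X, y)
# A negative WM coefficient corresponds to an odds ratio below 1, the
# direction of effect reported in the Results.
```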
Results:
The SI groups did not differ on diagnosis, demographics (e.g., age, gender, race, ethnicity, years of education, premorbid functioning), or on positive and negative symptoms. The SI+ group reported more severe depressive symptoms (t(169)= -5.90, p<.001) and had significantly worse performance on working memory tests than the SI- group (t(173)=2.28, p=.024). Logistic regression revealed that working memory performance uniquely predicted current SI+ group membership above and beyond depressive symptoms (B= -.040; OR= .96; 95% CI [.93, .99]; p= .034). The SA groups did not significantly differ on demographic variables or on positive/negative symptoms, but those with a history of SA had more severe depressive symptoms (t(169)= -2.80, p=.006) and worse performance on tests of working memory (t(173)=2.16, p=.033) and processing speed (t(166)=2.28, p=.024) than did those without a history of SA. Logistic regression demonstrated that after controlling for depressive symptom severity, working memory and processing speed did not predict unique variance in SA history (p=.25).
Conclusions:
Worse working memory performance was associated with SI in the past month in individuals with psychotic disorders. Although our finding is consistent with literature in other psychiatric populations, it conflicts with existing psychosis literature. Thus, a more nuanced examination of how cognition relates to SI/SA in psychosis is warranted to identify and/or develop optimal interventions.
Early identification of individuals at risk for dementia provides an opportunity for risk reduction strategies. Many older adults (30-60%) report specific subjective cognitive complaints, which have also been shown to increase risk for dementia. The purpose of this study was to identify whether particular types of complaints are associated with future: 1) progression from a clinical diagnosis of normal to impairment (either Mild Cognitive Impairment or dementia) and 2) longitudinal cognitive decline.
Participants and Methods:
415 cognitively normal older adults were monitored annually for an average of 5 years. Subjective cognitive complaints were measured using the Everyday Cognition Scales (ECog) across multiple cognitive domains (memory, language, visuospatial abilities, planning, organization and divided attention). Cox proportional hazards models were used to assess associations between self-reported ECog items at baseline and progression to impairment. A total of 114 individuals progressed to impairment over an average of 4.9 years (SD=3.4 years, range=0.8-13.8). A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. Mixed effects models with random intercepts and slopes were used to assess associations between baseline ECog items and change in episodic memory or executive function on the Spanish and English Neuropsychological Assessment Scales. Time in years since baseline, the ECog items, and the interaction were key terms of interest in the models. Separate models for both the progression analyses and mixed effects models were fit for each ECog item that included age at the baseline visit, gender, and years of education as covariates.
Results:
More complaints on five of the eight memory items, three of the nine language items, one of the seven visuospatial items, two of the five planning items, and one of the six organization items were associated with progression to impairment (HR=1.25 to 1.59, ps=0.003 to 0.03). No items from the divided attention domain were significantly associated with progression to impairment. In individuals reporting no difficulty on ECog items at the baseline visit, there was no significant change over time in episodic memory (p>0.4). More complaints on seven of the eight memory items, two of the nine language items, and three of the seven visuospatial items were associated with more decline in episodic memory (ps=0.003 to 0.04). No items from the planning, organization, or divided attention domains were significantly associated with episodic memory decline. Among those reporting no difficulty on ECog items at the baseline visit, there was slight decline in executive function (ps<0.001 to 0.06). More complaints on three of the eight memory items and three of the nine language items were associated with decline in executive function (ps=0.002 to 0.047). No items from the visuospatial, planning, organization, or divided attention domains were significantly associated with decline in executive function.
Conclusions:
These findings suggest that, among cognitively normal older adults at baseline, specific complaints across several cognitive domains are associated with progression to impairment. Complaints in the domains of memory and language are associated with decline in both episodic memory and executive function.
Previous research has found that subjective cognitive decline corresponds with assessed memory impairment and could even be predictive of neurocognitive impairment. The purpose of this study was to investigate whether a single self-report item of subjective cognitive decline corresponds with the results of a performance-based measure of episodic memory.
Participants and Methods:
Older adults (n = 100; ages 60-90) were given the single-item measure of subjective cognitive decline developed by Verfaille et al. (2018).
Results:
Those who endorsed subjective cognitive decline (n = 68) had lower scores on CVLT-II long-delay free recall than those who did not endorse such a decline (n = 32). Additionally, a higher proportion of older adults with a neurocognitive diagnosis believed their memory was becoming worse compared with those without such a diagnosis.
Conclusions:
While a single item of subjective cognitive decline should not be substituted for a comprehensive evaluation of memory, the results suggest that it may have utility as a screening item.
The Rey-Osterrieth Complex Figure (ROCF) is a neuropsychological task used to measure visual-motor integration, visual memory, and executive functioning (EF) in autistic youth. The ROCF is a valued clinical tool because it provides insight into the way an individual approaches and organizes complex visual stimuli. The constructs measured by the ROCF, such as planning, organization, and working memory, are highly relevant for research in this population, but the standardized procedures for scoring the ROCF can be challenging to implement in large-scale clinical trials due to complex and lengthy scoring rubrics. We present preliminary data on an adaptation of an existing scoring system that provides quantifiable scores, can be implemented reliably, and reduces scoring time.
Participants and Methods:
Data were taken from two large-scale clinical trials focusing on EF in autistic youth. All participants completed the ROCF following standard administration guidelines. The research team reviewed commonly used scoring systems and determined that the Boston Qualitative Scoring System (BQSS) was the best fit due to its strengths in measuring EF, the process-related variables it generates, and the available normative data. Initially, the full BQSS scoring system was used, which produced comprehensive scores but was not feasible due to the time required (approximately 1-1.5 hours per figure for research assistants to complete scoring). The BQSS short form was then used, which solved the timing problem but introduced greater subjectivity into the scores, impacting the team's ability to become reliable. Independent reliability could not be calculated for this version because of the large number of discrepancies among scorers, who included 2 neuropsychologists and 4 research assistants. A novel checklist was then developed that combined aspects of both scoring systems to promote objectivity and reliability. In combination with this checklist, the team instituted weekly check-in meetings where challenging figures could be discussed. Independent reliability was calculated among all research assistant team members (n=4) for the short form and the novel checklist. Reliability was calculated based on (1) whether a drawing qualified for being brought to the whole team and (2) individual scores on the checklist.
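One common way to quantify item-level agreement among several independent scorers is pairwise percent agreement, sketched below. The rater scores are invented for illustration; the study's exact reliability computation is not specified beyond the two criteria above.

```python
from itertools import combinations

# Hypothetical sketch: mean item-level percent agreement across all pairs
# of raters scoring the same set of checklist items.

def percent_agreement(scores_by_rater):
    """scores_by_rater: one list of item scores per rater."""
    agree, total = 0, 0
    for a, b in combinations(scores_by_rater, 2):
        for x, y in zip(a, b):
            agree += (x == y)
            total += 1
    return agree / total

# Four raters scoring the same 10 checklist items (invented 0/1/2 ratings).
raters = [
    [2, 1, 0, 2, 1, 2, 0, 1, 2, 2],
    [2, 1, 0, 2, 1, 2, 0, 1, 2, 1],
    [2, 1, 0, 2, 0, 2, 0, 1, 2, 2],
    [2, 1, 0, 2, 1, 2, 0, 1, 2, 2],
]
agreement = percent_agreement(raters)  # fraction of agreeing item pairs
```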
Results:
Independent reliability was calculated for 10 figures scored with the novel checklist by a team of 4 trained research assistants. All scorers achieved at least 80% reliability, with a high average agreement (80-86%). Study team members reported that scoring took less time, averaging 30-45 minutes per figure.
Conclusions:
Inter-rater reliability was strong on the checklist the study team created, indicating its potential as a useful adaptation of the BQSS scoring system that reduces time demands and makes the tool feasible for use in large-scale clinical research studies. The checklist was easy to use, required little training, and could be completed quickly. Future research should continue to examine the reliability of the checklist and the time it takes to complete. Additionally, the ROCF should be studied more broadly in research and examined as a potential outcome measure for large-scale research studies.
“Brain fog” is one of the most common consequences of developing COVID-19. The available research focuses mainly on the decline in overall cognitive performance. Far fewer papers evaluate particular cognitive domains, and those that do focus primarily on attention and memory disorders. Because the available data on the effects of COVID-19 infection on visuospatial functions are sparse, the aim of this study was to investigate the level of visuospatial functioning in adults with a history of COVID-19 infection. We also intended to explore whether vaccination has a protective effect on cognitive functioning after COVID-19.
Participants and Methods:
The group included sixty volunteers (age: M = 40.12, SD = 16.78; education: M = 12.95, SD = 2.25; sex: M = 20, F = 40): thirty-seven with a history of COVID-19 and twenty-three who were never infected with SARS-CoV-2. Of those with a history of COVID-19, twenty-four were vaccinated at the time of the disease and thirteen were not. The groups did not differ demographically. Participants were examined with a set of neuropsychological tests assessing: a) general cognitive functioning - Montreal Cognitive Assessment (MoCA); b) attention - d2 Test of Attention; c) memory - Rey-Osterrieth Complex Figure, delayed recall; and d) visuospatial functions - Rey-Osterrieth Complex Figure, copy; Block Design subtest of the WAIS-R; and three experimental tasks: incomplete pictures, rotating puzzles, and counting cubes in a 3D tower.
Results:
Subjects with a history of COVID-19 achieved significantly lower scores on the MoCA (p = 0.033) compared to those who did not suffer from COVID-19. They also needed more time on the mental rotation task (p = 0.04). Statistically significant differences were also found in the d2 Test of Attention GP score (p = 0.001). Moreover, in the group of adults with a history of COVID-19, statistically significant differences were found between vaccinated and unvaccinated subjects: those who were vaccinated at the time of their illness performed significantly better than those who were unvaccinated in attention (d2 Test of Attention) and visuospatial functions (Rey-Osterrieth Complex Figure copy, Block Design from the WAIS-R, and the experimental trials: incomplete pictures, rotating puzzles, and counting cubes).
Conclusions:
Among adults who have been infected with COVID-19, there is a decrease in general cognitive performance as well as in individual cognitive abilities, including visuospatial functions. Vaccination appears to reduce the risk of cognitive impairment.
To effectively diagnose and treat cognitive post-COVID-19 symptoms, it is important to understand objective cognitive difficulties across the range of acute COVID-19 severity. The aim of this meta-analysis is to describe objective neuropsychological test performance in individuals with non-severe (mild/moderate) COVID-19 cases in the post-acute stage of infection (>28 days after initial infection).
Participants and Methods:
This meta-analysis was pre-registered with PROSPERO (CRD42021293124) and followed PRISMA reporting guidelines, with at least two independent reviewers conducting all aspects of the screening and data extraction process. Inclusion criteria were established before the article search and were as follows: (1) studies using adult participants with a probable or formal and documented diagnosis of COVID-19 in the post-acute stage of infection; (2) studies comparing cognitive functioning using objective neuropsychological tests in one or more COVID-19 groups and a comparison group, or one-group designs using tests with normative data; (3) asymptomatic, mild, or moderate cases of COVID-19. Twenty-seven articles (n=18,202) with three types of study designs, and three articles with additional longitudinal data, met our full criteria.
Results:
Individuals with non-severe initial COVID-19 infection demonstrated worse cognitive performance than healthy comparison participants (d=-0.412, 95% CI [-0.718, -0.176], p=0.001). We used meta-regression to examine the relationship between effect size and both average sample age and time since initial COVID-19 infection (as covariates in two independent models) in studies with comparison groups. There was no significant effect for age (b=-0.027, 95% CI [-0.091, 0.038], p=0.42). There was a significant effect for time since diagnosis, with a small improvement in cognitive performance for every day following initial acute COVID-19 infection (b=0.011, 95% CI [0.0039, 0.0174], p=0.002). However, those with mild (non-hospitalized) initial COVID-19 infections performed better than those who were hospitalized for initial COVID-19 infections (d=0.253, 95% CI [0.134, 0.372], p<0.001). For studies that used normative data comparisons, there was a small, non-significant effect relative to normative data (d=-0.165, 95% CI [-0.333, 0.003], p=0.055).
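For readers unfamiliar with the pooling machinery behind an overall effect size like the d reported above, below is a minimal sketch of inverse-variance random-effects pooling (DerSimonian-Laird). The study-level effect sizes and variances are invented; this is not a reproduction of this meta-analysis.

```python
import math

# Sketch of DerSimonian-Laird random-effects pooling of standardized mean
# differences. All inputs are made-up example values.

def pool_random_effects(d, v):
    """d: study effect sizes; v: their sampling variances."""
    w = [1 / vi for vi in v]
    d_fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird tau^2 estimate
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)
    # Re-weight with between-study variance added
    w_re = [1 / (vi + tau2) for vi in v]
    d_re = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

d_pooled, ci = pool_random_effects(
    [-0.5, -0.3, -0.45, -0.2],          # invented study effect sizes
    [0.02, 0.03, 0.025, 0.04])          # invented sampling variances
```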
Conclusions:
Individuals who have recovered from non-severe cases of COVID-19 may be at risk for cognitive decline or impairment and may benefit from cognitive health interventions.
The purpose of this study was to explore overall recovery time and Post-Concussion Symptom Scale (PCSS) scores of pediatric concussion patients who were referred to a specialty concussion clinic after enduring a protracted recovery (>28 days). This included patients who self-deferred care or received management from another provider until recovery became complicated. It was hypothesized that protracted-recovery patients who initiated care within a specialty concussion clinic would have recovery outcomes similar to those of typical acute-injury concussion patients (i.e., recovery within 3 weeks).
Participants and Methods:
Retrospective data were gathered from electronic medical records of concussion patients aged 6-19 years. Demographic data were examined based on age, gender, race, concussion history, and comorbid psychiatric diagnosis. Concussion injury data included days from injury to initial clinic visit, total visits, PCSS scores, days from injury to recovery, and days from initiating care with a specialty clinic to recovery. All participants were provided standard return-to-learn and return-to-play protocols, aerobic exercise recommendations, behavioral health recommendations, personalized vestibular/ocular motor rehabilitation exercises, and psychoeducation on the expected recovery trajectory of concussion.
Results:
52 patients were included in this exploratory analysis (Mean age 14.6, SD ±2.7; 57.7% female; 55.7% White, 21.2% Black or African American, 21.2% Hispanic). Two percent of our sample did not disclose their race or ethnicity. Prior concussion history was present in 36.5% of patients and 23.1% had a comorbid psychiatric diagnosis. The patient referral distribution included emergency departments (36%), local pediatricians (26%), neurologists (10%), other concussion clinics (4%), and self-referrals (24%).
Given the nature of our specialty concussion clinic sample, the data were not normally distributed and were likely to be skewed by outliers. As such, median values and interquartile ranges were used to describe the results. Regarding recovery variables, the median time from initial injury to clinic was 50.0 (IQR=33.5-75.5) days, the median PCSS score at the initial visit was 26.0 (IQR=10.0-53.0), and the median overall recovery time was 81.0 (IQR=57.0-143.3) days.
After initiating care within our specialty concussion clinic, the median recovery time was 21.0 (IQR=14.0-58.0) additional days, the median total visits were 2.0 (IQR=2.0-3.0), and the median PCSS score at follow-up visit was 7.0 (IQR=1-17.3).
Conclusions:
Research has shown that early referral to specialty concussion clinics may reduce recovery time and the risk of protracted recovery. Our results extend these findings to suggest that patients with protracted recovery returned to baseline similarly to those with an acute concussion injury after initiating specialty clinic care. This may be due to the vast number of resources within specialty concussion clinics including tailored return-to-learn and return-to-play protocols, rehabilitation recommendations consistent with research, and home exercises that supplement recovery. Future studies should compare outcomes of protracted recovery patients receiving care from a specialty concussion clinic against those who sought other forms of treatment. Further, evaluating the influence of comorbid factors (e.g., psychiatric and/or concussion history) on pediatric concussion recovery trajectories may be useful for future research.
Novel blood-based biomarkers for Alzheimer's disease (AD) could transform AD diagnosis in the community; however, their interpretation in individuals with medical comorbidities is not well understood. Specifically, kidney function has been shown to influence plasma levels of various brain proteins. This study sought to evaluate the effect of one common marker of kidney function (estimated glomerular filtration rate (eGFR)) on the association between various blood-based biomarkers of AD/neurodegeneration (glial fibrillary acidic protein (GFAP), neurofilament light (NfL), amyloid-β42 (Aβ42), total tau) and established CSF biomarkers of AD (Aβ42/40 ratio, tau, phosphorylated-tau (p-tau)), neuroimaging markers of AD (AD-signature region cortical thickness), and episodic memory performance.
Participants and Methods:
Vanderbilt Memory and Aging Project participants (n=329, 73±7 years, 40% mild cognitive impairment, 41% female) completed fasting venous blood draw, fasting lumbar puncture, 3T brain MRI, and neuropsychological assessment at study entry and at 18-month, 3-year, and 5-year follow-up visits. Plasma GFAP, Aβ42, total tau, and NfL were quantified on the Quanterix single molecule array platform. CSF biomarkers for Aβ were quantified using Meso Scale Discovery immunoassays, and tau and p-tau were quantified using INNOTEST immunoassays. AD-signature region atrophy was calculated by summing bilateral cortical thickness measurements captured on T1-weighted brain MRI from regions shown to distinguish individuals with AD from those with normal cognition. Episodic memory functioning was measured using a previously developed composite score. Linear mixed-effects regression models related predictors to each outcome, adjusting for age, sex, education, race/ethnicity, apolipoprotein E-ε4 status, and cognitive status. Models were repeated with a blood-based biomarker x eGFR x time interaction term, with follow-up models stratified by chronic kidney disease (CKD) staging (stage 1/no CKD: eGFR>90 mL/min/1.73m²; stage 2: eGFR=60-89 mL/min/1.73m²; stage 3: eGFR=44-59 mL/min/1.73m²; no participants had higher than stage 3 CKD).
Results:
Cross-sectionally, GFAP was associated with all outcomes (p-values<0.005) and NfL was associated with memory and AD-signature region cortical thickness (p-values<0.05). In predictor x eGFR interaction models, GFAP and NfL interacted with eGFR on AD-signature cortical thickness (p-values<0.004), and Aβ42 interacted with eGFR on tau, p-tau, and memory (p-values<0.03). Tau did not interact with eGFR. Stratified models across predictors showed that associations were stronger in individuals with better renal functioning, and no significant associations were found in individuals with stage 3 CKD. Longitudinally, higher GFAP and NfL were associated with memory decline (p-values<0.001). In predictor x eGFR x time interaction models, GFAP and NfL interacted with eGFR on p-tau (p-values<0.04). Other models were nonsignificant. Stratified models showed that associations were significant only in individuals with no CKD/stage 1 CKD and were not significant in participants with stage 2 or 3 CKD.
Conclusions:
In this community-based sample of older adults free of dementia, plasma biomarkers of AD/neurodegeneration were associated with AD-related clinical outcomes both cross-sectionally and longitudinally; however, these associations were modified by renal functioning with no associations in individuals with stage 3 CKD. These results highlight the value of blood-based biomarkers in individuals with healthy renal functioning and suggest caution in interpreting these biomarkers in individuals with mild to moderate CKD.
The prevalence of memory complaints in older adults is between 25% and 50%, with poor memory associated with decreased quality of life and declines in daily functioning. Memory training programs are a method for teaching older adults strategies and skills to improve memory performance. We conducted a feasibility study of a virtually delivered adaptation of Ecologically-Oriented Neurorehabilitation of Memory (EON-Mem) for improving memory in healthy older adults. The primary purposes of this study were to: (1) determine the feasibility of conducting EON-Mem virtually with older adults, (2) determine whether a randomized controlled trial using EON-Mem in older adults is of value, and (3) determine whether electronic delivery of memory training programs with ecological validity is beneficial for older adults.
Participants and Methods:
Twenty-five older adults 55 years of age and older were recruited for participation in a memory training program. All testing and intervention sessions were completed virtually through the Zoom platform. Measures of emotional functioning (Hospital Anxiety and Depression Scale), health-related quality of life (Short Form-36), and cognitive functioning (Ecological Memory Simulations and the Repeatable Battery for the Assessment of Neuropsychological Status; RBANS) were administered before and after the intervention. Participants attended one virtual treatment session per week, with sessions ranging between 60-90 minutes, for a total of six weeks. Between treatment sessions, participants were asked to complete daily homework assignments that allowed them to apply strategies to real-world situations. A priori, feasibility was set at an 80% completion rate; variables that influenced completion are reported.
Results:
To address questions regarding feasibility (e.g., adherence, attrition), we calculated descriptive statistics (i.e., counts, means, standard deviations, and ranges) on sample information. Of the 25 participants enrolled in the study, 21 completed all steps of the study (84% completion rate), showing the delivery format is feasible. The average age of our sample was 61.7 (SD = 5.9) years and average years of education was 17.06 (SD=2.36). Excluding those who dropped out, average time to completion was 72.76 days (SD=18.65, range=47-124). Across all six weeks, homework completion averaged 66.4% (33/49). There were varying effects of EON-Mem on the EMS memory outcomes, with the greatest proportion showing reliable improvement on the ability to recall names (10 participants [42%]). Regarding the RBANS, the greatest proportion of participants showed reliable improvement on the Story Memory task (four participants [17%]), but only two (9%) showed reliable change on the total Memory Index score.
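"Reliable improvement" of the kind counted above is typically defined via a reliable change index (RCI) in the Jacobson and Truax sense. A minimal sketch follows; the reliability coefficient, SD, and scores are illustrative values, not the study's.

```python
import math

# Hypothetical sketch of a Jacobson & Truax reliable change index:
# a pre/post difference is "reliable" when it exceeds what measurement
# error alone would plausibly produce.

def reliable_change(pre, post, sd_baseline, reliability, z_crit=1.96):
    sem = sd_baseline * math.sqrt(1 - reliability)  # standard error of measurement
    sdiff = math.sqrt(2) * sem                      # SE of the difference score
    rci = (post - pre) / sdiff
    return rci, abs(rci) > z_crit

# Illustrative values: a 15-point gain on a scale with SD=10, reliability=.80.
rci, is_reliable = reliable_change(pre=45, post=60, sd_baseline=10, reliability=0.8)
```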
Conclusions:
Overall, a virtual administration of EON-Mem in older adults was feasible.
Regarding memory changes, the majority of the sample did not demonstrate reliable improvement in memory, which may reflect several factors, including the sample's high level of education and low level of memory impairment. Notably, however, this was a feasibility study, not an intervention study. Therefore, future directions should focus on randomized controlled trials to determine efficacy.
Attention plays a key role in auditory processing of information by shifting cognitive resources to focus on incoming stimuli (Riccio, Cohen, Garrison, & Smith, 2005). Mood symptoms are known to affect the efficiency with which this processing occurs, especially when consolidation of memory is required (Massey, Meares, Batchelor, & Bryant, 2015). Without proper focus on relevant task information, improper encoding occurs, resulting in negatively affected performances. This study examines how depression, anxiety, and stress moderate the relationship between auditory attention and verbal list-learning.
Participants and Methods:
Archival data from 373 adults (Mage = 56.46, SD = 17.75; Medu = 15.45, SD = 2.2; 54% female; 74% White) were collected at an outpatient clinic. Race was not available in a small percentage of cases included in analyses. Auditory attention was assessed via the Brief Test of Attention (BTA). Learning was assessed via the California Verbal Learning Test (CVLT-II) total T-score (Trials 1-5). Mood was assessed via the Depression Anxiety and Stress Scales (DASS-42). A moderation analysis was conducted utilizing the DASS-42 as the moderator of the relationship between BTA and CVLT-II performance.
Results:
Block 1 of the hierarchical regression was significant: BTA contributed significantly to verbal learning on the CVLT-II (F(1, 378) = 30.141, p < .001, ΔR² = .074; β = .272, p < .001). When the DASS variables were introduced in Block 2, the model remained significant (F(3, 375) = 4.227, p = .006, ΔR² = .030). The DASS Anxiety subscale had a significant beta weight in the model (β = -.210, p = .004), whereas Depression (β = .039, p = .563) and Stress (β = .021, p = .765) did not.
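The significance of Block 2 rests on the F-change test for the increment in R². As a sketch, the statistic can be recomputed from the values reported above; the sample size is inferred from the reported degrees of freedom, so this is an approximation:

```python
def f_change(r2_reduced, r2_full, k_reduced, k_full, n):
    """F statistic for the change in R^2 when moving from a reduced
    model (k_reduced predictors) to a full model (k_full predictors)."""
    num = (r2_full - r2_reduced) / (k_full - k_reduced)
    den = (1.0 - r2_full) / (n - k_full - 1)
    return num / den

# Block 1: BTA alone (R^2 = .074); Block 2 adds the three DASS scales
# (delta R^2 = .030); n inferred from the reported degrees of freedom.
print(round(f_change(0.074, 0.104, 1, 4, 380), 2))  # prints 4.19
```

The result is close to the reported F(3, 375) = 4.227; the small gap reflects rounding in the published R² values.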
Conclusions:
The current study examined whether mood symptoms affect the relationship between auditory attention and verbal learning. Present results confirm previous research showing that auditory attention has a significant impact on verbal learning (Massey, Meares, Batchelor, & Bryant, 2015; Weiser, 2004). Building on prior research, these results indicate that when accounting for auditory attention, clinicians should be aware of possible confounding by anxiety, which may artificially suppress auditory attention. In some circumstances, a differential diagnosis may require considering that, absent anxiety, auditory attention might fall within the normal range. Continued assessment and evaluation of the impact of anxiety is crucial for neuropsychologists when examining performance on verbal learning measures.
To conduct secondary analyses on longitudinal data to determine whether caregiver-reported sleep quantity and sleep problems across early childhood (ages 2-5 years) predict children's attention and executive functioning at age 8 years.
Participants and Methods:
This study utilized data from the Health Outcomes and Measures of the Environment (HOME) Study. The HOME Study recruited pregnant women from 2003-2006 within a nine-county area surrounding Cincinnati, OH. Caregivers reported on their child’s sleep patterns when children were roughly 2, 2.5, 3, 4, and 5 years of age. Our analysis included 410 participants from the HOME Study whose caregivers reported sleep measures on at least one occasion or whose child completed an assessment of attention and executive functioning at age 8. At each time point, caregiver report on an adapted version of the Child Sleep Habits Questionnaire (CSHQ) was used to determine: (1) total sleep time (TST; “your child’s usual amount of sleep each day, combining nighttime sleep and naps”) and (2) overall sleep problems (23 items related to difficulties with sleep onset, sleep maintenance, and nocturnal events). Our outcome variables, collected at age 8, included caregiver-report forms and performance measures of attention and executive functioning. Caregiver-report measures included normed scores on the Behavior Rating Inventory of Executive Function, from which we focused on the Behavior Regulation Index (BRIEF BRI) and Metacognition Index (BRIEF MI). Performance-based measures included T-scores for Omission and Commission errors on the Conners’ Continuous Performance Test, Second Edition (CPT-2) and standard scores on the WISC-IV Working Memory Index (WMI). We used longitudinal growth curve models of early childhood sleep patterns to predict attention and executive functioning at age 8. Predictive analyses were run with and without key covariates: annual household income, child sex, and race. To account for general intellectual functioning, we also included children’s WISC-IV Verbal Comprehension and Perceptual Reasoning Indexes as covariates.
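As a rough illustration of the growth-curve idea (not the mixed-effects models actually fitted), each child's sleep-duration intercept and slope can be approximated by an ordinary least-squares line across the assessment ages:

```python
def child_trajectory(ages, hours):
    """OLS intercept and slope of sleep duration across assessment ages.
    A two-stage simplification of the growth-curve models in the text."""
    n = len(ages)
    mean_a = sum(ages) / n
    mean_h = sum(hours) / n
    sxx = sum((a - mean_a) ** 2 for a in ages)
    sxy = sum((a - mean_a) * (h - mean_h) for a, h in zip(ages, hours))
    slope = sxy / sxx
    intercept = mean_h - slope * mean_a
    return intercept, slope

# Hypothetical child whose sleep declines about half an hour per year
print(child_trajectory([2, 2.5, 3, 4, 5], [12.0, 11.75, 11.5, 11.0, 10.5]))
```

In the study's models these per-child intercepts and slopes (estimated jointly, with shrinkage) serve as the predictors of the age-8 outcomes.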
Results:
Children in our sample were evenly divided by sex; 60% were White. Sleep problems did not show linear or quadratic change over time, so an intercept-only model was used. Sleep problems did not predict any of our outcome measures at age 8 in unadjusted or covariate-adjusted models. As expected, sleep duration was shorter as children matured, so predictive models examined both intercept and slope. Slope was negatively associated with CPT-2 Commissions (unadjusted p=.047; adjusted p=.013); children who showed the least decline in sleep over time had fewer impulsive errors at age 8. The sleep duration intercept was negatively associated with BRIEF BRI (unadjusted p=.002; adjusted p=.043); children who slept less across early childhood had worse parent-reported behavioral regulation at age 8. Neither sleep duration slope nor intercept significantly predicted any other outcomes at age 8 in unadjusted or covariate-adjusted analyses.
Conclusions:
Total sleep time across early childhood predicts behavior regulation difficulties in later childhood. Inadequate sleep during early childhood may be a marker for or contribute to poor development of a child’s self-regulatory skills.
To identify the relative contributions and importance of modifiable fitness and demographic variables to cognitive performance in a cohort of healthy older adults.
Participants and Methods:
Metrics of modifiable fitness (gait speed, respiratory function, grip strength, and body mass index [BMI]) and cognition (executive function, episodic memory, and processing speed) were assessed in 619 older adults from the Health and Retirement Study 2016 wave (mean age = 74.9, SD = 6.9; mean education = 13.4 years, SD = 2.6; 42% female). General linear models were employed to assess the contribution of modifiable fitness variables in predicting three domains of cognition: executive function, episodic memory, and processing speed. Demographics (age, sex, education, time between appointments, and a chronic disease score) were entered as covariates for each model. Relative importance metrics were computed for all variables in each model using Lindeman, Merenda, and Gold (lmg) analysis, a technique that decomposes a model’s explained variance to describe the average contribution of each predictor variable, independent of its position in the linear model.
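The lmg metric averages each predictor's increment to R² over every possible order of entry. A compact sketch, which takes the subset-R² function as given (in practice it would come from refitting the linear model on each predictor subset):

```python
from itertools import permutations

def lmg(predictors, r2_of):
    """Lindeman-Merenda-Gold relative importance: average the R^2
    gained by adding each predictor, over all orders of entry.
    r2_of maps a set of predictor names to that submodel's R^2."""
    totals = {p: 0.0 for p in predictors}
    orders = list(permutations(predictors))
    for order in orders:
        included = set()
        for p in order:
            totals[p] += r2_of(included | {p}) - r2_of(included)
            included.add(p)
    return {p: t / len(orders) for p, t in totals.items()}
```

Because the shares are averaged over orderings, they sum to the full model's R²; with correlated predictors, shared variance is split among them rather than credited to whichever entered first.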
Results:
When all variables were entered into the general linear model, demographic and modifiable fitness variables explained 35%, 24%, and 26% of the variance in executive function, episodic memory, and processing speed, respectively (all three models were significant, p < 0.001). Age, education, respiratory function, and walking speed had higher relative importance values (all lmgs > 1.8) compared to BMI, grip strength, and other covariates in all three models (all lmgs < 1.3). Gender was also relatively important in the executive function (lmg = 4.2) and episodic memory models (lmg = 5.0). Of the modifiable fitness variables, walking speed and respiratory function had the greatest lmg values (5.8 and 6.4, respectively) in the executive function model, similar to the demographic variables age (lmg = 6.0) and education (lmg = 8.9). When demographic variables were entered as covariates, modifiable fitness variables collectively accounted for an additional 9.7%, 6.3%, and 6.0% of the variance in the executive function, episodic memory, and processing speed models, respectively (all three models were significant, p < 0.001).
Conclusions:
Our findings indicate that walking speed and respiratory function are of similar importance compared to “traditional” demographic variables such as age and education in predicting cognitive performance in a cohort of healthy older adults. Moreover, modifiable fitness variables accounted for unique variance in executive function, episodic memory, and processing speed after accounting for age and education. Modifiable fitness variables explained the most unique variance in executive function. These results extend the current literature by demonstrating that modifiable fitness variables, even when assessed with brief and relatively coarse measures of physical performance, may be useful in predicting cognitive function. Moreover, the results highlight the need to assess metrics of cognitive reserve, such as education, as well as modifiable fitness variables and their respective roles in accounting for cognitive performance. The data further suggest that relative contributions of physical performance metrics may vary by cognitive domain in healthy older adults.
In research, and particularly clinical trials, it is important to identify persons at high risk for developing Alzheimer’s Disease (AD), such as those with Mild Cognitive Impairment (MCI). However, not all persons with this diagnosis are at high risk of AD, as MCI can be divided further into amnestic MCI (aMCI), which carries high risk specifically for AD, and non-amnestic MCI (naMCI), which predominantly carries risk for other dementias. People with aMCI differ most from healthy controls and naMCI on memory tasks, as memory impairment is the hallmark criterion for an amnestic diagnosis. Given the growing use of the NIH Toolbox Cognition battery in research trials, this project investigated which Toolbox Cognition measures best differentiate aMCI from naMCI, and how each group compares with persons with normal cognition.
Participants and Methods:
A retrospective data analysis was conducted investigating performance on NIH Toolbox Cognition tasks among 199 participants enrolled in the Michigan Alzheimer’s Disease Research Center. All participants were over age 50 (51-89 years, M = 70.64) and had a diagnosis of aMCI (N=74), naMCI (N=24), or Normal Cognition (N=101). Potential demographic differences were investigated using chi-square tests and ANOVAs. A repeated-measures general linear model was used to examine potential group differences in Toolbox Cognition performance, covarying for age, which differed significantly between the aMCI and Normal groups. Linear regression was used to determine which cognitive abilities, as measured by the Uniform Data Set-3 (UDS3), might contribute to the Toolbox differences noted between the naMCI and aMCI groups.
Results:
As expected, the aMCI group had lower Toolbox memory scores than the naMCI (p=0.007) and Normal (p<0.001) groups. Interestingly, the naMCI group had lower Oral Reading scores than both the aMCI (p=0.008) and Normal (p<0.001) groups. There were no other Toolbox performance differences between the MCI groups. Performance on the following UDS3 measures explained 19.4% of the variance in Oral Reading scores: Benson delayed recall (inverse relationship), and backward digit span and phonemic fluency (positive relationships).
Conclusions:
In this study, Toolbox Picture Sequence Memory and Oral Reading scores differentiated the aMCI and naMCI groups. While the difference in memory was expected, it was surprising that the naMCI group performed worse than the aMCI and normal groups on the Toolbox Oral Reading task, a task presumed to reflect crystallized abilities resistant to cognitive decline. Results suggest that Oral Reading is positively associated with working memory and executive tasks from the UDS3, but negatively associated with visual memory. It is possible that the Oral Reading subtest is sensitive to domains of deficit aside from memory that can best distinguish aMCI from naMCI. A better understanding of the features underlying the Oral Reading task will assist in better characterizing the deficit patterns seen in naMCI, making selection of aMCI participants for clinical trials more effective.
The Latinx population is rapidly aging and growing in the US and is at increased risk for stroke and dementia. We examined whether bilingualism confers cognitive resilience following stroke in a community-based sample of Mexican American (MA) older adults.
Participants and Methods:
Participants included predominantly urban, non-immigrant MAs aged 65+ from the Brain Attack Surveillance in Corpus Christi-Cognitive study. Participants were recruited using a two-stage area probability sample with door-to-door recruitment until the onset of the COVID-19 pandemic; sampling and recruitment were then completed via telephone. Cognition was assessed with the Montreal Cognitive Assessment (MoCA; 30-item in person, 22-item via telephone) in English or Spanish. Bilingualism was assessed via a questionnaire and degree of bilingualism was calculated (range 0%-100% bilingual). Stroke history was collected via self-report. We harmonized the 22-item to the 30-item MoCA using published equipercentile equating. We conducted a series of regressions with the harmonized MoCA score as the dependent variable, stroke history and degree of bilingualism as independent variables, and age, sex/gender, education, assessment language, assessment mode (in person vs. phone), and self-reported vascular risk factors (hypertension, diabetes, heart disease) as covariates. We included a stroke history by bilingualism interaction to examine whether bilingualism modifies the association between stroke history and MoCA performance.
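Equipercentile equating maps each 22-item score to the 30-item score that occupies the same percentile rank. A minimal, unsmoothed sketch (the published crosswalk was derived from smoothed score distributions; the score lists below are placeholders, not study data):

```python
from bisect import bisect_right

def equipercentile_equate(x, short_form_scores, long_form_scores):
    """Map score x on the short form to the long-form score at the
    same percentile rank (unsmoothed equipercentile equating)."""
    short_sorted = sorted(short_form_scores)
    long_sorted = sorted(long_form_scores)
    pr = bisect_right(short_sorted, x) / len(short_sorted)  # percentile rank of x
    idx = max(min(round(pr * len(long_sorted)) - 1, len(long_sorted) - 1), 0)
    return long_sorted[idx]
```

With uniform reference distributions over 0-22 and 0-30, a midrange score maps roughly proportionally (e.g., 11/22 lands near 15/30), while the floor and ceiling map to the floor and ceiling.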
Results:
Participants included 841 MA older adults (59% women; age M(SE) = 73.5(0.2); 44% less than high school education). Most (77%) of the sample completed the MoCA in English. 93 of 841 participants reported a history of stroke. In an unadjusted model, degree of bilingualism (b = 3.41, p < .0001) and stroke history (b = -1.98, p = .003) were associated with MoCA performance. In a fully adjusted model, stroke history (b = -1.79, p = .0007) but not bilingualism (b = 0.78, p = .21) was associated with MoCA performance. When an interaction term was added to the fully adjusted model, the interaction between stroke history and bilingualism was not significant (b= -0.47, p = .78).
Conclusions:
Degree of bilingualism does not modify the association between stroke history and MoCA performance in Mexican American older adults. These results should be replicated in samples with validated stroke diagnoses, with more comprehensive bilingualism and cognitive assessments, and in other bilingual populations.
Research has shown significant deficits in cognitive domains and a decline in activities of daily living (ADL) in patients with Alzheimer disease (AD). Patients with Mild Cognitive Impairment (MCI) also struggle with ADL; moreover, research documents that many MCI patients' symptoms gradually worsen such that their diagnosis eventually converts to AD. Different cognitive domains (i.e., memory, executive function, attention, etc.) impact ADL performance. Commonly used instruments for assessing ADL are subjective measures completed by primary caregivers, which cannot capture actual ADL performance. Thus, performance-based tests of ADL, such as the Direct Assessment of Functional Status (DAFS), are more informative. The purpose of this study is to analyze classification accuracy rates for AD and MCI patients using five ADL subscales and overall performance on a performance-based ADL test.
Participants and Methods:
As part of a larger study, 61 patients diagnosed with AD and 54 age- and education-matched patients diagnosed with MCI were administered the Direct Assessment of Functional Status (DAFS) test. This test assesses orientation to time, communication skills, knowledge of transportation rules, financial abilities, and ability to shop for groceries, as well as basic daily skills such as grooming and eating. For the purpose of this study, grooming and eating abilities were not used in the analysis.
Results:
Discriminant function analysis was performed to assess classification accuracy rates for AD and MCI patients based on their ability to perform various types of ADL tasks on the DAFS. The analysis revealed that total DAFS scores and all five subscales significantly classified AD and MCI patients' performance (all p values < .01). While performance across the DAFS subscale scores accurately classified MCI at rates ranging from 67%-90%, the rates of accurate classification were much lower for AD patients (29.5%-62.3%). Of the subscales, the DAFS Shopping task best discriminated and classified performance in both groups, classifying AD at 62% and MCI at 67%.
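The per-group accuracy rates above are row-wise accuracies of the discriminant function's classification table; given predicted labels, they can be computed directly (the labels below are toy values, not study data):

```python
def groupwise_accuracy(actual, predicted):
    """Proportion of each diagnostic group assigned its own label by
    the classifier (row-wise accuracy of the confusion table)."""
    rates = {}
    for group in set(actual):
        members = [i for i, a in enumerate(actual) if a == group]
        hits = sum(predicted[i] == group for i in members)
        rates[group] = hits / len(members)
    return rates
```

Reporting accuracy per group, as the abstract does, avoids the distortion an overall accuracy figure would suffer when one group is classified much better than the other.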
Conclusions:
These results indicate that a performance-based ADL test can aid in the classification of AD and MCI. The fact that the DAFS Shopping subscale, which requires learning and memory abilities, had the best accuracy rates is consistent with the profound memory deficits found in AD patients. This study further highlights the importance of using observation-based measures to assess ADL in MCI and AD patients.
Treatment for pediatric brain tumors (PBTs) is associated with neurocognitive risk, including declines in IQ, executive function, and visual-motor processing. Low-grade tumors require less intensive treatment (i.e., focal radiotherapy [RT] or surgical resection alone) and have been associated with more favorable cognitive outcomes. However, these patients remain at risk of cognitive problems, which may present differently depending on tumor location. Executive functioning (EF), in particular, has been broadly associated with both frontal-subcortical networks (supratentorial) and the cerebellum (infratentorial). The current study examined intellectual functioning, executive functioning (set-shifting and inhibition), and visual-motor skills in patients who were treated for low-grade tumors located in either the supratentorial or infratentorial region.
Participants and Methods:
Participants were survivors (age 8-18) previously treated with focal proton RT or surgery alone for infratentorial (n=21) or supratentorial (n=34) low grade glioma (83.6%) or low grade glioneuronal tumors (16.4%). Survivors >2.5 years post-treatment completed cognitive testing (WISC-IV/WAIS-IV; D-KEFS Verbal Fluency (VF), Color-Word Interference (CW), Trail Making Test (TM); Beery Visual-Motor Integration). We compared outcomes between infratentorial and supratentorial groups using analysis of covariance (ANCOVA). Demographic and clinical variables were compared using Welch’s t-tests. ANCOVAs were adjusted for age at evaluation, age at treatment, and history of posterior fossa syndrome due to significant or marginally significant differences between groups.
Results:
Tumor groups did not significantly differ with respect to sex (49.0% male), length of follow-up (M = 4.4 years), or treatment type (74.5% surgery alone, 25.5% proton RT). Marginally significant group differences were found for age at evaluation (infratentorial M = 12.4y, supratentorial M = 14.1y, p = .054) and age at treatment (infratentorial M = 7.9y, supratentorial M = 9.7y, p = .074). Posterior fossa syndrome only occurred with infratentorial tumors (n=5, p = .003). Adjusting for covariates, the supratentorial group exhibited significantly superior performance on a measure of inhibition and set-shifting (CW Switching Time; t(32) = -2.05, p = .048, η² = .11). There was a marginal group difference in the same direction on CW Inhibition Time (t(32) = -1.77, p = .086, η² = .08). On the other hand, the supratentorial group showed significantly lower working memory than the infratentorial group (t(50) = 2.45, p = .018, η² = .11), and trends toward lower verbal reasoning (t(50) = 1.96, p = .056, η² = .07) and full-scale IQ (t(50) = 1.73, p = .090, η² = .055). No other group differences were identified across intellectual, EF, and visual-motor measures.
Conclusions:
Infratentorial tumor location was associated with weaker switching and inhibition performance, while supratentorial tumor location was associated with lower performance on intellectual measures, particularly working memory. These findings suggest that even with relatively conservative treatment (i.e., focal proton RT or surgery alone), there remains neurocognitive risk in children treated for low-grade brain tumors. Moreover, tumor location may predict distinct patterns of long-term neurocognitive outcomes, depending on which brain networks are involved.
People living with younger onset neurocognitive disorders (YOND) experience significant delays in receiving an accurate diagnosis. Although neuropsychological assessment can assist in a timely diagnosis of YOND, several barriers limit the accessibility of these services. Utilising teleneuropsychology may help bridge this service access gap. This study aimed to investigate whether similar results are obtained on neuropsychological tests administered via videoconference and in person in a sample of people living with YOND.
Participants and Methods:
Participants with a diagnosis of YOND were recruited from the Royal Melbourne Hospital (RMH) Neuropsychiatry inpatient ward and outpatient clinic, and through community advertising. A randomised counterbalanced cross-over design was used in which participants completed 14 tests across two administration sessions: one in person and one via videoconference. There was a two-week interval between the administration sessions. The videoconference sessions were set up across two laptops using the Healthdirect Video Call platform and Q-Global. Repeated-measures t-tests, intraclass correlation coefficients (ICC) and Bland-Altman plots were calculated to compare results across the two administration sessions.
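The abstract does not state which ICC form was used; for two sessions per participant, the two-way mixed-effects, consistency, single-measures form, ICC(3,1), is a common choice and can be computed from ANOVA mean squares:

```python
def icc_3_1(session1, session2):
    """ICC(3,1): two-way mixed effects, consistency, single measures,
    from ANOVA mean squares for n subjects x k=2 sessions."""
    n, k = len(session1), 2
    all_vals = session1 + session2
    grand = sum(all_vals) / (n * k)
    row_means = [(a + b) / k for a, b in zip(session1, session2)]
    col_means = [sum(session1) / n, sum(session2) / n]
    ss_rows = k * sum((r - grand) ** 2 for r in row_means)
    ss_cols = n * sum((c - grand) ** 2 for c in col_means)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because this is a consistency form, a constant shift between sessions (e.g., a practice effect that raises every score equally) does not lower the coefficient; an absolute-agreement form would.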
Results:
Thirty participants (Mage = 60.23, SD = 7.05) completed both sessions. Huntington's disease was the most common YOND diagnosis (n = 8), followed by Alzheimer's disease (n = 6), mild cognitive impairment (n = 6) and frontotemporal dementia (n = 4). Preliminary results from the current study indicate no statistically significant differences, and small effect sizes, between the in-person and videoconference sessions. ICC estimates ranged from .69 to .97 across neuropsychological tests.
Conclusions:
This study provides preliminary evidence that performances are comparable between in-person and videoconference-mediated assessments for most neuropsychological tasks evaluated in people living with YOND. Should further research confirm these preliminary results, the findings will support the provision of teleneuropsychology to address the current service gaps experienced by people with YOND.
Children with epilepsy are at greater risk of lower academic achievement than their typically developing peers (Reilly and Neville, 2015). Demographic, social, and neuropsychological factors, such as executive functioning (EF), mediate this relation. While research emphasizes the importance of EF skills for academic achievement among typically developing children (e.g., Best et al., 2011; Spiegel et al., 2021), less is known among children with epilepsy (Ng et al., 2020). The purpose of this study is to examine the influence of EF skills on academic achievement in a nationwide sample of children with epilepsy.
Participants and Methods:
Participants included 427 children with epilepsy (52% male; MAge = 10.71) enrolled in the Pediatric Epilepsy Research Consortium (PERC) Epilepsy Surgery Database who had been referred for surgery and underwent neuropsychological testing. Academic achievement was assessed by performance measures (word reading, reading comprehension, spelling, and calculation and word-based mathematics) and parent-rating measures (Adaptive Behavior Assessment System [ABAS] Functional Academics and Child Behavior Checklist [CBCL] School Performance). EF was assessed by verbal fluency, sequencing, and planning measures from the Delis-Kaplan Executive Function System (DKEFS), NEPSY, and Tower of London test. Rating-based measures of EF included the ‘Attention Problems’ subscale from the CBCL and the ‘Cognitive Regulation’ index from the Behavior Rating Inventory of Executive Function (BRIEF-2). Partial correlations assessed associations between EF predictors and academic achievement, controlling for full-scale IQ (FSIQ; a composite across intelligence tests). Significant predictors of each academic skill or rating were entered into a two-step regression that included FSIQ, demographics, and seizure variables (age of onset, current medications) in the first step, with EF predictors in the second step.
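First-order partial correlations controlling for FSIQ follow the standard formula r_xy.z = (r_xy − r_xz·r_yz) / √((1 − r_xz²)(1 − r_yz²)); a sketch with illustrative (not study) correlations:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# With no overlap with the covariate, the partial equals the zero-order r.
print(partial_corr(0.5, 0.0, 0.0))  # prints 0.5
```

Even a strong zero-order correlation can shrink substantially once a shared correlate such as FSIQ is partialled out, which matches the pattern of fewer significant relations reported in the Results.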
Results:
Although zero-order correlations were significant between EF predictors and academic achievement (.29 < r’s < .63 for performance; -.63 < r’s < -.50 for rating measures), partial correlations controlling for FSIQ showed fewer significant relations. For performance-based EF, only letter fluency (DKEFS Letter Fluency) and cognitive flexibility (DKEFS Trails Condition 4) demonstrated significant associations with performance-based academic achievement (r’s > .29). Regression models for performance-based academic achievement indicated that letter fluency (β = .22, p = .017) and CBCL Attention Problems (β = -.21, p = .002) were significant predictors of sight-word reading. Only letter fluency (β = .23, p = .006) was significant for math calculation. CBCL Attention Problems were a significant predictor of spelling performance (β = -.21, p = .009) and reading comprehension (β = -.18, p = .039). CBCL Attention Problems (β = -.38, p < .001 for ABAS; β = -.34, p = .002 for CBCL School) and BRIEF-2 Cognitive Regulation difficulties (β = -.46, p < .001 for ABAS; β = -.46, p = .013 for CBCL School) were significant predictors of parent-rated ABAS Functional Academics and CBCL School Performance.
Conclusions:
Among a national pediatric epilepsy dataset, performance-based and rating-based measures of EF predicted performance-based academic achievement, whereas only rating-based EF predicted parent-rated academic achievement, likely due at least in part to shared method variance. These findings suggest that interventions that increase cognitive regulation, reduce symptoms of attention dysfunction, and promote self-generative, flexible thinking may promote academic achievement among children with epilepsy.
This longitudinal study investigates whether reading strategies are influenced by the orthographic depth of languages, specifically Spanish or Cantonese, acquired through enrollment in bilingual immersion programs. Spanish shares an alphabet with English and is considered a phonologically transparent language (Sun et al., 2022). Research has shown that second language learners of Cantonese, an opaque language, performed better on orthographic awareness tasks that involve whole-word visual information processing (Wang and Geva, 2003). We hypothesize that students enrolled in a bilingual immersion program will outperform peers in general education (GENED) on selected reading tasks. More specifically, those in Spanish-immersion programs will perform better on English tasks involving phonological processing, whereas those in Cantonese-immersion programs will perform better on single-word/character processing tasks.
Participants and Methods:
Participants (n=102) were native English speakers recruited from the San Francisco Unified School District. Our sample included 42 females and 60 males. Thirty-nine identified as White, 33 Mixed Race, 25 Asian, 4 Latinx, and 1 Black. Thirty-nine children were in GENED, 33 in Spanish immersion programs (Sp), and 30 in Cantonese immersion programs (Cn). Each child was assessed on a core language/behavioral battery at Kindergarten (T1) and 2nd-3rd grade (T2). Time 2 participants were between 7 and 9 years old.
Those who scored at least one standard deviation below the mean (SS = 85) on a nonverbal intelligence screener (KBIT-2 Matrices) were excluded to mitigate confounds of intellectual disability. Group performance was compared on English tasks involving phonological processing (CTOPP-2 Blending Words and Elision) and single-word/character information processing (WJ-IV Letter Word Identification and KABC-II Rebus).
Results:
Simple main effects analysis showed that time had a statistically significant effect on test performance (p < 0.001). At T2, analysis revealed a significant effect of school enrollment on Blending Words [F(2, 51.0) = 4.19, p = 0.018]. As predicted, post-hoc analysis revealed that students enrolled in the Spanish-immersion program significantly outperformed those in general education on this task. Across the other three tasks, those enrolled in Spanish and Cantonese immersion programs performed as well as or better than those in GENED, but the differences were not statistically significant.
Conclusions:
This study uniquely isolated the effects of bilingual education without the confounding factors of access to resources in a more heterogeneous socioeconomic sample. Mixed results partially supported our hypotheses: Spanish-immersion participants performed significantly better than those in GENED on one English phonological processing task (Blending Words). Although Cantonese-immersion students had a higher mean performance than those in GENED on single-word/-character processing tasks, the difference was not statistically significant. This suggests that bilingual education may offer advantages in either reading strategy. According to the literature, characteristics of a language may influence literacy acquisition; thus, subsequent research may continue to examine the effect of learning multiple languages with varying levels of orthographic depth on the development of English reading strategies.