A common assumption for maximising cognitive training outcomes is that training tasks should be adaptive, with difficulty adjusted to the individual’s performance. This assumption has been tested only once in adults (von Bastian & Eschen, 2016). We aimed to examine children’s outcomes of working memory training using adaptive, self-select, and stepwise approaches to setting the difficulty of training tasks, compared with an active control condition.
Participants and Methods:
In a randomised controlled trial (ACTRN 12621000990820), children in Grades 2-5 (7 to 11 years) were allocated to one of four conditions: adaptive working memory training, self-select working memory training, stepwise working memory training, or active control. An experimental intervention embedded in Minecraft was developed for teachers to deliver in the classroom over two weeks (10 x 20-minute sessions). The working memory training comprised two training tasks with processing demands similar to daily activities: backward span with digits and following instructions with objects. The control condition comprised creative building tasks. As part of a larger protocol, children completed working memory measures similar to the training activities (primary outcome) at baseline and immediately post-intervention: backward span digits and letters versions, and following instructions objects and letters versions. Primary analyses were intention-to-treat. Secondary analyses included only children who completed all 10 sessions.
Results:
Of 204 children recruited into the study, 203 were randomised, with 95% retention at post-intervention. 76% of children completed all 10 training sessions. Comparisons between each working memory training condition and the active control on working memory measures were non-significant (f2 = 0.00), with one exception: children in the self-select condition performed on average 1 point better than controls on the following instructions objects measure (p = .02, f2 = 0.03). A pattern emerged in which the self-select condition performed numerically better than controls on most measures.
Conclusions:
We found little evidence that an adaptive approach to setting the difficulty of training tasks maximises training outcomes for children. Findings suggest that working memory outcomes following training are limited and are not modulated by the approach to setting the difficulty of training tasks. This is consistent with von Bastian & Eschen (2016), who also observed that the self-select condition (and not the adaptive condition) showed a slightly larger change in working memory performance following training than the control. Clinicians should be aware that adaptive working memory training programs might not be superior in improving children’s working memory, and that the benefits of such programs are limited.
Previous research has shown that positive outcomes are associated with receiving a neuropsychological evaluation (NPE). The current project examined hospitalization outcomes following an NPE in a sample of patients who had sustained a traumatic brain injury (TBI). Hospitalization rates were compared between the two years pre- and two years post-evaluation. The role that insurance status plays in these health outcomes was also examined. This project is part of a growing effort to evaluate outcomes of clinical neuropsychological services in order to better characterize the broad health impacts of NPEs.
Participants and Methods:
Participants for the current study came from the Optum® de-identified Electronic Health Record dataset. The final sample included 245 patients who completed at least one NPE and were diagnosed with a TBI, according to ICD codes associated with their healthcare records. Patients were aged 21-87 (M = 51.55, SD = 16.74) with an average Charlson Comorbidity Index of 1.77 (SD = 2.41). The sample consisted of 124 females (50.6%) and 121 males (49.4%). The majority of the sample identified as non-Hispanic white (N = 213; 86.9%), while 8.6% identified as another race or ethnicity. Regarding insurance, the most common insurance type was commercial (61.6%), followed by Medicare (13.5%), Medicaid (9.4%), and uninsured (6.5%). Those with unknown insurance status, race, or ethnicity were excluded from analyses of those variables.
Results:
Hospitalization incidence for the sample was significantly lower in the two years following an NPE, χ2(1, N = 245) = 26.98, p < .001, compared to the two years prior. The mean number of hospitalizations was also lower following an NPE (t(244) = 4.83, p < .001). Insurance status did not show a significant main effect or interaction on mean number of hospitalizations over time. Regarding demographic variables, there was no significant main effect of race/ethnicity group or interaction between race/ethnicity and hospitalization rate change over time. However, there was a significant interaction between hospitalization rate change over time and gender (F(242) = 4.74, p = 0.030). A significant decrease in hospitalizations over time was seen for males (p < .001), while females showed a trend-level decrease (p = .06).
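The pre/post comparisons reported here can be sketched in a few lines of Python. The counts below are simulated purely for illustration (they are not the study's data). Note that, because the pre and post observations come from the same patients, a McNemar test on the paired incidence table would be a common alternative to the chi-square shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 245  # sample size reported in the abstract

# Hypothetical per-patient hospitalization counts for the two years
# before and after evaluation (illustrative, not the study's data)
pre = rng.poisson(1.2, size=n)
post = rng.poisson(0.7, size=n)

# Incidence (any hospitalization vs. none) compared with a chi-square test
table = np.array([
    [np.sum(pre > 0), np.sum(pre == 0)],
    [np.sum(post > 0), np.sum(post == 0)],
])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Mean number of hospitalizations compared with a paired t-test
t, p_t = stats.ttest_rel(pre, post)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_chi:.4f}; t({n - 1}) = {t:.2f}, p = {p_t:.4f}")
```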
Conclusions:
Consistent with previous research, significant reductions in hospitalization incidence and mean number of hospitalizations were seen following an NPE. This finding did not vary based on insurance status. However, hospitalization outcomes varied as a function of gender. These findings suggest that completing an NPE following a traumatic brain injury may contribute to improved hospitalization outcomes, but this benefit does not appear to be seen equally for all patients. Insurance status may play a role in accessibility to care and hospitalization outcomes in this population, but that relationship is likely influenced by other factors, including racial identity, gender, and income. Future research is needed to investigate the extent to which NPEs impact hospitalization rates in the broader context of insurance, demographic factors, and socioeconomic status.
The Personality Assessment Inventory (PAI; Morey, 1991, 2007) is a 344-item self-report measure of personality, psychopathology, and factors affecting treatment. The PAI short form (PAI-SF) contains the first 160 items of the PAI and is often favoured as a screening tool or brief version to mitigate respondent burden and fatigue. The PAI has been psychometrically validated among numerous populations (Slavin-Mulford et al., 2012), while psychometric research on the PAI-SF is gradually emerging. The psychometric properties of the PAI-SF range from adequate to strong in psychiatric (Sinclair et al., 2009), forensic (Sinclair et al., 2010), outpatient and nonclinical (Ward et al., 2018), and stroke (Udala et al., 2020) samples. To advance research validating the PAI-SF among diverse populations, this project investigated the psychometric comparability of the PAI and the PAI-SF in a neuropsychiatric population. Based on previous literature, it was hypothesized that the PAI-SF would produce results congruent with those of the PAI in this sample.
Participants and Methods:
For this study, participant files (N=214) were collected retrospectively from short- and long-term residential psychiatric and substance use treatment facilities in Minnesota for patients with neurological and cognitive concerns referred for neuropsychological evaluation. The PAI-SF was scored using the first 160 items from a patient’s long-form PAI protocol. To determine the psychometric comparability of the long and short forms, paired-samples t-tests, intraclass correlations, and percent agreement in clinical classification between forms were analyzed.
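The three comparability analyses (paired-samples t-tests, intraclass correlations, and percent agreement in clinical classification) can be illustrated with a minimal Python sketch. The T-scores below are simulated, not the study's data, and ICC(3,1) is computed directly from a two-way ANOVA decomposition:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 214  # participant files reported in the abstract

# Hypothetical paired T-scores for one subscale: full PAI vs. PAI-SF
# (simulated for illustration; the study scored real protocols)
pai = rng.normal(60.0, 12.0, size=n)
pai_sf = pai + rng.normal(0.0, 4.0, size=n)

# Paired-samples t-test on subscale means
t, p = stats.ttest_rel(pai, pai_sf)

# ICC(3,1) from a two-way ANOVA decomposition (n subjects x k=2 forms)
x = np.column_stack([pai, pai_sf])
n_subj, k = x.shape
grand = x.mean()
ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n_subj - 1)
ss_err = np.sum((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2)
ms_err = ss_err / ((n_subj - 1) * (k - 1))
icc31 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Percent agreement in clinical classification (T-score of 70+ elevated)
agree = np.mean((pai >= 70) == (pai_sf >= 70)) * 100
print(f"t = {t:.2f}, p = {p:.3f}; ICC(3,1) = {icc31:.2f}; agreement = {agree:.1f}%")
```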
Results:
Analyses of participant data found that intraclass correlations ranged from .87 to .98 for each subscale on the PAI when compared to the PAI-SF, demonstrating good to excellent reliability between forms. Symptoms are considered clinically elevated when they exceed the clinical significance threshold for a subscale (typically a T-score of 70+). Agreement between the PAI and PAI-SF subscales in the classification of clinically elevated scores ranged from 86% to 100%. When forms did not agree, the PAI-SF was more likely to be clinically significant relative to the PAI. Subscale means were compared between forms using paired-samples t-tests with a Bonferroni correction. Results revealed significant differences between the PAI and PAI-SF on one validity scale (Negative Impression Management), three clinical scales (Anxiety; Depression; Antisocial Features), and one treatment scale (Treatment Rejection).
Conclusions:
Results demonstrated that the PAI and PAI-SF have high reliability between forms in a neuropsychiatric population. Although mean scores differed on a small number of subscales between the PAI and PAI-SF, the differences did not appear large enough to shift clinical classifications, as the two forms performed similarly in their identification of clinically elevated scales. Findings align with previous literature and suggest that the PAI-SF may perform adequately in a neuropsychiatric population if brevity or participant burden is of concern. However, caution is warranted when making clinical decisions with the PAI-SF, as more research is needed.
Concurrent electroencephalography (EEG) during neuropsychological assessment offers a promising method to understand real-time neural and cognitive processes during task performance. For example, previous studies using experimental tasks suggest that midline-frontal theta power (MFT) could serve as a measure of mental exertion and subjective difficulty. The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) provides an opportunity to examine this issue in neuropsychological assessment, as a widely used screening battery explicitly developed with subtests that vary in difficulty within its five domains. This study investigated the effects of task difficulty, cognitive domain, and age on elicitation of MFT during rest and RBANS administration.
Participants and Methods:
EEG was recorded during eyes-closed and eyes-open resting periods and RBANS administration in a sample of 45 healthy younger adults (n = 21; mean age = 23.29, SD = 3.27, range = 19-33; 48% female) and older adults (n = 24; mean age = 70.58, SD = 5.77, range = 59-83; 83% female). MFT was defined as the highest peak above the overall power spectrum within 4-8Hz from electrode Fz, and operationalized as a binary variable (present/absent). A multilevel generalized logistic regression model was run to assess the main effects of Age (Younger, Older), Difficulty (Easy, Hard), Domain (Rest, Immediate Memory, Visuospatial/Constructional, Language, Attention, Delayed Memory), and their potential interactions, on the presence of MFT.
Results:
In the full sample, the Coding, Figure Recall, and Picture Naming subtests were numerically most likely to elicit MFT (71.1%, 66.7%, and 62.2%, respectively), whereas Semantic Fluency, Eyes-Closed Rest, and List Recall had the lowest likelihoods (37.7%, 31%, 28.9%). Older adults were also numerically less likely to exhibit MFT (37.50% present) compared to younger adults (62.24% present). An analysis of deviance revealed a significant effect of Age (F(1,43) = 7.22, p = .01) and a significant interaction between Difficulty and Domain (F(5,220) = 4.78, p < .001). Specifically, Hard subtests in the Visuospatial/Constructional (Figure Copy; b = -2.63, p < .05) and Language (Semantic Fluency; b = -2.92, p < .01) Domains were less likely to elicit MFT than the Easy subtests (i.e., Line Orientation and Picture Naming, respectively).
Conclusions:
Results indicated that MFT can be reliably measured during neuropsychological assessment, and varies in relation to both age and task-related factors. Consistent with previous studies, older adults exhibited less MFT than younger adults in general, possibly suggesting a failure to recruit the relevant networks. Further, present findings suggest that the presence of MFT varies not only by the type of task but also by the level of difficulty. Future research with larger samples can clarify whether and how the amount of MFT elicited during specific subtests relates to objective and subjective difficulty. Overall, MFT can reliably be elicited by cognitive tasks and bears further study as a measure of real-time neural expenditure.
Tryptophan is an essential amino acid and precursor to several compounds of neurobiological significance, including serotonin, melatonin, and nicotinamide adenine dinucleotide. However, the tryptophan-kynurenine metabolic pathway exhibits “double-edged sword” effects on neurons with neuroprotective metabolites and neurotoxic intermediates. Given its involvement in neurodegenerative diseases and recent reports of alterations in the pathway in response to obesity, we set out to investigate the potential moderating effect of the kynurenine/tryptophan ratio (KTR) on the relationship between adiposity and verbal memory performance in midlife. Our study is important in providing insight into mechanisms underlying the association between adiposity and cognition through the life course and sheds light on the role of metabolic risk factors before senescence. With the current epidemic of obesity and the expected age-related increase in dementia incidence, even a small association between obesity and cognitive decline may have far-reaching public health implications.
Participants and Methods:
A total of 110 middle-aged adults aged 40-61 years participated in this cross-sectional study. Serum levels of kynurenine and tryptophan, body adiposity measured through bioimpedance, and non-contextual verbal memory performance on the California Verbal Learning Test, Second Edition (CVLT-II) were evaluated. Using factor analysis, a composite memory score was calculated from the Short Delay Free Recall, Long Delay Free Recall, and Long Delay Recognition indices. We used linear regression models including the interaction between KTR and adiposity. Sex, age, years of education, and physical activity were included as covariates, as they predict cognitive performance.
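A minimal sketch of this kind of moderation model, using simulated data, a plain least-squares fit, and the marginal-effect calculation implied by the interaction term (all variable names and generating values are hypothetical, and only age is included as a covariate for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 110  # sample size reported in the abstract

# Simulated predictors (illustrative only; the study used measured values)
ktr = rng.normal(0.035, 0.01, size=n)     # kynurenine/tryptophan ratio
adiposity = rng.normal(30.0, 8.0, size=n)  # body adiposity
age = rng.uniform(40, 61, size=n)

# Toy outcome with a built-in KTR x adiposity interaction
memory = (50.0 * ktr - 0.05 * adiposity + 1.5 * ktr * adiposity
          + rng.normal(0.0, 1.0, size=n))

# Design matrix: intercept, main effects, interaction term, covariate
X = np.column_stack([np.ones(n), ktr, adiposity, ktr * adiposity, age])
beta, *_ = np.linalg.lstsq(X, memory, rcond=None)

# Marginal effect of adiposity on memory at a given KTR value:
# d(memory)/d(adiposity) = b_adiposity + b_interaction * KTR
def marginal_effect(ktr_value):
    return beta[2] + beta[3] * ktr_value

print(f"slope at KTR=0.02: {marginal_effect(0.02):+.3f}")
print(f"slope at KTR=0.05: {marginal_effect(0.05):+.3f}")
```

Evaluating the marginal effect at low versus high KTR values is how one probes a significant interaction, mirroring the abstract's finding that adiposity related to memory only when KTR was low.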
Results:
Higher KTR was associated with greater adiposity (p < 0.01). Linear regression analyses assessing interaction effects indicated that KTR moderated the relation between adiposity and composite memory score (F(7, 100) = 5.22, p < 0.001, R2 = 0.27). These results were robust across individual memory indices and composite memory scores, and remained significant even after adjusting for relevant covariates. Interestingly, the marginal effects of adiposity on composite memory score were estimated to be statistically significant and negative (higher adiposity = poorer memory) only when KTR was low (< 0.03).
Conclusions:
The present study indicates that KTR may influence the association between adiposity and verbal memory in midlife as KTR moderated the relationship between adiposity and composite memory score even after adjusting for relevant covariates. In contrast to the notion that high KTR is related to increases in neurotoxic metabolites such as quinolinic acid, individuals with high adiposity and low KTR exhibited the weakest memory performance. Unfortunately, our study did not include measurements of quinolinic acid or kynurenic acid, which may have neuroprotective and anti-inflammatory properties. Future studies expanding the number of measured KT metabolites could shed light on the interactions between obesity and KTR on memory function in midlife.
Few to no studies have directly compared the relative classification accuracies of memory-based (Brief Visuospatial Memory Test-Revised Recognition Discrimination [BVMT-R RD] and Rey Auditory Verbal Learning Test Forced Choice [RAVLT FC]) and non-memory-based (Reliable Digit Span [RDS] and Stroop Color and Word Test Word Reading trial [SCWT WR]) embedded performance validity tests (PVTs). This study’s main objective was to evaluate their relative classification accuracies head-to-head, as well as to examine how their psychometric properties may vary among subgroups with and without genuine memory impairment.
Participants and Methods:
This cross-sectional study included 293 adult patients who were administered the BVMT-R, WAIS-IV Digit Span, RAVLT, and SCWT during outpatient neuropsychological evaluation at a Midwestern academic medical center. The overall sample was 58.0% female, 36.2% non-Hispanic White, 41.3% non-Hispanic Black, 15.7% Hispanic, 4.8% Asian/Pacific Islander, and 2.0% other, with a mean age of 45.7 (SD=15.8) and a mean education of 13.9 years (SD=2.8). Three patients had missing data, resulting in a final sample size of 290. Two hundred thirty-three patients (80%) were classified as having valid neurocognitive performance and 57 (20%) as having invalid neurocognitive performance based on performance across four independent, criterion PVTs (i.e., Test of Memory Malingering Trial 1, Word Choice Test, Dot Counting Test, Medical Symptom Validity Test). Of those with valid neurocognitive performance, 76 (48%) patients were considered to have genuine memory impairment based on a memory composite band score (T < 37 for [RAVLT Delayed Recall T-score + BVMT-R Delayed Recall T-score]/2).
Results:
The average memory composite band score was T = 49.63 for those with valid neurocognitive performance, compared with T = 27.57 for individuals with genuine memory impairment. Receiver operating characteristic (ROC) curve analyses yielded significant areas under the curve (AUCs = .79-.87) for all four validity indices (p’s < .001). When maintaining acceptable specificity (91%-95%), all validity indices demonstrated acceptable yet varied sensitivities (35%-65%). Among the subgroup with genuine memory impairment, ROC curve analyses yielded significantly lower AUCs (.64-.69) for three validity indices (p’s < .001), except RDS (AUC = .644). At acceptable specificity (88%-93%), they yielded significantly lower sensitivities across indices (19%-39%). In the current sample, RAVLT FC and BVMT-R RD had the largest changes in sensitivities, with 19% and 26% sensitivity/90%-92% specificity at optimal cut-scores of <10 and <2, respectively, for individuals with memory impairment, compared to 65% and 61% sensitivity/94% specificity at optimal cut-scores of <13 and <4, respectively, for those without memory impairment.
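The classification-accuracy metrics used here (AUC, and sensitivity/specificity at a cut-score) can be sketched with simulated PVT scores. The distributions and cut-score below are hypothetical, and the AUC is computed via the rank (Mann-Whitney) formulation rather than an ROC library:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated embedded-PVT scores (illustrative only): higher = better
valid = rng.normal(12.0, 2.0, size=233)    # valid performers
invalid = rng.normal(9.0, 2.0, size=57)    # invalid performers

# AUC via the Mann-Whitney formulation: the probability that a randomly
# chosen invalid case scores below a randomly chosen valid case
auc = (np.mean(invalid[:, None] < valid[None, :])
       + 0.5 * np.mean(invalid[:, None] == valid[None, :]))

# Sensitivity and specificity at a given cut-score; scores below the
# cut are flagged as invalid, analogous to the cut-scores reported above
def sens_spec(cut):
    sensitivity = np.mean(invalid < cut)   # invalid cases correctly flagged
    specificity = np.mean(valid >= cut)    # valid cases correctly passed
    return sensitivity, specificity

sens, spec = sens_spec(10.0)
print(f"AUC = {auc:.2f}; cut <10: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Sweeping the cut-score and keeping the highest sensitivity that preserves roughly 90% specificity is the usual way such "optimal" cut-scores are chosen.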
Conclusions:
Of the four validity scales, memory-based embedded PVTs yielded higher sensitivities while maintaining acceptable specificity compared to non-memory-based embedded PVTs. However, they were also susceptible to the greatest declines in sensitivity among the subgroup with genuine memory impairment. As a result, careful consideration should be given to using memory-based embedded PVTs among individuals with clinically significant memory impairment based on other sources of information (e.g., clinical history, behavioral observation).
Neuropsychiatric symptoms due to Alzheimer’s disease (AD) and mild cognitive impairment (MCI) can decrease quality of life for patients and increase caregiver burden. Better characterization of neuropsychiatric symptoms is needed to identify effective treatment targets. The current investigation leveraged the National Alzheimer’s Coordinating Center (NACC) Uniform Data Set (UDS) to examine the network structure of neuropsychiatric symptoms among symptomatic older adults with cognitive impairment.
Participants and Methods:
The identified sample includes those from the NACC UDS (all versions) with complete data on the Neuropsychiatric Inventory Questionnaire (NPI-Q) at initial visit. The NPI-Q is an informant-based estimation of the presence and severity of neuropsychiatric symptoms (delusions, hallucinations, agitation or aggression, depression or dysphoria, anxiety, elation or euphoria, apathy or indifference, disinhibition, irritability or lability, motor disturbance, nighttime behaviors, appetite and eating problems). The following inclusionary criteria were applied for sample identification: age 50+; cognitive status of MCI or dementia; AD was the primary or contributing cause of observed impairment; and at least one symptom on the NPI-Q was endorsed. Participants were excluded if they endorsed “unknown” or “not available” on any NPI-Q items. The final sample (n = 12,507) consisted of older adults (Mage=73.94, SDage=9.41; 46.2% male, 53.8% female) who predominantly identified as non-Hispanic white (NHW) (74.5% NHW, 10.9% non-Hispanic Black, 8.5% other, 5.8% Hispanic white, .3% Hispanic Black). The majority of the sample met criteria for dementia (77.6% dementia, 22.4% MCI) and AD was the presumed primary etiology in 93.9%.
The eLasso method was used to estimate the binary network, wherein nodes represent NPI-Q variables and edges represent their pairwise dependency after controlling for all other symptom variables in the network. In other words, the network represents the conditional probability of an observed binary variable (e.g., presence/absence of delusions) given all other measured variables (e.g., presence/absence of all other NPI-Q symptoms) (Finnemann et al., 2021; van Borkulo et al., 2014). Strength centrality and expected influence were calculated to determine relative importance of each symptom variable in the network. Network accuracy was examined with methods recommended by Epskamp et al. (2018), including edge-weight accuracy, centrality stability, and difference tests.
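A simplified sketch of the network-estimation step: the eLasso approach regresses each binary symptom on all the others with an L1 penalty and keeps an edge only when it is selected in both directions. The real method tunes the penalty via the extended BIC; the fixed `C`, the random data, and the 500-observation sample below are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_symptoms = 12  # the NPI-Q has 12 symptom items
X = rng.integers(0, 2, size=(500, n_symptoms))  # simulated presence/absence

# Neighborhood selection: L1-penalized logistic regression of each node
# on all remaining nodes (eLasso selects the penalty via EBIC; a fixed
# C is used here purely for illustration)
B = np.zeros((n_symptoms, n_symptoms))
for j in range(n_symptoms):
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.25)
    clf.fit(Z, y)
    B[j, np.arange(n_symptoms) != j] = clf.coef_.ravel()

# AND rule: keep an edge only if both regressions select it, then
# average the two coefficients to get a symmetric weight matrix
W = np.where((B != 0) & (B.T != 0), (B + B.T) / 2, 0.0)

# Strength centrality: sum of absolute edge weights incident to each node
strength = np.abs(W).sum(axis=0)
```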
Results:
Edge weights and node centrality (CS(cor=.7)=.75) were stable and interpretable. The network (M=.28) consisted of mostly positive edges and some negative edges. The strongest edges linked nodes within symptom domain (e.g., strong positive associations among externalizing symptoms). Disinhibition and agitation/aggression were the most central and influential symptoms in the network, respectively. Depression or dysphoria was the most frequently endorsed symptom, followed by anxiety, apathy or indifference, and irritability or lability.
Conclusions:
Endorsed disinhibition and agitation yielded a higher probability of additional neuropsychiatric symptoms and influenced the activation, persistence, and remission of other neuropsychiatric symptoms within the network. Thus, interventions targeting these symptoms may lead to greater neuropsychiatric symptom improvement overall. Depression or dysphoria, while highly endorsed, was least influential in the network. This may suggest that depression and dysphoria are common, but not central neuropsychiatric features of AD pathology. Future work will compare neuropsychiatric symptom networks across racial and ethnic groups and between MCI and dementia.
Prior work with older adults has shown that participating in a range of physical, social, and cognitive activities provides substantial benefits, such as improved mood and cognitive functioning. These activities can protect against common cognitive problems associated with aging (e.g., poor working memory and processing speed) and lower the risk of developing dementia, thus supporting the cognitive reserve hypothesis. Cognitive reserve refers to the preservation of an individual’s cognitive abilities over time despite changes in the brain, allowing them to remain resilient in performing daily and complex tasks (Stern, 2012). Historical factors such as education, life experiences, and occupational complexity, as well as current lifestyle behaviors such as cognitive and social activities, may serve as proxies for cognitive reserve. It is not clear whether historical proxies of cognitive reserve (e.g., educational attainment) interact with more proximal lifestyle factors (e.g., recent cognitive stimulation) to impact cognitive functioning. In this study, we examined whether education, recent cognitive activity, and their interaction predicted enhanced immediate memory and visual and verbal working memory in middle-aged to older adults.
Participants and Methods:
Participants were 62 middle-aged to older adults (age 45-93; mean age = 65.9 years; 80.6% female; 70.9% Black; ∼75.0% with high school education or higher) recruited from a Louisiana housing facility for seniors with low or fixed incomes and a local community center. Data collection included the CHAMPS Physical Activity Questionnaire for Older Adults, Wechsler Adult Intelligence Scale subtests (Digit Span Forward and Digit Span Backward), and the Size Judgment Span Task. Mixed-effects regression analyses were performed with education (less than high school, high school, college), the CHAMPS cognitive activity composite (Weaver & Jaeggi, 2021), and an education * cognitive activity interaction term as independent variables and cognitive test scores as the outcome variables. All models controlled for age and race/ethnicity.
Results:
Significant education by cognitive activity effects were observed for Digit Span Backward and Size Judgment Span, but not for Digit Span Forward. The interactions reflected a positive association between cognitive activity and cognitive functioning in people with at least a high school education, but not in people with less than a high school education.
Conclusions:
Our results support previous findings that education level and engagement in cognitive activity may serve as protective factors against cognitive decline in later life. The finding that cognitive activity was not associated with better cognitive functioning at lower levels of education suggests that earlier life experiences may moderate the benefit of lifestyle interventions later in life. Future studies should examine whether other lifestyle interventions, such as exercise, are more beneficial for people with less cognitive reserve from earlier life experiences.
People with psychotic disorders often experience neurocognitive deficits, such as neurocognitive impairment (NCI), which can negatively affect their daily activities (e.g., performing independent tasks) and recovery. Because of this, the American Psychological Association advocates integrating neurocognitive testing into routine care for people living with psychotic disorders, especially those in their first episode, to inform treatment and improve clinical outcomes. However, in low- and middle-income countries (LMICs), such as Uganda where the current study took place, administering neurocognitive tests in healthcare settings presents numerous challenges. In Uganda there are few resources (e.g., trained clinical staff, and culturally relevant and normed tests) to routinely offer testing in healthcare settings. NeuroScreen is a brief, highly automated, tablet-based neurocognitive testing tool that can be administered by all levels of healthcare staff and has been translated into indigenous Ugandan languages. To examine the psychometric properties of NeuroScreen, we measured convergent and criterion validity of the NeuroScreen tests by comparing performance on them to performance on a traditional battery of neurocognitive tests widely used to assess neurocognition in people with psychotic disorders, the MATRICS Consensus Cognitive Battery (MCCB).
Participants and Methods:
Sixty-five patients admitted into Butabika Mental Referral Hospital in Uganda after experiencing a psychotic episode and forty-seven demographically similar control participants completed two neurocognitive test batteries: the MCCB and NeuroScreen. Both batteries include tests measuring the neurocognitive domains of executive functioning, working memory, verbal learning, and processing speed. Prior to completing each battery, patients were medically stabilized and could not exhibit any positive symptoms on the day of testing. On the day of testing, medication dosages were scheduled so that patients would not experience sedative effects while testing. To examine convergent validity, we examined correlations between overall performance on NeuroScreen and the MCCB, as well as tests that measured the same neurocognitive domains. To examine criterion validity, an ROC curve was computed to examine the sensitivity and specificity of NeuroScreen to detect NCI as defined by the MCCB.
Results:
There was a large correlation between overall performance on NeuroScreen and the MCCB battery of tests, r(110) = .65, p < .001. Correlations of various strengths were found among tests measuring the same neurocognitive domains in each battery: executive functioning [r(110) = .56, p < .001], processing speed [r(110) = .44, p < .001], working memory [r(110) = .29, p < .01], and verbal learning [r(110) = .22, p < .01]. ROC analysis of the ability of NeuroScreen to detect MCCB-defined NCI showed an area under the curve of .798 and optimal sensitivity and specificity of 83% and 60%, respectively.
Conclusions:
Overall test performance between the NeuroScreen and MCCB test batteries was similar in this sample of Ugandans with and without a psychotic disorder, with the strongest correlations in tests of executive functioning and processing speed. ROC analysis provided criterion validity evidence for NeuroScreen’s ability to detect MCCB-defined NCI. These results support the use of NeuroScreen to assess neurocognitive functioning among patients with psychotic disorders in Uganda; however, more work is needed to determine how well it can be implemented in this setting. Future directions include assessing cultural acceptability of NeuroScreen and generating normative data from a larger population of Ugandan test-takers.
Functional near-infrared spectroscopy (fNIRS) is a non-invasive functional neuroimaging method that takes advantage of the optical properties of hemoglobin to provide an indirect measure of brain activation via task-related relative changes in oxygenated hemoglobin (HbO). Its advantage over fMRI is that fNIRS is portable and can be used while walking and talking. In this study, we used fNIRS to measure brain activity in prefrontal and motor regions of interest (ROIs) during single- and dual-task walking, with the goal of identifying neural correlates of cognitive-motor performance.
Participants and Methods:
Nineteen healthy young adults [mean age=25.4 (SD=4.6) years; 14 female] engaged in five tasks: standing single-task cognition (serial-3 subtraction); single-task walking at a self-selected comfortable speed on a 24.5m oval-shaped course (overground walking) and on a treadmill; and dual-task cognition+walking on the same overground course and treadmill (8 trials/condition: 20 seconds standing rest, 30 seconds task). Performance on the cognitive task was quantified as the number of correct subtractions, number of incorrect subtractions, number of self-corrected errors, and percent accuracy over the 8 trials. Walking speed (m/sec) was recorded for all walking conditions. fNIRS data were collected on a system consisting of 16 sources, 15 detectors, and 8 short-separation detectors in the following ROIs: right and left lateral frontal (RLF, LLF), right and left medial frontal (RMF, LMF), right and left medial superior frontal (RMSF, LMSF), and right and left motor (RM, LM). Lateral and medial refer to the ROIs’ relative positions on the lateral prefrontal cortex. fNIRS data were analyzed in Homer3 using spline motion correction and the iterative weighted least squares method in the general linear model. Correlations between the cognitive/speed variables and ROI HbO data were computed with a Bonferroni adjustment for multiple comparisons.
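The Bonferroni-adjusted correlation step can be sketched as follows. The behavioural and HbO values are simulated for illustration only, and the correction simply multiplies each p-value by the number of ROIs tested:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 18  # subjects with complete dual-task data, per the abstract

# Simulated behavioural and fNIRS variables (illustrative only)
walking_speed = rng.normal(1.2, 0.15, size=n)  # m/sec
hbo = {roi: rng.normal(0.0, 1.0, size=n) for roi in
       ["RLF", "LLF", "RMF", "LMF", "RMSF", "LMSF", "RM", "LM"]}

# Pearson correlations with a Bonferroni adjustment across the 8 ROIs
m = len(hbo)
results = {}
for roi, values in hbo.items():
    r, p = stats.pearsonr(walking_speed, values)
    results[roi] = (r, min(1.0, p * m))  # Bonferroni-adjusted p-value

for roi, (r, p_adj) in results.items():
    print(f"{roi}: r = {r:+.2f}, adjusted p = {p_adj:.3f}")
```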
Results:
Subjects with missing cognitive data were excluded from analyses, resulting in sample sizes of 18 for the single-task cognition, dual-task overground walking, and dual-task treadmill walking conditions. During dual-task overground walking, there was a significant positive correlation between walking speed and relative change in HbO in RMSF [r(18)=.51, p<.05] and RM [r(18)=.53, p<.05]. There was a significant negative correlation between total number of correct subtractions and relative change in HbO in LMSF [r(18)=-.75, p<.05] and LM [r(18)=-.52, p<.05] during dual-task overground walking. No other significant correlations were identified.
Conclusions:
These results indicate that there is lateralization of the cognitive and motor components of overground dual-task walking. The right hemisphere appears to be more active the faster people walk during the dual-task. By contrast, the left hemisphere appears to be less active when people are working faster on the cognitive task (i.e., serial-3 subtraction). The latter results suggest that automaticity of the cognitive task (i.e., more total correct subtractions) is related to decreased brain activity in the left hemisphere. Future research will investigate whether there is a change in cognitive automaticity over trials and if there are changes in lateralization patterns in neurodegenerative disorders that are known to differentially affect the hemispheres (e.g., Parkinson’s disease).
Older adults often spontaneously use compensatory strategies (CS) to support everyday memory and daily task completion. Recent work suggests that evaluating the quality of CS provides utility in predicting real-world prospective memory (PM) task completion. However, there has been little exploration of how CS quality may vary based on PM demands. This study examined differences in CS use and task completion accuracy across time-based (TB) and event-based (EB) PM tasks. Based on differences in self-monitoring demands and ability to engage in cognitive offloading, it was hypothesized that participants would utilize better quality strategies for TB tasks than EB tasks, which would lead to superior accuracy in completing TB tasks.
Participants and Methods:
Seventy community-dwelling older adults (Mage = 70.80 years, SD = 7.87) completed two testing sessions remotely from home via Zoom. Participants were presented with two TB PM tasks (paying a bill by its due date, calling the lab at a specified time) and two EB PM tasks (presenting a packed bag to the examiner upon a cue, initiating discussion about a physical activity log upon a cue). Participants were encouraged to use their typical CS to support task completion. Quality of CS (0-3 points per task step) and accuracy of task completion (0-4 points per task) were evaluated through lab-developed coding schemas. For each task, CS Quality scores were assigned based on how well strategies supported retrospective memory (RM) and PM task elements, and RM and PM Quality scores were summed to yield a Total Quality score. Because each task consisted of a different number of steps, CS Quality scores for each task were divided by their respective number of steps to yield measures of average quality. Paired-samples t-tests examined differences in average CS quality (Total, RM, and PM) and PM accuracy across TB and EB tasks.
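The per-task averaging and paired comparison described above can be sketched as follows; the data and function names are hypothetical, and scipy is assumed for the paired-samples t-test.

```python
from scipy import stats

def average_quality(step_scores, max_per_step=3):
    """Average CS Quality for one task: per-step scores (0-3 each) summed and
    divided by the number of steps, putting tasks with different numbers of
    steps on the same 0-3 scale."""
    if any(not 0 <= s <= max_per_step for s in step_scores):
        raise ValueError("step score out of range")
    return sum(step_scores) / len(step_scores)

# Hypothetical per-participant average-quality scores for TB vs. EB tasks
tb = [2.0, 1.8, 2.4, 2.1, 1.6]
eb = [1.7, 1.9, 2.0, 1.8, 1.5]
t_stat, p_value = stats.ttest_rel(tb, eb)  # paired-samples t-test
```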
Results:
Participants’ Total CS Quality was equivalent for TB tasks (M = 1.92, SD = 0.64) and EB tasks (M = 1.87, SD = 0.68), t(69) = 0.60, p = .55. Comparisons of subscores revealed that while participants used similar quality RM supports for TB tasks (M = 1.67, SD = 0.66) and EB tasks (M = 1.78, SD = 0.68), t(69) = 1.39, p = .17, participants utilized superior quality PM supports for TB tasks (M = 2.16, SD = 0.70) compared to EB tasks (M = 1.97, SD = 0.73), t(69) = 2.46, p = .02. Additionally, participants completed TB tasks with greater accuracy (M = 3.21, SD = 0.74) than EB tasks (M = 2.84, SD = 0.89), t(69) = 3.62, p < .001.
Conclusions:
While participants exhibited similar quality CS for RM components across TB and EB tasks, they displayed superior quality CS for PM components of TB tasks. This difference in quality may have contributed to participants completing real-world TB PM tasks with greater accuracy than EB tasks. Results contrast with trends in lab-based PM tasks, in which participants usually complete EB tasks more accurately. Findings may have implications for interventions, such as an enhanced focus on teaching high-quality CS to support real-world EB tasks.
Cognitive impairment is observed in up to two-thirds of persons with Multiple Sclerosis (MS). Impairment in cognitive processing speed (PS) is the most prevalent cognitive disturbance; it occurs early in the course of disease and is strongly associated with disease progression, various brain parameters, and everyday functional activities. As such, cognitive rehabilitation for PS impairments should be an integral part of MS treatment and management. The current study examines the efficacy of Speed of Processing Training (SOPT) to improve PS in individuals with MS. SOPT was chosen because of its significant positive results in the aging population.
Participants and Methods:
This double-blind, placebo-controlled randomized clinical trial included 84 participants with clinically definite MS and impaired PS: 43 in the treatment group and 41 in the placebo control group. Outcomes included changes in the Useful Field of View (UFOV) and a neuropsychological evaluation (NPE) including measures of PS (e.g., Pattern Comparison and Letter Comparison). Participants completed a baseline NPE and a repeat NPE post-treatment. Treatment consisted of 10 sessions delivered twice per week for 5 weeks. After the 5 weeks, the treatment group was randomized to booster sessions or no contact. Long-term follow-up assessments were completed 6 months after completion of treatment. The primary outcomes were tests of PS, including the UFOV and neuropsychological testing.
Results:
A significant effect of SOPT was observed on both the UFOV (large effect) and Pattern Comparison, with a similar pattern of results noted on Letter Comparison, albeit at a trend level. The treatment effect was maintained 6 months later. The impact of booster sessions was not significant. Correlations between degree of improvement on the UFOV and the number of levels completed within each training task were significant for both Speed and Divided Attention, indicating that completion of more levels of training correlated with greater benefit.
Conclusions:
SOPT is effective for treating PS deficits in MS with benefit documented on both the UFOV and a neuropsychological measure of PS. Less benefit was observed as the outcome measures became more distinct in cognitive demands from the treatment. Long-term maintenance was observed. The number of training levels completed within the 10-sessions exerted a significant impact on treatment benefit, with more levels completed resulting in greater benefit.
Neurodegeneration in Alzheimer’s disease (AD) is typically assessed through brain MRI, and proprietary software can provide normative quantification of regional atrophy. However, proprietary software can be cost-prohibitive for research settings. Thus, we used the freely available software NOrmative Morphometry Image Statistics (NOMIS), which generates normative z-scores from FreeSurfer-segmented T1-weighted images, to determine whether these scores replicate established patterns of neurodegeneration in the context of amnestic mild cognitive impairment (aMCI), and whether these measures correlate with episodic memory test performance.
Participants and Methods:
Patients with aMCI (n = 25) and cognitively normal controls (CN; n = 74) completed brain MRI and two neuropsychological tests of episodic memory (the Rey Auditory Verbal Learning Test and the Wechsler Logical Memory Tests I & II), from which a single composite of normed scores was computed. A subset returned for follow-up (aMCI n = 11, CN n = 52) after ∼15 months and completed the same procedures. T1-weighted images were segmented using FreeSurfer v6.0 and the outputs were submitted to NOMIS to generate normative morphometric estimates for AD-relevant regions (i.e., hippocampus, parahippocampus, entorhinal cortex, amygdala) and control regions (i.e., cuneus, lingual gyrus, pericalcarine gyrus), controlling for age, sex, head size, scanner manufacturer, and field strength. Baseline data were used to test for differences in ROI volumes and memory between groups and to assess the within-group associations between ROI volumes and memory performance. We also evaluated changes in ROI volumes and memory over the follow-up interval by testing the main effects of time, group, and the group X time interactions. Lastly, we tested whether change in volume was associated with declines in memory.
Results:
At baseline, the aMCI group performed 2 SD below the CN group on episodic memory and exhibited smaller volumes in all AD-relevant regions (volumes 0.4 - 1.2 SD below CN group, ps < .041). There were no group differences in control region volumes. Memory performance was associated with volumes of the AD-relevant regions in the aMCI group (average rho = .51) but not with control regions. ROI volumes were not associated with memory in the CN group. At follow-up, the aMCI group continued to perform 2 SD below the CN group on episodic memory tests; however, change of performance over time did not differ between groups. The aMCI group continued to exhibit smaller volumes in all AD-relevant regions than the CN group, with greater declines in hippocampal volume (17% annual decline vs. 8% annual decline) and entorhinal volume (54% annual decline vs. 5% annual decline). There was a trending Group X Time interaction such that decrease in hippocampal volume was marginally associated with decline in memory for the aMCI group but not the CN group.
Conclusions:
Normative morphometric values generated from freely available software demonstrated expected patterns of group differences in AD-related volumes and associations with memory. Significant effects were localized to AD-relevant brain regions and only occurred in the aMCI group. These findings support the validity of these free tools as reliable and cost-effective alternatives to proprietary software.
Physical inactivity is associated with a greater risk of frailty, neuropsychiatric symptoms, worse quality of life, and increased risk for Alzheimer’s disease. Little is known about how physical activity engagement of older adults during the COVID-19 pandemic relates to subjective cognitive concerns and management of emotional distress. This study aimed to examine whether there were changes in physical activity during the pandemic in older adults at baseline and 3 months compared to before the pandemic and whether these changes varied based on age, sex, income level, and employment status. Further, we examined whether individuals who reported engaging in less physical activity experienced greater subjective cognitive difficulties and symptoms of depression and anxiety than those who maintained or increased their physical activity levels.
Participants and Methods:
301 participants (73% non-Hispanic white) completed an online survey in either English or Spanish between May and October 2020 and again 3 months later. The Everyday Cognition Scale was used to measure subjective cognitive decline, the CES-D-R-10 scale to measure depressive symptoms, and the GAD-7 scale to measure anxiety symptoms. Changes in physical activity were measured with the question “Since the coronavirus disease pandemic began, what has changed for you or your family in regard to physical activity or exercise levels?” with the options “less physical activity,” “increase in physical activity,” or “same activity level.” Income was self-reported as high, middle, or low. Chi-squared tests were used to examine differences in physical activity maintenance by age, income level, sex, and employment status.
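A chi-squared comparison of activity change by income group might be sketched as below; the counts are hypothetical (not the study's data), and scipy is assumed.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = income group (high, low);
# columns = activity change (maintained/increased, decreased)
table = [[40, 30],
         [20, 45]]
chi2, p, dof, expected = chi2_contingency(table)
```

`chi2_contingency` also returns the expected cell counts, which are worth inspecting: the chi-squared approximation is questionable when expected counts fall below 5.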
Results:
Most individuals (60%) reported having decreased their physical activity levels during the pandemic, at both baseline and the 3-month follow-up. There were differences in physical activity levels based on income and age: at baseline, participants with a high income reported engaging in more physical activity than those with a low income (χ2=4.78, p=.029). At the 3-month follow-up, middle-income participants reported being less active than high-income earners (χ2=8.92, p=.003), and younger participants (approximately 55-65 years) reported being less active than older participants (χ2=5.28, p=.022). Those who reported an increase in their physical activity levels had fewer cognitive concerns compared to those who were less active at baseline, but this difference was not seen at the 3-month follow-up. Participants of all ages who reported having maintained or increased their physical activity levels had fewer depressive symptoms than those who were less active (p < 0.0001). Those who reported maintaining their physical activity levels exhibited fewer anxiety symptoms than those who were less active (p < 0.01).
Conclusions:
Older adults reported changes in physical activity levels during the pandemic, and some of these changes varied by sociodemographic factors. Further, maintaining physical activity levels was associated with lower symptoms of depression and anxiety and fewer cognitive concerns. Encouraging individuals and providing resources to increase physical activity may be an effective way to mitigate some of the pandemic’s adverse effects on psychological wellbeing and may potentially help reduce the risk for cognitive decline. Alternatively, it is possible that improving emotional distress could lead to an increase in physical activity levels and better cognitive health.
Nonpathological aging has been linked to decline in both verbal and visuospatial memory abilities in older adults. Disruptions in resting-state functional connectivity within well-characterized, higher-order cognitive brain networks have also been coupled with poorer memory functioning in healthy older adults and in older adults with dementia. However, there is a paucity of research on the association between higher-order functional connectivity and verbal and visuospatial memory performance in the older adult population. The current study examines the association between resting-state functional connectivity within the cingulo-opercular network (CON), frontoparietal control network (FPCN), and default mode network (DMN) and verbal and visuospatial learning and memory in a large sample of healthy older adults. We hypothesized that greater within-network CON and FPCN functional connectivity would be associated with better immediate verbal and visuospatial memory recall. Additionally, we predicted that within-network DMN functional connectivity would be associated with improvements in delayed verbal and visuospatial memory recall. This study helps to glean insight into whether within-network CON, FPCN, or DMN functional connectivity is associated with verbal and visuospatial memory abilities in later life.
Participants and Methods:
330 healthy older adults between 65 and 89 years old (mean age = 71.6 ± 5.2) were recruited at the University of Florida (n = 222) and the University of Arizona (n = 108). Participants underwent resting-state fMRI and completed verbal memory (Hopkins Verbal Learning Test - Revised [HVLT-R]) and visuospatial memory (Brief Visuospatial Memory Test - Revised [BVMT-R]) measures. Immediate (total) and delayed recall scores on the HVLT-R and BVMT-R were calculated using each test manual’s scoring criteria. Learning ratios on the HVLT-R and BVMT-R were quantified by dividing the number of stimuli (verbal or visuospatial) learned between the first and third trials by the number of stimuli not recalled after the first learning trial. CONN Toolbox was used to extract average within-network connectivity values for CON, FPCN, and DMN. Hierarchical regressions were conducted, controlling for sex, race, ethnicity, years of education, number of invalid scans, and scanner site.
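The learning-ratio computation described above reduces to a one-line formula; here is a minimal sketch (the function name is ours, not the authors'). Both the HVLT-R and BVMT-R have a maximum of 12 per trial.

```python
def learning_ratio(trial1, trial3, total_items=12):
    """Learning ratio: stimuli gained between trials 1 and 3 divided by the
    stimuli not yet recalled after trial 1. Ranges from 0 (no gain) to 1
    (all remaining items learned)."""
    remaining = total_items - trial1
    if remaining == 0:
        return float("nan")  # at ceiling on trial 1; ratio undefined
    return (trial3 - trial1) / remaining
```

For example, a participant recalling 5 of 12 words on trial 1 and 10 on trial 3 learned 5 of the 7 available items, a ratio of about 0.71.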
Results:
Greater CON connectivity was significantly associated with better HVLT-R immediate (total) recall (β = 0.16, p = 0.01), HVLT-R learning ratio (β = 0.16, p = 0.01), BVMT-R immediate (total) recall (β = 0.14, p = 0.02), and BVMT-R delayed recall performance (β = 0.15, p = 0.01). Greater FPCN connectivity was associated with better BVMT-R learning ratio (β = 0.13, p = 0.04). HVLT-R delayed recall performance was not associated with connectivity in any network, and DMN connectivity was not significantly related to any measure.
Conclusions:
Connectivity within CON demonstrated a robust relationship with multiple components of memory function across both verbal and visuospatial domains. In contrast, FPCN evidenced a relationship only with visuospatial learning, and DMN was not significantly associated with any memory measure. These data suggest that CON may be a valuable target in longitudinal studies of age-related memory changes, as well as a possible target for future non-invasive interventions to attenuate memory decline in older adults.
Gulf War (GW) veterans were exposed to many neurotoxicants during the 1990-1991 Gulf War. These included chemical warfare agents such as sarin nerve gas, combustion byproducts from oil well fires and from diesel fuel in tent heaters, pesticides, and prophylactic anti-nerve gas pyridostigmine bromide (PB) pills, all of which have been associated with both cognitive and mood concerns. Few longitudinal studies have examined cognitive functioning in relation to these toxicant exposures. In our longitudinal Fort Devens cohort, we found decrements over time in verbal learning and memory but no differences in measures of nonverbal memory and executive function. To describe changes over time in this GW veteran cohort more accurately, we examined cognitive functioning in those with probable Post-Traumatic Stress Disorder (PTSD) versus those without.
Participants and Methods:
The Fort Devens Cohort (FDC) is the longest-running cohort of GW veterans, with initial baseline cognitive, mood, exposure, and trauma assessments in 1997-1998 and follow-up evaluations in 2019-2022. FDC veterans (N=48) who completed both time points were the participants for this study. Veterans were categorized into dichotomous (yes/no) groups by PTSD classification. The PTSD Checklist (PCL) was used to determine PTSD case status. Symptom ratings on the PCL were summed (range: 17-85), and a cutoff score of 36 or higher was used to indicate probable PTSD. Neuropsychological measures of mood (POMS) and memory (Visual Reproductions from the Wechsler Memory Scale-Revised [WMS-R]; California Verbal Learning Test-Second Edition [CVLT-II]) and of executive function and language (Delis-Kaplan Executive Function System Color-Word; verbal fluency, Animals) were compared over time using paired t-tests.
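The PCL scoring rule can be sketched in a few lines; the function name is hypothetical, while the 17-item, 1-5 rating structure and the cutoff of 36 follow the text.

```python
def pcl_probable_ptsd(item_ratings, cutoff=36):
    """Sum the 17 PCL item ratings (1-5 each; total range 17-85) and apply
    the study's cutoff: a total of 36 or higher indicates probable PTSD."""
    if len(item_ratings) != 17 or any(not 1 <= r <= 5 for r in item_ratings):
        raise ValueError("expected 17 ratings, each 1-5")
    total = sum(item_ratings)
    return total, total >= cutoff
```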
Results:
The study sample (N=48) was 92% male, and 96% reported active-duty status at the time of the GW. Mean current age was 58 years. All veterans reported exposure to at least one war-related toxicant. 48% met criteria for probable PTSD (n=23) while 52% did not (n=25). No differences between groups were found on any of the POMS subscales, nor were differences seen in verbal memory, executive function, or language tasks. There were, however, significant differences in nonverbal memory, with the probable-PTSD group recalling fewer details at delay on the WMS-R Visual Reproductions (p<0.05).
Conclusions:
In this longitudinal analysis, GW veterans with probable PTSD showed declines in nonverbal memory and consistent levels of function on all other tasks. Basic mood scales did not show decline; therefore, these results are not due to generalized changes in mood. All participants reported at least one neurotoxicant exposure, and we did not have the power to examine the impact of individual exposures, so we cannot rule out contributing factors other than PTSD. This study highlights the importance of longitudinal follow-up and continual documentation of GW veterans’ memory performance and their endorsement of mood symptoms over time. Specifically, these findings suggest that future studies should examine the prolonged course of memory and mood symptomatology in GW veterans who have endorsed a traumatic experience.
Trait mindfulness is associated with reduced stress and psychological well-being. However, evidence regarding its effects on cognitive function is mixed and certain facets of trait mindfulness are associated with higher negative affect (NA). This study investigated whether specific mindfulness skills were associated with cognitive performance and affective traits.
Participants and Methods:
165 older adults from the Maine Aging Behavior Learning Enrichment (M-ABLE) Study completed the National Alzheimer’s Coordinating Center T-Cog battery, the Five Facet Mindfulness Questionnaire, and the Positive and Negative Affect Schedule-SF.
Results:
All five facets of trait mindfulness were associated with higher Positive Affect and lower NA, with the exception that Observation was not associated with trait NA. Partial correlations adjusting for age indicated that better episodic memory was associated with Observation, Describing, and Nonreactivity. Verbal fluency performance was associated with Observation, while Working Memory was associated with Nonjudgment. Executive Attention/Processing speed was associated with total mindfulness scores and showed a trend relationship with Nonreactivity.
Conclusions:
Mindfulness skills showed specific patterns with affective traits and cognitive function. These findings suggest that the ability to maintain awareness, describe, and experience internal and external states without reacting to them may partly rely on episodic memory. Mindful awareness skills also may depend on frontal and language functions, while the ability to experience emotional states without reacting may require Executive Attention. Global mindfulness and a non-judgmental stance may require auditory attention. Alternatively, mindfulness skills may serve to enhance these functions. Hence, longitudinal research is needed to determine the directionality of these findings.
Certain contextual factors, including non-restorative sleep (Niermeyer & Suchy, 2020), sleep deprivation (Lim & Dinges, 2010), burdensome emotion regulation (Franchow & Suchy, 2017), and pain interference (Boselie, Vancleef, & Peters, 2016) have been shown to contribute to temporary declines in executive functioning (EF). Contextually-induced decrements in EF in turn have been associated with temporary decrements in performance of instrumental activities of daily living (IADLs) among healthy older adults (Brothers & Suchy, 2021; Suchy et al., 2020; Niermeyer & Suchy, 2020). Furthermore, some evidence suggests that higher variability in levels of contextual factors across days (i.e., deviations from routine) may contribute to IADL lapses above and beyond average, albeit high, levels of these contextual burdens (Bielak, Mogle, & Sliwinski, 2019; Brothers & Suchy, 2021). Taken together, these findings highlight the importance of accounting for transient contextual burdens when assessing EF and IADL abilities in older adults.
Poor sleep quality has been associated with poor IADL performance (Fung et al., 2012; Holfeld & Ruthing, 2012) when assessed in a single visit. However, the potential contributions of variable sleep quantity and quality to IADL performance have not been assessed in healthy older adults using longitudinal methods. Accordingly, the aim of this study was to examine the impact of fluctuations in sleep quantity and quality, assessed daily, above and beyond average levels, on at-home IADL performance across 18 days in a group of community-dwelling older adults.
Participants and Methods:
Fifty-two non-demented community-dwelling older adults (M age = 69 years, 65% female) completed 18 days of at-home IADL tasks, as well as daily ecological momentary assessment (EMA) measures of EF, sleep hours, and restfulness questions. An 18-day mean EMA EF score was computed controlling for practice effects. Mean levels of and variability in EMA sleep hours and EMA restfulness ratings were computed. IADL scores were computed for timeliness and accuracy across the 18 days.
Results:
A series of hierarchical linear regressions were run with IADL timeliness and accuracy, separately, as the dependent variable. In the first step, demographics (age, sex, education) were entered. Then, EMA EF was entered, followed by mean EMA sleep hours and mean EMA restfulness, and lastly, variability in EMA sleep hours and EMA restfulness. EMA EF significantly predicted both IADL accuracy (B = .46, p = .001) and timeliness (B = .45, p = .005). Variability in EMA sleep hours (B = .40, p = .008) and restfulness (B = -.29, p = .043) both predicted IADL accuracy beyond the other variables, while mean levels did not. Additionally, variability in sleep hours and restfulness substantially improved the prediction of IADL accuracy above and beyond the other variables in the model, accounting for an additional 16% of variance (F(2) = 3.80, ΔR2 = .16, p = .006). Neither mean levels of nor variability in sleep hours or restfulness predicted IADL timeliness.
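The incremental-variance (ΔR²) logic behind a hierarchical regression can be sketched with plain least squares. This is an illustrative reimplementation with hypothetical names, assuming NumPy, not the authors' analysis code.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary-least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())

def delta_r2(X_reduced, X_full, y):
    """Incremental variance explained when a block of predictors (e.g.,
    variability in sleep hours and restfulness) is added to a model that
    already contains the earlier steps."""
    return r_squared(X_full, y) - r_squared(X_reduced, y)
```

The significance of the increment is then evaluated with an F-test on the change in R², as reported above.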
Conclusions:
Results suggest that greater fluctuations in the amount and quality of sleep across days may render healthy older adults more susceptible to lapses in daily functioning abilities, particularly the accuracy with which IADL tasks are completed.
Lower levels of social support in persons with Multiple Sclerosis (PwMS) are associated with myriad poor outcomes including worse mental health, lower quality of life, and reduced motor function (Kever et al., 2021). Social support has also been associated with physical pain (Alphonsus et al., 2021) and sleep disturbance (Harris et al., 2020) in PwMS. Pain is one of the most common symptoms of MS (Valentine et al., 2022) and is also known to be related to sleep disturbance (Neau et al., 2012). With these considerations in mind, the goal of the current study was to examine social support as a possible moderator in the relationship between pain and sleep quality in PwMS.
Participants and Methods:
This cross-sectional study included 91 PwMS (76 female). A neuropsychological battery and psychosocial questionnaires were administered. For sleep quality, a composite was created from the sleep and rest scale of the Sickness Impact Profile (SIP), sleep-related items on the Multiple Sclerosis-Symptom Severity Scale (MS-SSS) (i.e., sleeping too much or sleep disturbance, fatigue or tiredness, and not sleeping enough), and an item from the Sleep Habits Questionnaire (SHQ) ("How many nights on average are you troubled by disturbed sleep?"). This composite (α = .76) has been used in prior research. Lower scores were indicative of worse sleep quality. Pain intensity and pain interference were measured using the Brief Pain Inventory (BPI). Pain intensity was calculated from four pain indices (i.e., pain at its worst in the last 24 hours, at its least in the last 24 hours, on average, and current pain at the time of assessment), and pain interference was calculated from seven indices (i.e., general activity, mood, walking ability, normal work, relationships with others, sleep, and enjoyment of life). The Social Support Questionnaire (SSQ) measured average satisfaction with supports. A series of hierarchical linear regressions were conducted with the sleep quality index as the outcome variable and satisfaction with social supports, both indices of pain (intensity and interference), and their interactions as predictors. Then, simple effects tests were used to clarify the pattern of any significant interactions.
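The moderation model (centered predictors plus their product term) and the simple-slope follow-up can be sketched as below; the names are hypothetical and NumPy is assumed.

```python
import numpy as np

def moderation_design(pain, support):
    """Centered design matrix for testing whether satisfaction with social
    support moderates the pain-sleep relationship:
    columns = [intercept, pain_c, support_c, pain_c * support_c]."""
    pain_c = pain - pain.mean()
    support_c = support - support.mean()
    return np.column_stack(
        [np.ones(len(pain)), pain_c, support_c, pain_c * support_c])

def simple_slope(beta, support_level_c):
    """Simple effect of pain on sleep quality at a chosen (centered) level
    of support: b_pain + b_interaction * support_level."""
    _, b_pain, _, b_interaction = beta
    return b_pain + b_interaction * support_level_c
```

Evaluating `simple_slope` at high and low (centered) support levels is what distinguishes the buffering pattern reported in the Results.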
Results:
Regression analysis revealed that the interaction between pain interference and satisfaction with social support was significant (p = .034). Simple effects tests revealed that when satisfaction with social support was high, pain interference was associated with better sleep quality (p < .001). The interaction between pain intensity and satisfaction with social supports was also significant (p = .014). Simple effects tests revealed that at high levels of satisfaction with social supports, pain intensity was associated with better sleep quality (p < .001).
Conclusions:
Satisfaction with social support moderated the relationships of both pain interference and pain intensity with sleep quality in PwMS. Specifically, high satisfaction with social support buffered against the negative effects of pain interference and pain intensity on sleep quality. This provides evidence that interventions aimed at increasing social support in PwMS may lead to improvements in sleep quality and reduce the impact of pain on sleep.
Attention is the backbone of cognitive systems and is requisite for many cognitive processes vital to everyday functioning, including memory, problem solving, and the cognitive control of behavior. Attention is commonly impaired following traumatic brain injury and is a critical focus of rehabilitation efforts. The development of reliable methods to assess rehabilitation-related change is therefore paramount. The Attention Network Test (ANT) has been used previously to identify three independent yet interactive attention networks: alerting, orienting, and executive control (EC). We examined the behavioral and neurophysiological robustness and temporal stability of these networks across multiple sessions to assess the ANT’s potential utility as a measure of change during attention rehabilitation interventions.
Participants and Methods:
15 healthy young adults completed 4 sessions of the ANT (1 session per 7-day period). ANT networks were assessed within the task by contrasting opposing stimulus conditions: cued vs. non-cued trials probed alerting, valid vs. invalid spatial cues probed orienting, and congruent vs. incongruent targets probed EC. Differences in median correct-trial reaction times (RTs) and error rates (ERs) between the condition pairs served as attention network scores; robustness of network effects was determined by one-sample t-tests against a mean of 0 at each session. Sixty-four-channel electroencephalography (EEG) data were acquired concurrently and processed in Matlab to create condition-related event-related potentials (ERPs), particularly the cue- and probe-related P1, N1, and P3 deflection amplitudes, measured using signed-area calculations in regions of interest (ROIs) determined from spherical-spline voltage maps. This enabled us to examine the robustness of cue- and probe-related attention-network ERPs.
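The network-score contrasts can be computed directly from the condition medians, as sketched below; the condition labels are hypothetical shorthand for the contrasts named above.

```python
import statistics

def network_scores(rts):
    """ANT network scores from median correct-trial RTs (ms). `rts` maps a
    condition label to that condition's correct-trial RTs. Contrasts follow
    this task variant: cued vs. non-cued (alerting), valid vs. invalid
    spatial cues (orienting), congruent vs. incongruent targets (EC)."""
    med = {cond: statistics.median(v) for cond, v in rts.items()}
    return {
        "alerting": med["no_cue"] - med["cued"],
        "orienting": med["invalid_cue"] - med["valid_cue"],
        "executive": med["incongruent"] - med["congruent"],
    }
```

Each score is then tested against 0 across participants (the one-sample t-tests above) to establish that the network effect is present at a given session.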
Results:
All three attention networks showed robust effects. However, only the EC RT and ER network scores remained significantly robust [t(14)s>13.9, ps<.001] across all sessions, indicating that EC is robust in the face of repeated exposure. Session 1 showed the greatest EC-RT robustness effect, which became smaller over subsequent sessions per a Session x Congruency ANOVA [F(3,42)=10.21, p<.0001], reflecting persistence despite practice effects. RT robustness of the other networks varied across sessions. Alerting and EC ERs were similarly robust across all 4 sessions, but ERs were more variable for the orienting network. ERP results: the cue-locked P1-orienting effect (valid vs. invalid) was generally larger to valid than invalid cues, but its robustness varied across sessions (significant only in sessions 1 and 4 [t(14)s>2.13, ps<.04]), as reflected in a significant main effect of session [p=.0042]. Next, target-locked EC P3s were generally smaller to congruent than incongruent targets [F(1,14)=9.40, p=.0084], showing robust effects only in sessions 3 and 4 [ps<.005].
Conclusions:
The EC network RT and ER scores were consistently robust across all sessions, suggesting that this network may be less vulnerable to practice effects across sessions than the other networks and may be the most reliable probe for attention rehabilitation. ERP measures were more variable across attention networks with respect to robustness. Behavioral measures of the EC network may therefore be most reliable for assessing progress in attention-rehabilitation efforts.