People with psychotic disorders often experience neurocognitive impairment (NCI), which can negatively affect their daily activities (e.g., performing independent tasks) and recovery. Because of this, the American Psychological Association advocates integrating neurocognitive testing into routine care for people living with psychotic disorders, especially those in their first episode, to inform treatment and improve clinical outcomes. However, in low- and middle-income countries (LMICs), such as Uganda, where the current study took place, administering neurocognitive tests in healthcare settings presents numerous challenges. In Uganda there are few resources (e.g., trained clinical staff and culturally relevant, normed tests) to routinely offer testing in healthcare settings. NeuroScreen is a brief, highly automated, tablet-based neurocognitive testing tool that can be administered by all levels of healthcare staff and has been translated into indigenous Ugandan languages. To examine the psychometric properties of NeuroScreen, we measured the convergent and criterion validity of the NeuroScreen tests by comparing performance on them to performance on a traditional battery of neurocognitive tests widely used to assess neurocognition in people with psychotic disorders, the MATRICS Consensus Cognitive Battery (MCCB).
Participants and Methods:
Sixty-five patients admitted to Butabika National Referral Mental Hospital in Uganda after experiencing a psychotic episode and forty-seven demographically similar control participants completed two neurocognitive test batteries: the MCCB and NeuroScreen. Both batteries include tests measuring the neurocognitive domains of executive functioning, working memory, verbal learning, and processing speed. Patients were medically stabilized prior to testing and were required to be free of positive symptoms on the day of testing; medication dosages were scheduled so that patients would not experience sedative effects while testing. To examine convergent validity, we examined correlations between overall performance on NeuroScreen and the MCCB, as well as between tests that measured the same neurocognitive domains. To examine criterion validity, an ROC curve was computed to examine the sensitivity and specificity of NeuroScreen in detecting NCI as defined by the MCCB.
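The criterion-validity analysis described above can be sketched in code. This is only an illustration with fabricated scores and hypothetical helper names (roc_points, auc), not the study's data or analysis pipeline; it treats lower screener scores as indicating impairment and selects the cutoff maximizing Youden's J (sensitivity + specificity - 1).

```python
# Illustrative sketch of an ROC analysis: a continuous screener score
# against a binary impairment label. All data below are fabricated.

def roc_points(scores, labels):
    """Return (sensitivity, specificity, threshold) triples.

    Lower scores are treated as indicating impairment (label 1).
    """
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 0)
        points.append((tp / pos, 1 - fp / neg, t))
    return points

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p < n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: impaired test-takers tend to score lower.
scores = [40, 42, 45, 50, 55, 60, 65, 70, 75, 80]
labels = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
best = max(roc_points(scores, labels), key=lambda p: p[0] + p[1] - 1)
print(auc(scores, labels))  # area under the curve for the fabricated data
print(best)                 # (sensitivity, specificity, optimal cutoff)
```

In practice the threshold trade-off (e.g., the reported 83% sensitivity vs. 60% specificity) is chosen to suit the screening context, where missing impairment is usually costlier than a false positive.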
Results:
There was a large correlation between overall performance on NeuroScreen and the MCCB battery of tests, r(110) = .65, p < .001. Correlations of various strengths were found among tests measuring the same neurocognitive domains in each battery: executive functioning [r(110) = .56, p < .001], processing speed [r(110) = .44, p < .001], working memory [r(110) = .29, p < .01], and verbal learning [r(110) = .22, p < .01]. ROC analysis of the ability of NeuroScreen to detect MCCB-defined NCI showed an area under the curve of .798, with optimal sensitivity and specificity of 83% and 60%, respectively.
Conclusions:
Overall test performance between the NeuroScreen and MCCB test batteries was similar in this sample of Ugandans with and without a psychotic disorder, with the strongest correlations in tests of executive functioning and processing speed. ROC analysis provided criterion validity evidence for NeuroScreen's ability to detect MCCB-defined NCI. These results support the use of NeuroScreen to assess neurocognitive functioning among patients with psychotic disorders in Uganda; however, more work is needed to determine how well it can be implemented in this setting. Future directions include assessing the cultural acceptability of NeuroScreen and generating normative data from a larger population of Ugandan test-takers.
Functional near-infrared spectroscopy (fNIRS) is a non-invasive functional neuroimaging method that takes advantage of the optical properties of hemoglobin to provide an indirect measure of brain activation via task-related relative changes in oxygenated hemoglobin (HbO). Its advantage over fMRI is that fNIRS is portable and can be used while walking and talking. In this study, we used fNIRS to measure brain activity in prefrontal and motor regions of interest (ROIs) during single- and dual-task walking, with the goal of identifying the neural correlates of dual-task performance.
Participants and Methods:
Nineteen healthy young adults [mean age = 25.4 (SD = 4.6) years; 14 female] engaged in five tasks: standing single-task cognition (serial-3 subtraction); single-task walking at a self-selected comfortable speed on a 24.5 m oval-shaped course (overground walking) and on a treadmill; and dual-task cognition+walking on the same overground course and treadmill (8 trials/condition: 20 seconds standing rest, 30 seconds task). Performance on the cognitive task was quantified as the number of correct subtractions, number of incorrect subtractions, number of self-corrected errors, and percent accuracy over the 8 trials. Walking speed (m/sec) was recorded for all walking conditions. fNIRS data were collected on a system consisting of 16 sources, 15 detectors, and 8 short-separation detectors in the following ROIs: right and left lateral frontal (RLF, LLF), right and left medial frontal (RMF, LMF), right and left medial superior frontal (RMSF, LMSF), and right and left motor (RM, LM). Lateral and medial refer to the ROIs’ relative positions on the lateral prefrontal cortex. fNIRS data were analyzed in Homer3 using spline motion correction and the iterative weighted least squares method in the general linear model. Correlations between the cognitive/speed variables and ROI HbO data were computed using a Bonferroni adjustment for multiple comparisons.
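The Bonferroni-adjusted correlation analysis above can be sketched as follows. The data and the number of tests are made up for illustration, and pearson_r is a hypothetical helper, not part of the Homer3 pipeline.

```python
# Sketch of correlating a behavioral variable with ROI HbO change,
# evaluated at a Bonferroni-adjusted alpha. All values are fabricated.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

walking_speed = [1.0, 1.1, 1.2, 1.3, 1.4]      # m/sec (made up)
hbo_rmsf = [0.2, 0.25, 0.24, 0.31, 0.35]       # relative HbO change (made up)

m_tests = 16                  # e.g., 2 behavioral variables x 8 ROIs (assumed)
alpha_adj = 0.05 / m_tests    # each test evaluated at alpha / m
r = pearson_r(walking_speed, hbo_rmsf)
print(round(r, 2), alpha_adj)
```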
Results:
Subjects with missing cognitive data were excluded from analyses, resulting in sample sizes of 18 for the single-task cognition, dual-task overground walking, and dual-task treadmill walking conditions. During dual-task overground walking, there was a significant positive correlation between walking speed and relative change in HbO in RMSF [r(18) = .51, p < .05] and RM [r(18) = .53, p < .05]. There was a significant negative correlation between the total number of correct subtractions and relative change in HbO in LMSF [r(18) = -.75, p < .05] and LM [r(18) = -.52, p < .05] during dual-task overground walking. No other significant correlations were identified.
Conclusions:
These results indicate that there is lateralization of the cognitive and motor components of overground dual-task walking. The right hemisphere appears to be more active the faster people walk during the dual-task. By contrast, the left hemisphere appears to be less active when people are working faster on the cognitive task (i.e., serial-3 subtraction). The latter results suggest that automaticity of the cognitive task (i.e., more total correct subtractions) is related to decreased brain activity in the left hemisphere. Future research will investigate whether there is a change in cognitive automaticity over trials and if there are changes in lateralization patterns in neurodegenerative disorders that are known to differentially affect the hemispheres (e.g., Parkinson’s disease).
Older adults often spontaneously use compensatory strategies (CS) to support everyday memory and daily task completion. Recent work suggests that evaluating the quality of CS provides utility in predicting real-world prospective memory (PM) task completion. However, there has been little exploration of how CS quality may vary based on PM demands. This study examined differences in CS use and task completion accuracy across time-based (TB) and event-based (EB) PM tasks. Based on differences in self-monitoring demands and ability to engage in cognitive offloading, it was hypothesized that participants would utilize better quality strategies for TB tasks than EB tasks, which would lead to superior accuracy in completing TB tasks.
Participants and Methods:
Seventy community-dwelling older adults (Mage = 70.80, SD = 7.87) completed two testing sessions remotely from home via Zoom. Participants were presented two TB PM tasks (paying bill by due date, calling lab at specified time) and two EB PM tasks (presenting a packed bag to examiner upon a cue, initiating discussion about physical activity log upon cue). Participants were encouraged to use their typical CS to support task completion. Quality of CS (0-3 points per task step) and accuracy of task completion (0-4 points per task) were evaluated through lab-developed coding schemas. For each task, CS Quality scores were assigned based on how well strategies supported retrospective memory (RM) and PM task elements, and RM and PM Quality scores were summed to yield a Total Quality score. Because each task consisted of a different number of steps, CS Quality scores for each task were divided by their respective number of steps to yield measures of average quality. Paired-samples t-tests examined differences in average CS quality (Total, RM, and PM) and PM accuracy across TB and EB tasks.
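The scoring and comparison described above can be illustrated with a small sketch. The per-participant quality scores below are fabricated, and avg_quality/paired_t are hypothetical helpers rather than the lab's coding schema.

```python
# Sketch: per-task CS Quality averaged over task steps, then a
# paired-samples t statistic comparing time-based (TB) and
# event-based (EB) tasks. All data are made up.
from math import sqrt

def avg_quality(step_scores):
    """Average CS Quality (each step scored 0-3) across a task's steps."""
    return sum(step_scores) / len(step_scores)

def paired_t(x, y):
    """Paired-samples t statistic (df = n - 1)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    md = sum(d) / n
    sd = sqrt(sum((v - md) ** 2 for v in d) / (n - 1))
    return md / (sd / sqrt(n))

# Made-up per-participant average quality scores for TB vs. EB tasks.
tb = [2.2, 1.9, 2.5, 2.0, 2.3, 1.8]
eb = [2.0, 1.8, 2.1, 1.9, 2.2, 1.7]
print(round(paired_t(tb, eb), 2))
```

Dividing each task's quality score by its number of steps, as in avg_quality, is what puts tasks with different step counts on a common scale before the paired comparison.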
Results:
Participants’ Total CS Quality was equivalent for TB tasks (M = 1.92, SD = 0.64) and EB tasks (M = 1.87, SD = 0.68), t(69) = 0.60, p = .55. Comparisons of subscores revealed that while participants used similar quality RM supports for TB tasks (M = 1.67, SD = 0.66) and EB tasks (M = 1.78, SD = 0.68), t(69) = 1.39, p = .17, participants utilized superior quality PM supports for TB tasks (M = 2.16, SD = 0.70) compared to EB tasks (M = 1.97, SD = 0.73), t(69) = 2.46, p = .02. Additionally, participants completed TB tasks with greater accuracy (M = 3.21, SD = 0.74) than EB tasks (M = 2.84, SD = 0.89), t(69) = 3.62, p < .001.
Conclusions:
While participants exhibited similar quality CS for RM components across TB and EB tasks, they displayed superior quality CS for PM components of TB tasks. This difference in quality may have contributed to participants completing real-world TB PM tasks with greater accuracy than EB tasks. Results contrast with trends in lab-based PM tasks, in which participants usually complete EB tasks more accurately. Findings may have implications for interventions, such as an enhanced focus on teaching high-quality CS to support real-world EB tasks.
Cognitive impairment is observed in up to two-thirds of persons with Multiple Sclerosis (MS). Impairment in cognitive processing speed (PS) is the most prevalent cognitive disturbance; it occurs early in the course of the disease and is strongly associated with disease progression, various brain parameters, and everyday functional activities. As such, cognitive rehabilitation for PS impairments should be an integral part of MS treatment and management. The current study examines the efficacy of Speed of Processing Training (SOPT) to improve PS in individuals with MS. SOPT was chosen because of its significant positive results in aging populations.
Participants and Methods:
This double-blind, placebo-controlled randomized clinical trial included 84 participants with clinically definite MS and impaired PS: 43 in the treatment group and 41 in the placebo control group. Outcomes included changes in the Useful Field of View (UFOV) and a neuropsychological evaluation (NPE) including measures of PS (e.g., Pattern Comparison and Letter Comparison). Participants completed a baseline NPE and a repeat NPE post-treatment. Treatment consisted of 10 sessions delivered twice per week for 5 weeks. After the 5 weeks, the treatment group was randomized to booster sessions or no contact. Long-term follow-up assessments were completed 6 months after completion of treatment. The primary outcomes were tests of PS, including the UFOV and neuropsychological testing.
Results:
A significant effect of SOPT was observed on both the UFOV (large effect) and Pattern Comparison with a similar pattern of results noted on Letter Comparison, albeit at a trend level. The treatment effect was maintained 6-months later. The impact of booster sessions was not significant. Correlations between degree of improvement on the UFOV and the number of levels completed within each training task were significant for both Speed and Divided Attention indicating that completion of more levels of training correlated with greater benefit.
Conclusions:
SOPT is effective for treating PS deficits in MS, with benefit documented on both the UFOV and a neuropsychological measure of PS. Less benefit was observed as the outcome measures became more distinct in cognitive demands from the treatment. Long-term maintenance was observed. The number of training levels completed within the 10 sessions exerted a significant impact on treatment benefit, with more levels completed resulting in greater benefit.
Neurodegeneration in Alzheimer’s disease (AD) is typically assessed through brain MRI, and proprietary software can provide normative quantification of regional atrophy. However, proprietary software can be cost-prohibitive for research settings. Thus, we used the freely available software NOrmative Morphometry Image Statistics (NOMIS), which generates normative z-scores from T1-weighted images segmented with FreeSurfer, to determine whether these scores replicate established patterns of neurodegeneration in amnestic mild cognitive impairment (aMCI) and whether these measures correlate with episodic memory test performance.
Participants and Methods:
Patients with aMCI (n = 25) and cognitively normal controls (CN; n = 74) completed brain MRI and two neuropsychological tests of episodic memory (the Rey Auditory Verbal Learning Test and the Wechsler Logical Memory Tests I & II), from which a single composite of normed scores was computed. A subset returned for follow-up (aMCI n = 11, CN n = 52) after ∼15 months and completed the same procedures. T1-weighted images were segmented using FreeSurfer v6.0 and the outputs were submitted to NOMIS to generate normative morphometric estimates for AD-relevant regions (i.e., hippocampus, parahippocampus, entorhinal cortex, amygdala) and control regions (i.e., cuneus, lingual gyrus, pericalcarine gyrus), controlling for age, sex, head size, scanner manufacturer, and field strength. Baseline data were used to test for differences in ROI volumes and memory between groups and to assess the within-group associations between ROI volumes and memory performance. We also evaluated changes in ROI volumes and memory over the follow-up interval by testing the main effects of time and group and the Group × Time interaction. Lastly, we tested whether change in volume was associated with decline in memory.
Results:
At baseline, the aMCI group performed 2 SD below the CN group on episodic memory and exhibited smaller volumes in all AD-relevant regions (volumes 0.4-1.2 SD below the CN group, ps < .041). There were no group differences in control region volumes. Memory performance was associated with volumes of the AD-relevant regions in the aMCI group (average rho = .51) but not with control regions. ROI volumes were not associated with memory in the CN group. At follow-up, the aMCI group continued to perform 2 SD below the CN group on episodic memory tests; however, change in performance over time did not differ between groups. The aMCI group continued to exhibit smaller volumes in all AD-relevant regions than the CN group, with greater declines in hippocampal volume (17% vs. 8% annual decline) and entorhinal volume (54% vs. 5% annual decline). There was a trend-level Group × Time interaction such that decrease in hippocampal volume was marginally associated with decline in memory for the aMCI group but not the CN group.
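As one illustration of the "annual decline" figures reported above, volume change over a follow-up interval can be annualized as follows. The volumes and interval here are fabricated, and the simple linear scaling is an assumption for illustration, not necessarily the authors' exact computation.

```python
# Sketch: annualized percent volume change over a follow-up interval.
# All values below are fabricated.

def annualized_pct_change(v_baseline, v_followup, months):
    """Percent change per year, linearly scaled by the follow-up interval."""
    return (v_followup - v_baseline) / v_baseline * (12 / months) * 100

# Hypothetical hippocampal volumes (mm^3) over a ~15-month interval.
print(round(annualized_pct_change(4000.0, 3600.0, months=15), 1))
```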
Conclusions:
Normative morphometric values generated from freely available software demonstrated expected patterns of group differences in AD-related volumes and associations with memory. Significant effects were localized to AD-relevant brain regions and only occurred in the aMCI group. These findings support the validity of these free tools as reliable and cost-effective alternatives to proprietary software.
Physical inactivity is associated with a greater risk of frailty, neuropsychiatric symptoms, worse quality of life, and increased risk for Alzheimer’s disease. Little is known about how the physical activity engagement of older adults during the COVID-19 pandemic relates to subjective cognitive concerns and management of emotional distress. This study aimed to examine whether older adults' physical activity changed during the pandemic (at baseline and at a 3-month follow-up) relative to pre-pandemic levels, and whether these changes varied based on age, sex, income level, and employment status. Further, we examined whether individuals who reported engaging in less physical activity experienced greater subjective cognitive difficulties and symptoms of depression and anxiety than those who maintained or increased their physical activity levels.
Participants and Methods:
301 participants (73% non-Hispanic White) completed an online survey in either English or Spanish between May and October 2020 and again 3 months later. The Everyday Cognition Scale was used to measure subjective cognitive decline, the CES-D-R-10 scale to measure depressive symptoms, and the GAD-7 scale to measure anxiety symptoms. Changes in physical activity were measured with the question “Since the coronavirus disease pandemic began, what has changed for you or your family in regard to physical activity or exercise levels?” with options “less physical activity,” “increase in physical activity,” or “same activity level.” Income was self-reported as high, middle, or low. Chi-squared tests were used to examine differences in physical activity maintenance by age, income level, sex, and employment status.
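The chi-squared comparisons described above can be sketched as follows, using a fabricated 2x2 table of income level by physical activity change; chi_squared is a hypothetical helper, not the study's analysis code.

```python
# Sketch: Pearson chi-squared test of independence for a contingency
# table. The counts below are fabricated for illustration.

def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

#        maintained/increased  decreased
table = [[45, 55],   # high income (made up)
         [25, 75]]   # low income  (made up)
print(round(chi_squared(table), 2))
```

The statistic is then compared against a chi-squared distribution with (rows - 1) x (columns - 1) degrees of freedom, here 1.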
Results:
Most individuals (60%) reported having decreased their physical activity levels during the pandemic at both baseline and the 3-month follow-up. There were differences in physical activity levels based on income and age: participants with a high income reported engaging in more physical activity than those with a low income (χ² = 4.78, p = .029). At the 3-month follow-up, middle-income participants reported being less active than high-income earners (χ² = 8.92, p = .003), and younger participants (approximately 55-65 years) reported being less active than older participants (χ² = 5.28, p = .022). Those who reported an increase in their physical activity levels had fewer cognitive concerns compared to those who were less active at baseline, but this difference was not seen at the 3-month follow-up. Participants of all ages who reported having maintained or increased their physical activity levels had fewer depressive symptoms than those who were less active (p < .0001). Those who reported maintaining their physical activity levels exhibited fewer anxiety symptoms than those who were less active (p < .01).
Conclusions:
Older adults reported changes in physical activity levels during the pandemic and some of these changes varied by sociodemographic factors. Further, maintaining physical activity levels was associated with lower symptoms of depression, anxiety, and cognitive concerns. Encouraging individuals and providing resources for increasing physical activity may be an effective way to mitigate some of the pandemic’s adverse effects on psychological wellbeing and may potentially help reduce the risk for cognitive decline. Alternately, it is possible that improving emotional distress could lead to an increase in physical activity levels and cognitive health.
Nonpathological aging has been linked to decline in both verbal and visuospatial memory abilities in older adults. Disruptions in resting-state functional connectivity within well-characterized, higher-order cognitive brain networks have also been coupled with poorer memory functioning in healthy older adults and in older adults with dementia. However, there is a paucity of research on the association between higher-order functional connectivity and verbal and visuospatial memory performance in the older adult population. The current study examines the association between resting-state functional connectivity within the cingulo-opercular network (CON), frontoparietal control network (FPCN), and default mode network (DMN) and verbal and visuospatial learning and memory in a large sample of healthy older adults. We hypothesized that greater within-network CON and FPCN functional connectivity would be associated with better immediate verbal and visuospatial memory recall. Additionally, we predicted that greater within-network DMN functional connectivity would be associated with better delayed verbal and visuospatial memory recall. This study helps to glean insight into whether within-network CON, FPCN, or DMN functional connectivity is associated with verbal and visuospatial memory abilities in later life.
Participants and Methods:
330 healthy older adults between 65 and 89 years old (mean age = 71.6 ± 5.2) were recruited at the University of Florida (n = 222) and the University of Arizona (n = 108). Participants underwent resting-state fMRI and completed verbal memory (Hopkins Verbal Learning Test - Revised [HVLT-R]) and visuospatial memory (Brief Visuospatial Memory Test - Revised [BVMT-R]) measures. Immediate (total) and delayed recall scores on the HVLT-R and BVMT-R were calculated using each test manual’s scoring criteria. Learning ratios on the HVLT-R and BVMT-R were quantified by dividing the number of stimuli (verbal or visuospatial) learned between the first and third trials by the number of stimuli not recalled after the first learning trial. CONN Toolbox was used to extract average within-network connectivity values for CON, FPCN, and DMN. Hierarchical regressions were conducted, controlling for sex, race, ethnicity, years of education, number of invalid scans, and scanner site.
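The learning-ratio computation described above reduces to a one-line formula; the example values below are hypothetical, not drawn from the study.

```python
# Sketch of the HVLT-R/BVMT-R learning ratio: stimuli gained from
# trial 1 to trial 3, divided by the stimuli still available to
# learn after trial 1. Example values are made up.

def learning_ratio(trial1_recall, trial3_recall, total_items):
    """(items gained between trials 1 and 3) / (items not recalled at trial 1)."""
    gained = trial3_recall - trial1_recall
    available = total_items - trial1_recall
    return gained / available

# Hypothetical HVLT-R example: 12-item list, 5 recalled on trial 1,
# 10 on trial 3, i.e., 5 of the 7 remaining items were learned.
print(learning_ratio(5, 10, 12))
```

Normalizing by the items left to learn, rather than using the raw gain, keeps the ratio comparable across participants with different trial-1 performance.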
Results:
Greater CON connectivity was significantly associated with better HVLT-R immediate (total) recall (β = 0.16, p = 0.01), HVLT-R learning ratio (β = 0.16, p = 0.01), BVMT-R immediate (total) recall (β = 0.14, p = 0.02), and BVMT-R delayed recall performance (β = 0.15, p = 0.01). Greater FPCN connectivity was associated with better BVMT-R learning ratio (β = 0.13, p = 0.04). HVLT-R delayed recall performance was not associated with connectivity in any network, and DMN connectivity was not significantly related to any measure.
Conclusions:
Connectivity within the CON demonstrated a robust relationship with multiple components of memory function across both verbal and visuospatial domains. In contrast, the FPCN only evidenced a relationship with visuospatial learning, and the DMN was not significantly associated with any memory measure. These data suggest that the CON may be a valuable target in longitudinal studies of age-related memory changes, as well as a possible target in future non-invasive interventions to attenuate memory decline in older adults.
Gulf War (GW) veterans were exposed to many neurotoxicants during the 1990-1991 Gulf War. These included chemical warfare agents such as sarin nerve gas, combustion byproducts from oil well fires and diesel fuel from tent heaters, pesticides, and prophylactic anti-nerve-agent pyridostigmine bromide (PB) pills, all of which have been associated with both cognitive and mood concerns. Few longitudinal studies have examined cognitive functioning in relation to these toxicant exposures. In our longitudinal Fort Devens Cohort (FDC), we found decrements over time in verbal learning and memory but no differences in measures of nonverbal memory and executive function. To describe changes over time in this GW veteran cohort more accurately, we examined cognitive functioning in those with probable Post-Traumatic Stress Disorder (PTSD) versus those without.
Participants and Methods:
The FDC is the longest-running cohort of GW veterans, with initial baseline cognitive, mood, exposure, and trauma assessments in 1997-1998 and follow-up evaluations in 2019-2022. FDC veterans (N = 48) who completed both time points were the participants for this study. Veterans were categorized into dichotomous (yes/no) groups by PTSD classification. The PTSD Checklist (PCL) was used to determine PTSD case status: symptom ratings on the PCL were summed (range: 17-85), and a cutoff score of 36 or higher was used to indicate probable PTSD. Neuropsychological measures of mood (POMS), memory (Visual Reproductions from the Wechsler Memory Scale-R; California Verbal Learning Test, Second Edition [CVLT-II]), and executive function and language (Delis-Kaplan Executive Function System Color-Word and verbal fluency [Animals]) were compared over time using paired t-tests.
Results:
The study sample (N = 48) was 92% male, and 96% reported active-duty status at the time of the GW. Mean current age was 58 years. All veterans reported exposure to at least one war-related toxicant. 48% met criteria for probable PTSD (n = 23) while 52% did not (n = 25). No differences between groups were found on any of the POMS subscales, nor were differences seen in verbal memory, executive function, or language tasks. There were, however, significant differences in nonverbal memory: those with probable PTSD recalled fewer details at delay on the WMS-R Visual Reproductions (p < .05).
Conclusions:
In this longitudinal analysis, GW veterans with probable PTSD showed declines in nonverbal memory and consistent levels of function on all other tasks. Basic mood scales did not show decline; therefore, these results are not attributable to generalized changes in mood. All participants reported at least one neurotoxicant exposure, and we did not have the power to examine the impact of individual exposures; thus, we cannot rule out contributing factors other than PTSD. This study highlights the importance of longitudinal follow-up and continual documentation of GW veterans’ memory performance and their endorsement of mood symptoms over time. Specifically, these findings suggest that future studies should examine the prolonged course of memory and mood symptomatology in GW veterans who have endorsed a traumatic experience.
Trait mindfulness is associated with reduced stress and psychological well-being. However, evidence regarding its effects on cognitive function is mixed and certain facets of trait mindfulness are associated with higher negative affect (NA). This study investigated whether specific mindfulness skills were associated with cognitive performance and affective traits.
Participants and Methods:
165 older adults from the Maine Aging Behavior Learning Enrichment (M-ABLE) Study completed the National Alzheimer’s Coordinating Center T-Cog battery, the Five Facet Mindfulness Questionnaire, and the Positive and Negative Affect Schedule-SF.
Results:
All five facets of trait mindfulness were associated with higher Positive Affect and lower NA, with the exception that Observation was not associated with trait NA. Partial correlations adjusting for age indicated that better episodic memory was associated with Observation, Describing, and Nonreactivity. Verbal fluency performance was associated with Observation, while Working Memory was associated with Nonjudgment. Executive Attention/Processing speed was associated with total mindfulness scores and showed a trend relationship with Nonreactivity.
Conclusions:
Mindfulness skills showed specific patterns with affective traits and cognitive function. These findings suggest that the ability to maintain awareness, describe, and experience internal and external states without reacting to them may partly rely on episodic memory. Mindful awareness skills also may depend on frontal and language functions, while the ability to experience emotional states without reacting may require Executive Attention. Global mindfulness and a non-judgmental stance may require auditory attention. Alternatively, mindfulness skills may serve to enhance these functions. Hence, longitudinal research is needed to determine the directionality of these findings.
Certain contextual factors, including non-restorative sleep (Niermeyer & Suchy, 2020), sleep deprivation (Lim & Dinges, 2010), burdensome emotion regulation (Franchow & Suchy, 2017), and pain interference (Boselie, Vancleef, & Peters, 2016) have been shown to contribute to temporary declines in executive functioning (EF). Contextually-induced decrements in EF have in turn been associated with temporary decrements in performance of instrumental activities of daily living (IADLs) among healthy older adults (Brothers & Suchy, 2021; Suchy et al., 2020; Niermeyer & Suchy, 2020). Furthermore, some evidence suggests that higher variability in levels of contextual factors across days (i.e., deviations from routine) may contribute to IADL lapses above and beyond average, albeit high, levels of these contextual burdens (Bielak, Mogle, & Sliwinski, 2019; Brothers & Suchy, 2021). Taken together, these findings highlight the importance of accounting for transient contextual burdens when assessing EF and IADL abilities in older adults.
Poor sleep quality has been associated with poor IADL performance (Fung et al., 2012; Holfeld & Ruthing, 2012) when assessed in a single visit. However, the potential contributions of variable sleep quantity and quality on IADL performance have not been assessed in healthy older adults using longitudinal methods. Accordingly, the aim of this study was to examine the impact of fluctuations in sleep quantity and quality, assessed daily, above and beyond average levels, on at-home IADL performance across 18 days in a group of community-dwelling older adults.
Participants and Methods:
Fifty-two non-demented community-dwelling older adults (M age = 69 years, 65% female) completed 18 days of at-home IADL tasks, as well as daily ecological momentary assessment (EMA) measures of EF, sleep hours, and restfulness questions. An 18-day mean EMA EF score was computed controlling for practice effects. Mean levels of and variability in EMA sleep hours and EMA restfulness ratings were computed. IADL scores were computed for timeliness and accuracy across the 18 days.
Results:
A series of hierarchical linear regressions were run with IADL timeliness and accuracy as separate dependent variables. In the first step, demographics (age, sex, education) were entered. Then, EMA EF was entered, followed by mean EMA sleep hours and mean EMA restfulness, and lastly, variability in EMA sleep hours and EMA restfulness. EMA EF significantly predicted both IADL accuracy (B = .46, p = .001) and timeliness (B = .45, p = .005). Variability in EMA sleep hours (B = .40, p = .008) and restfulness (B = -.29, p = .043) both predicted IADL accuracy beyond the other variables, while mean levels did not. Additionally, variability in sleep hours and restfulness substantially improved the prediction of IADL accuracy above and beyond the other variables in the model, accounting for an additional 16% of variance (ΔR² = .16, F(2) = 3.80, p = .006). Neither mean levels of nor variability in sleep hours or restfulness predicted IADL timeliness.
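The R-squared-change test implied by the final step above can be sketched as follows. The numbers are illustrative only (not the study's model R-squared values), and f_change is a hypothetical helper.

```python
# Sketch: F statistic for the change in R-squared when a block of
# predictors is added in a hierarchical regression. Values fabricated.

def f_change(r2_full, r2_reduced, n, k_full, k_added):
    """F for the R-squared increment of a block of k_added predictors.

    df1 = k_added, df2 = n - k_full - 1.
    """
    df2 = n - k_full - 1
    delta = r2_full - r2_reduced
    return (delta / k_added) / ((1 - r2_full) / df2)

# Hypothetical: adding 2 variability predictors (step 4) to a model
# already holding 6 predictors, with n = 52.
print(round(f_change(0.40, 0.24, n=52, k_full=8, k_added=2), 2))
```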
Conclusions:
Results suggest that greater fluctuations in the amount and quality of sleep across days may render healthy older adults more susceptible to lapses in daily functioning abilities, particularly the accuracy with which IADL tasks are completed.
Lower levels of social support in persons with Multiple Sclerosis (PwMS) are associated with myriad poor outcomes including worse mental health, lower quality of life, and reduced motor function (Kever et al., 2021). Social support has also been associated with physical pain (Alphonsus et al., 2021) and sleep disturbance (Harris et al., 2020) in PwMS. Pain is one of the most common symptoms of MS (Valentine et al., 2022) and is also known to be related to sleep disturbance (Neau et al., 2012). With these considerations in mind, the goal of the current study was to examine social support as a possible moderator in the relationship between pain and sleep quality in PwMS.
Participants and Methods:
This cross-sectional study included 91 PwMS (females = 76). A neuropsychological battery and psychosocial questionnaires were administered. For sleep quality, a composite was created from the sleep and rest scale of the Sickness Impact Profile (SIP), sleep-related items on the Multiple Sclerosis-Symptom Severity Scale (MS-SSS) (i.e., sleeping too much or sleep disturbance, fatigue or tiredness, and not sleeping enough), and an item from the Sleep Habits Questionnaire (SHQ) ("How many nights on average are you troubled by disturbed sleep?"). This composite (α = .76) has been used in prior research. Lower scores were indicative of worse sleep quality. Pain intensity and pain interference were measured using the Brief Pain Inventory (BPI). Pain intensity was calculated from four pain indices (i.e., pain at its worst in the last 24 hours, at its least in the last 24 hours, on average, and current pain at the time of the assessment) and pain interference was calculated from seven indices (i.e., general activity, mood, walking ability, normal work, relationships with others, sleep, and enjoyment of life). The Social Support Questionnaire (SSQ) measured average satisfaction with supports. A series of hierarchical linear regressions were conducted with the sleep quality index as the outcome variable and satisfaction with social supports, both indices of pain (intensity and interference), and their interactions as predictors. Then, simple effects tests were used to clarify the pattern of any significant interactions.
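A multi-scale composite of this kind, with internal consistency summarized by Cronbach's α, can be sketched as follows. The z-score-and-average construction and the simulated item scores below are illustrative assumptions, not the study's actual scoring procedure or data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated stand-ins for the three sleep indicators (SIP, MS-SSS, SHQ)
rng = np.random.default_rng(1)
latent = rng.normal(size=(91, 1))                # shared "sleep quality" factor
items = latent + 0.8 * rng.normal(size=(91, 3))  # three correlated item scores

# z-score each item, then average into a single composite per participant
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
composite = z.mean(axis=1)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

With three positively correlated indicators, α falls between 0 and 1 and rises as the shared factor dominates item-specific noise.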
Results:
Regression analysis revealed that the interaction between pain interference and satisfaction with social support was significant (p = .034). Simple effects tests revealed that when satisfaction with social support was high, pain interference was associated with better sleep quality (p < .001). The interaction between pain intensity and satisfaction with social supports was also significant (p = .014). Simple effects tests revealed that at high levels of satisfaction with social supports, pain intensity was associated with better sleep quality (p < .001).
Conclusions:
Satisfaction with social support moderated the relationships of both pain interference and pain intensity with sleep quality in PwMS. Specifically, high satisfaction with social support buffered against the negative effects of pain interference and pain intensity on sleep quality. This provides evidence that interventions aimed at increasing social supports in PwMS may lead to improvements in sleep quality and reduce the impact of pain on sleep quality.
Attention is the backbone of cognitive systems and is requisite for many cognitive processes vital to everyday functioning, including memory, problem solving, and the cognitive control of behavior. Attention is commonly impaired following traumatic brain injury and is a critical focus of rehabilitation efforts. The development of reliable methods to assess rehabilitation-related changes is therefore paramount. The Attention Network Test (ANT) has been used previously to identify 3 independent, yet interactive, attention networks—alerting, orienting, and executive control (EC). We examined the behavioral and neurophysiological robustness and temporal stability of these networks across multiple sessions to assess the ANT’s potential utility as an effective measure of change during attention rehabilitative interventions.
Participants and Methods:
15 healthy young adults completed 4 sessions of the ANT (1 session/7-day period). ANT networks were assessed within the task by contrasting opposing stimulus conditions: cued vs. non-cued trials probed alerting, valid vs. invalid spatial cues probed orienting, and congruent vs. incongruent targets probed EC. Differences in median correct-trial reaction times (RTs) and error rates (ERs) between the condition pairs were used to determine attention network scores; the robustness of network effects was determined by one-sample t-tests against a mean of 0, testing for significant network effects at each session. Sixty-four-channel electroencephalography (EEG) data were acquired concurrently and processed using MATLAB to create condition-related event-related potentials (ERPs), particularly the cue- and probe-related P1, N1, and P3 deflection amplitudes, measured using signed-area calculations in regions of interest (ROIs) determined by observation of spherical-spline voltages. This enabled us to examine the robustness of cue- and probe-related attention-network ERPs.
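Network scores of this kind are simple condition-pair differences in median RT. A sketch follows; the specific cue labels (no-cue vs. double-cue for alerting) follow the conventional ANT scoring and are an assumption, as are the example RT values:

```python
def network_scores(rt):
    """Attention network scores from median correct-trial RTs (ms) per condition.

    rt: dict of condition -> median RT. Larger scores = larger network effect.
    Contrasts follow the conventional ANT subtractions (an assumption here).
    """
    return {
        "alerting":  rt["no_cue"] - rt["double_cue"],     # benefit of an alerting cue
        "orienting": rt["invalid"] - rt["valid"],          # benefit of a valid spatial cue
        "executive": rt["incongruent"] - rt["congruent"],  # flanker conflict cost
    }

# Hypothetical median RTs for one participant/session
rt = {"no_cue": 560, "double_cue": 520,
      "invalid": 575, "valid": 530,
      "congruent": 510, "incongruent": 600}
print(network_scores(rt))  # e.g. executive score = 600 - 510 = 90 ms
```

The one-sample t-tests described above then ask, for each session, whether these per-participant difference scores depart reliably from 0.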
Results:
All three attention networks showed robust effects. However, only the EC RT and ER network scores remained significantly robust [t(14)s > 13.9, ps < .001] across all sessions, indicating that EC is robust in the face of repeated exposure. The EC-RT effect was greatest in Session 1 and became smaller over subsequent sessions, per a Session x Congruency ANOVA [F(3,42) = 10.21, p < .0001], reflecting persistence despite practice effects. RT robustness of the other networks varied across sessions. Alerting and EC ERs were similarly robust across all 4 sessions, but orienting ERs were more variable. ERP results: The cue-locked P1-orienting effect (valid vs. invalid) was generally larger to valid than invalid cues, but its robustness varied across sessions (significant in only sessions 1 and 4 [t(14)s > 2.13, ps < .04]), as reflected in a significant main effect of session [p = .0042]. Next, target-locked EC P3s were generally smaller to congruent than incongruent targets [F(1,14) = 9.40, p = .0084], showing robust effects only in sessions 3 and 4 [ps < .005].
Conclusions:
The EC network RT and ER scores were consistently robust across all sessions, suggesting that this network may be less vulnerable to practice effects across sessions than the other networks and may be the most reliable probe of attentional rehabilitation. ERP measures were more variable across attention networks with respect to robustness. Behavioral measures of the EC network may be most reliable for assessing progress related to attentional-rehabilitation efforts.
Primary headache disorder is characterized by recurrent headaches which lack underlying causative pathology or trauma. Primary headache disorder is common and encompasses several subtypes including migraine. Vestibular migraine (VM) is a subtype of migraine that causes vestibular symptoms such as vertigo, difficulties with balance, nausea, and vomiting. Literature indicates subjective and performance-based cognitive problems (executive dysfunction) among migraineurs. This study compared the magnitude of the total effect size across neuropsychological domains to determine if there is a reliable difference in effect sizes between individuals with VM and healthy controls (HC). An additional aim was to meta-analyze neuropsychological outcomes in migraine subtypes (other than VM) in reference to healthy controls.
Participants and Methods:
This study was part of a larger study examining neuropsychological functioning and impairment in individuals with primary headache disorder and HCs. Standardized search terms were applied in OneSearch and PubMed. The search interval covered articles published from 1986 to May 2021. Analyses used random-effects models. Hedges’ g was used as a bias-corrected estimate of effect size. Between-study heterogeneity was assessed using Cochran’s Q and I². Publication bias was assessed with Duval and Tweedie’s trim-and-fill method to identify evidence of missing studies.
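The effect-size and heterogeneity statistics named here (bias-corrected Hedges' g, Cochran's Q, I², random-effects pooling) can be sketched as follows. The DerSimonian-Laird estimator and the three study summary rows are assumptions for illustration, not the paper's actual inputs or software:

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (group 1 minus group 2)."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    J = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # small-sample correction factor
    g = J * d
    var_g = J**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

def random_effects(gs, vs):
    """DerSimonian-Laird pooling; returns pooled g, Cochran's Q, and I2 (%)."""
    gs, vs = np.asarray(gs), np.asarray(vs)
    w = 1.0 / vs
    g_fixed = (w * gs).sum() / w.sum()
    Q = (w * (gs - g_fixed) ** 2).sum()
    df = len(gs) - 1
    tau2 = max(0.0, (Q - df) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1.0 / (vs + tau2)
    g_re = (w_re * gs).sum() / w_re.sum()
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return g_re, Q, I2

# Hypothetical (mean, SD, n) summaries for three VM-vs-HC comparisons
studies = [(-1.2, 0.9, 50, 0.0, 1.0, 50),
           (-0.8, 1.0, 51, 0.0, 1.0, 50),
           (-1.0, 1.1, 50, 0.0, 1.0, 50)]
gs, vs = zip(*(hedges_g(*s) for s in studies))
g, Q, I2 = random_effects(gs, vs)
print(f"pooled g = {g:.2f}, Q = {Q:.2f}, I2 = {I2:.1f}%")
```

I² expresses the share of between-study variability attributable to true heterogeneity rather than sampling error, which is why low I² values support the "generally stable" conclusion drawn below.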
Results:
The initial omnibus literature search yielded 6692 studies. Three studies (n = 151 VM and 150 HC) met our inclusion criteria of having a VM group and reporting neuropsychological performance. The VM group demonstrated significantly worse performance overall compared to HCs (k=3, g=-0.99, p<0.001; Q=4.41, I²=54.66), a large effect size. Within-domain effects of VM were: Executive Functioning = -0.99 (Q=0.62, I²=0.00), Screener = -1.15 (Q=3.29, I²=69.59), and Visuospatial/Construction = -1.47 (Q=0.001, I²=0.00). Compared to chronic migraine (k=3, g=-0.59, p<0.001; Q=0.68, I²=0.00) and migraine without aura (k=23, g=-0.39, p<0.001; Q=109.70, I²=79.95), VM was the only migraine subgroup to display a large effect size. The trim-and-fill procedure estimated zero VM studies to be missing due to publication bias (adjusted g=-0.99, Q=4.41).
Conclusions:
This initial attempt at a meta-analysis of cognitive deficits in VM was hampered by a lack of studies in this area. Based on our initial findings, individuals with VM demonstrated overall worse performances on neuropsychological tests compared to HCs, with the greatest level of impairment seen in visuospatial/construction. Additionally, VM yielded a large effect size while other migraine subtypes yielded small to moderate effect sizes. Despite the small sample of studies, the overall effect across neuropsychological performance was generally stable (i.e., low between-study heterogeneity). Given that VM accounts for 7% of patients seen in vertigo clinics and 9% of all migraine patients, our results suggest that neuropsychological impairment in VM deserves significantly more study.
The serial position effect is the tendency to recall items at the beginning (primacy) and end (recency) of a word list best and middle items the worst, demonstrated by a 'U-shaped' profile. Individuals with memory impairment often demonstrate a 'J-shaped' profile, with a diminished primacy effect. An attenuated primacy effect could be one of the earliest indicators of cognitive decline in older adults. Chronic elevations in cortisol are related to hippocampal atrophy and decreased learning and recall. Given the rehearsal and encoding required to recall words at the beginning of a list, we hypothesized that reduced primacy would be related to higher cortisol levels, measured via hair cortisol concentration, in older adults, particularly caregivers of people with dementia (PWD), who are under increased stress.
Participants and Methods:
Data were taken from a deidentified dataset of 60 community-dwelling older adults (age > 50) with no evidence of dementia who participated in a larger study on memory and caregiving stress; 26 identified themselves as caregivers of PWD. The sample was 83% women and 98% White, with a mean age of 67.58 (SD = 8.85) and 80% holding at least a college degree. Stress was measured with the Perceived Stress Scale. The List Learning and List Recall subtests from the Repeatable Battery for the Assessment of Neuropsychological Status were used to assess the serial position effect. Primacy and recency were determined by the first three and last three words on the list, respectively, and were measured for trials 1-4. Relative strength of primacy versus recency at delayed recall was also calculated such that positive scores indicate better primacy than recency and negative scores indicate worse primacy than recency (J-shaped profile). Hair samples were collected, and the first 1 cm of hair was used to assay hair cortisol concentration, reflecting the past month of cortisol secretion.
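The primacy/recency scoring just described (first three vs. last three list words, plus a primacy-minus-recency score at delayed recall) can be sketched as follows; the word list and recall data are hypothetical stand-ins:

```python
def serial_position_scores(word_list, recalled_trials, delayed_recall):
    """Score primacy (first 3 list words) and recency (last 3) across trials.

    Returns (primacy_total, recency_total, relative_strength), where the
    relative-strength score at delayed recall is positive when primacy
    dominates and negative when recency dominates ('J-shaped' profile).
    """
    primacy_set, recency_set = set(word_list[:3]), set(word_list[-3:])
    primacy = sum(len(primacy_set & set(t)) for t in recalled_trials)
    recency = sum(len(recency_set & set(t)) for t in recalled_trials)
    rel = (len(primacy_set & set(delayed_recall))
           - len(recency_set & set(delayed_recall)))
    return primacy, recency, rel

# Hypothetical 10-item list and four learning trials plus delayed recall
words = [f"w{i}" for i in range(10)]
trials = [["w0", "w1", "w9"], ["w0", "w8", "w9"],
          ["w0", "w1", "w2", "w9"], ["w1", "w5", "w9"]]
delayed = ["w8", "w9", "w4"]
print(serial_position_scores(words, trials, delayed))  # -> (7, 5, -2)
```

In this toy example the negative relative-strength score (-2) marks a recency-dominant, J-shaped delayed-recall profile.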
Results:
Caregivers were younger than non-caregivers (p<.001), but groups did not differ in gender (p=.412). Age was controlled for in all subsequent analyses. Caregivers reported more stress (p<.001), but groups did not differ in hair cortisol (p=.093). On memory tasks, caregivers showed lower list learning raw scores (p=.002) and lower list recall raw scores (p=.046); groups did not differ in primacy learning (p=.114), but caregivers showed worse recency over learning trials (p<.001). Caregivers were not more likely to show the J-shaped serial position profile at recall (p=.285). Collapsed across groups, perceived stress was not related to cortisol (p=.124) but was related to recency (p=.001) and list learning raw scores (p=.004), though not list recall raw scores (p=.485) or primacy (p=.109). Cortisol was not related to primacy (p=.277) or recency (p=.538).
Conclusions:
Contrary to predictions, caregivers were not worse on primacy but were worse on recency. Caregivers also reported more stress; collapsed across groups, stress was associated with recency performance. This may suggest that stress is related more to poor attention and short-term memory (recency) than encoding and recall related memory problems (primacy).
Cognitive disengagement syndrome (CDS; previously known as “sluggish cognitive tempo” or SCT) refers to a set of behavioral symptoms characterized by slowed thinking/behavior, daydreaming, and mental fogginess or confusion. It has been described as related to, yet separate from, the inattention symptoms associated with Attention-Deficit/Hyperactivity Disorder (ADHD). There is a paucity of research on CDS within pediatric epilepsy populations despite substantial risk factors inherent to the disorder and a large proportion of patients with comorbid ADHD. This study therefore describes CDS as reported by parents for a large sample of children with epilepsy. The relationship between epilepsy variables (e.g., number of antiepileptic drugs [AEDs], seizure frequency, seizure type) and CDS symptoms was explored. Additionally, considering the negative association between CDS and academic performance in other populations, the relationship between parent-rated CDS and academic risk factors was examined.
Participants and Methods:
Participants included 151 children with epilepsy (mean age = 11y, range 6-18y; 55% male; IQ > 70) referred for outpatient neuropsychological assessment. As part of routine clinical care, parents completed the Penny Sluggish Cognitive Tempo Scale (SCT) and the Colorado Learning Difficulties Questionnaire (CLDQ). Scores and basic demographic information were extracted from an IRB-approved clinical database; the IRB granted approval for retrospective chart review to extract additional medical variables. Parent report of CDS included a total CDS score and three subdomains: Sleepy/Sluggish, Low Initiation, and Daydreamy. Higher scores represent greater parent-reported difficulties. Independent-samples t-tests compared the participants’ means on total CDS and each subdomain to the normative sample. Analysis of variance was conducted to determine the differential impact of seizure type (Generalized, Focal, or Multifocal) on total CDS and each subdomain. Correlations between other medical variables, scores on the CLDQ, and parent ratings on the SCT were examined.
Results:
Parents of children with epilepsy rated overall CDS total and subdomain scores significantly higher than the normative means, with the highest elevation in symptoms of Low Initiation (p < .001). Total CDS was associated with increased parent-reported academic difficulties; however, of the three subdomains, only Low Initiation was significantly associated with concerns for academic functioning. Number of AEDs was associated with increased symptoms on the Sleepy/Sluggish subdomain only. Seizure frequency was associated with total CDS and Sleepy/Sluggish symptoms, though this finding is likely mediated by the increased number of AEDs for those with more frequent seizures. Seizure type was not associated with significant differences in total CDS or CDS subdomains.
Conclusions:
Children with epilepsy are at increased risk for experiencing slowed thinking and cognitive disengagement. Low initiation is particularly elevated in pediatric epilepsy populations, which may lead to increased academic difficulties. Potential interventions targeting low initiation may therefore have benefit in the academic setting for children with epilepsy, regardless of epilepsy type.
There are many common beliefs within the general public about Chronic Traumatic Encephalopathy (CTE) that contradict research findings and scientific evidence. Therefore, the goal of this study was to examine the accuracy of CTE knowledge across three diverse samples.
Participants and Methods:
The sample comprised 333 college students (54%), 196 individuals from the general public (32%), and 90 psychology trainees/clinicians (15%), for a total of 619 participants. Online surveys were used to assess CTE knowledge accuracy (i.e., the number of correct responses divided by the total number of questions). The questions about CTE were adapted from Merz et al. (2017) and from the Sports Neuropsychology Society’s “CTE: A Q and A Fact Sheet.”
Results:
Overall, CTE knowledge accuracy was 52% (M = 51%, SD = .24). Regarding inaccurate beliefs, two-thirds of the sample believed that CTE was related to sports participation alone even if a head injury did not occur, and most participants believed that CTE could be caused by a single injury. Additionally, confidence in CTE knowledge was positively correlated with willingness to allow their child to play a high contact sport despite overall low CTE knowledge accuracy. Last, many participants reported education (67%) and health care providers (61%) as their main sources of CTE information while only 18% of participants cited television/movies. However, when asked to provide additional details about their CTE information source, many participants cited ESPN specials and the movie “Concussion” as the main reason they learned of the condition and sought out additional information.
Conclusions:
The results of this study are consistent with previous research on CTE knowledge accuracy. This further supports the need for clinicians and researchers to address misconceptions by providing information and scientific facts.
Motor skills have been linked to executive functions (EFs) in typically developing school- and preschool-age children. Yet fine motor skills have been more consistently correlated with EFs than gross motor skills, perhaps because they are more frequently investigated. Preterm-born children are vulnerable to deficits in both gross and fine motor skills, even after exclusion of neurological cases. In addition to motor skills, EFs may also be compromised in preterm-born preschoolers. Because premature birth increases the odds of atypical brain development, and since adverse effects on brain functioning tend to yield increased dispersion of performance scores, we wished to determine whether fine and gross motor skills are differentially linked to performance on tasks measuring EF skills in nonhandicapped preschoolers born preterm.
Participants and Methods:
We studied 99 preterm (born < 34 weeks) singleton preschoolers (3-4 years of age; 50 females), all graduates of the Neonatal Intensive Care Unit at William Beaumont Hospital, Royal Oak, MI. Motor skills were assessed with the Peabody Developmental Motor Scales, Second Edition, which provides Fine and Gross Motor Quotients (FMQ and GMQ, respectively). Three core EFs were measured: working memory, motor inhibition, and verbal fluency. Working memory skills were assessed with two Clinical Evaluation of Language Fundamentals - Preschool, Second Edition subtests: Recalling Sentences (RS) and Concepts and Following Directions (CFD). Motor inhibition and verbal fluency were assessed with the NEPSY-II Statue and Word Generation (WG) subtests, respectively. Children with a history of moderate to severe intracranial pathology or cerebral palsy were excluded.
Results:
We conducted linear regression analyses using scaled scores from the Statue, WG, RS, and CFD subtests as the predicted variables. Predictors of interest were the FMQ and GMQ. We adjusted for sociodemographic factors (SES and sex) and perinatal risk (gestational age, sum of antenatal complications and birth weight SD). The GMQ was significantly associated with all four EF measures (Statue, t(84) = 4.13, p < .001; CFD, t(92) = 3.83, p < .001; WG, t(84) = 3.38, p = .001; RS, t(90) = 3.37, p = .001). The FMQ was significantly associated with three of four EF measures (Statue, t(84) = 3.41, p = .001; CFD, t(92) = 3.97, p < .001; WG, t(84) = 1.96, p = .054; RS, t(90) = 2.91, p = .005).
Conclusions:
Both fine and gross motor skills were associated with EF in nonhandicapped preterm-born singletons. Lower motor functioning in either motor domain was linked to reduction in performance on diverse EF measures. It should be emphasized that motor performance contributed to explaining variance in EFs even after statistical adjustment for early medical risk. In addition to the obvious conclusion that motor skills may underpin EF skills, it is likely that early risk factors not captured by the medical risk variables used in our analyses were nonetheless tapped by variability in motor performance. As preschool EFs are essential for subsequent academic performance, the significance of age-appropriate motor development in the preschool age should not be underestimated in our at-risk population.
There is an ongoing debate among statisticians and discipline scientists about the consequences of our persistent, dogmatic reliance on evaluating all statistical results as meaningful if and only if "p<0.05," regardless of context. This was never the intended goal of Ronald Fisher; nevertheless, scientists have adopted it as a convenience, and the decades-long dependence on "p<0.05" has had important negative consequences. In this presentation, I review common misconceptions about interpreting p-values, why we should consider de-emphasizing p-values, and why scientists should rely more on practical, clinical, or scientifically meaningful differences than on arbitrary cut-offs. I will present several metrics for evaluating and reporting effect magnitude and for assessing whether the data support the null vs. alternative hypothesis under the frequentist paradigm, discuss how Bayesian methods can augment or replace frequentist analyses, and offer a few options that help clarify how important a finding may be. Throughout this talk, I advocate that discipline scientists take charge of sharing scientific results that are not based merely on arbitrary p-value cutoffs and other default logic, but instead on their content expertise, in light of all the specific relevant aspects of experimental design and data, balancing the consequences of Type I vs. Type II errors appropriately, and focusing on characterizing effects rather than dichotomizing research into only two categories of importance (significant vs. not).
Upon conclusion of this course, learners will be able to:
1. Discuss what p-values mean and how they are commonly misinterpreted.
2. Explain the leading arguments promoted by the American Statistical Association with regard to why science should carefully reconsider if and how p-values should continue to dominate our decisions about what research should be published, and how scientists should be evaluating its worth.
3. Apply new practices in how to evaluate and publish their own research, as well as how to evaluate research appearing in peer-reviewed journals, whether as consumers, reviewers, or editors.
Previous research has found that measures of premorbid intellectual functioning may be predictive of performance on memory tasks among older adults (Duff, 2010). Intellectual functioning itself is correlated with education. The purpose of this study was to investigate the incremental validity of a measure of premorbid intellectual functioning over education levels to predict performance on the Virtual Environment Grocery Store (VEGS), which involves a simulated shopping experience assessing learning, memory, and executive functioning.
Participants and Methods:
Older adults (N = 118, 60.2% female, ages 60-90, M = 73.51, SD = 7.46) completed the Wechsler Test of Adult Reading (WTAR) and the VEGS.
Results:
WTAR and education level together explained 9.4% of the variance in VEGS long delay free recall (F = 5.97, p = 0.003). WTAR was a significant predictor (β = 0.25, p = 0.006), while level of education was not.
Conclusions:
These results suggest that crystallized intelligence may benefit recall on a virtual reality shopping task.
Although some animal research suggests possible sex differences in response to THC exposure (e.g., Cooper & Craft, 2018), there are limited human studies. One study found that among individuals rarely using cannabis, when given similar amounts of oral and vaporized THC, females reported greater subjective intoxication than males (Sholler et al., 2020). However, in a study of daily users, females reported levels of intoxication indistinguishable from males after smoking similar amounts (Cooper & Haney, 2014), while males and females using 1–4x/week showed similar levels of intoxication despite females having lower blood THC and metabolite concentrations (Matheson et al., 2020). It is important to elucidate sex differences in biological indicators of cannabis intoxication given potential driving/workplace implications as states increasingly legalize use. The current study examined whether, when males and females are closely matched on cannabis use variables, there are predictable sex differences in residual whole blood THC and metabolite concentrations, as well as in THC/metabolite levels, subjective appraisals of intoxication, and driving performance following acute cannabis consumption.
Participants and Methods:
The current study was part of a randomized clinical trial (Marcotte et al., 2022). Participants smoked ad libitum THC cigarettes and then completed driving simulations, blood draws, and subjective measures of intoxication. The main outcomes were the change in Composite Drive Score (CDS; global measure of driving performance) from baseline, whole blood THC, 11-OH-THC, and THC-COOH levels (ng/mL), and subjective ratings of how “high” participants felt (0 = not at all, 100 = extremely). For this analysis of participants receiving active THC, males were matched to females on 1) estimated THC exposure (g) in the last 6 months (24M, 24F) or 2) whole blood THC concentrations immediately post-smoking (23M, 23F).
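One-to-one matching on a continuous variable, as used to pair males and females here, can be illustrated with a greedy nearest-neighbor sketch. This particular algorithm and the simulated exposure values are assumptions for illustration only, not the study's actual matching procedure:

```python
import numpy as np

def match_pairs(male_vals, female_vals):
    """Greedy 1:1 nearest-neighbor matching on one continuous variable.

    Each female is paired with the unused male whose value is closest;
    returns (male_index, female_index, abs_difference) triples.
    """
    males = list(enumerate(male_vals))
    pairs = []
    for fi, fv in enumerate(female_vals):
        if not males:
            break
        mi, mv = min(males, key=lambda m: abs(m[1] - fv))
        pairs.append((mi, fi, abs(mv - fv)))
        males.remove((mi, mv))  # each male is matched at most once
    return pairs

# Hypothetical 6-month THC exposure estimates (grams)
rng = np.random.default_rng(2)
male_thc = rng.gamma(2.0, 25.0, size=30)
female_thc = rng.gamma(2.0, 23.0, size=24)
pairs = match_pairs(male_thc, female_thc)
print(len(pairs), "matched pairs; mean |diff| =",
      round(float(np.mean([d for *_, d in pairs])), 2))
```

After matching, group means on the matching variable should be nearly identical, which is what makes the subsequent "no sex difference" comparisons interpretable.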
Results:
When matched on THC exposure in the past 6 months (overall mean of 46 grams; p = .99), there were no sex differences in any cannabinoid/metabolite concentrations at baseline (all p > .83) or after cannabis administration (all p > .72). Nor were there differences in the change in CDS from pre-to-post-smoking (p = .26) or subjective “highness” ratings (p = .53). When matched on whole blood THC concentrations immediately after smoking (mean of 34 ng/mL for both sexes, p = .99), no differences were found in CDS change from pre-to-post smoking (p = .81), THC metabolite concentrations (all p > .25), or subjective “highness” ratings (p = .56). For both analyses, males and females did not differ in BMI (both p > .7).
Conclusions:
When male and female cannabis users are well-matched on use history, we find no significant differences in cannabinoid concentrations following a mean of 5 days of abstinence, suggesting that there are no clear biological sex differences in carryover residual effects. We also find no significant sex differences following ad libitum smoking in driving performance, subjective ratings of “highness,” or whole blood THC and metabolite concentrations, indicating that there are no biological sex differences in acute response to THC. This improves upon previous research by closely matching participants over a wider range of use-intensity variables, although the small sample size precludes definitive conclusions.