Research attention to childhood cancer has grown markedly over the last five decades (Robinson & Hudson, 2014). Cure rates have risen just as dramatically, with survival now well over 80% (Ward et al., 2014). As survivorship has increased, research has turned to the examination of late effects in survivors of childhood cancer, especially neuropsychological late effects (Krull et al., 2018). Late effects, functional impairment, and awareness of one’s own impairment can create lasting problems in a survivor’s life (Oeffinger et al., 2010). The objective of this study was to explore the feasibility and functionality of a group intervention for this population.
Participants and Methods:
Participants were recruited from a pediatric cancer institute in southern California. To be considered for inclusion, participants must have completed curative treatment for childhood cancer, not be currently undergoing cancer treatment, be free of severe and persistent mental illness, and have access to a stable internet connection (for Zoom sessions). This study examined the impact of an Acceptance and Commitment Therapy (ACT)-based group intervention protocol on survivors of childhood cancer. Specifically, it explored a strategy both to identify early neuropsychological late effects and to ameliorate their impact. The group intervention was conducted via Zoom (www.zoom.us), which allowed the service to continue in the wake of COVID-19. Data were collected at baseline and at completion of the group intervention, focusing on the functional and perceived impacts of neuropsychological sequelae in these participants and on changes related to the group intervention.
Results:
Data showed no significant changes from baseline to follow-up in this population, likely owing to a severely truncated sample size. Despite the lack of significant findings, the data appear to trend negatively. Although these findings do not provide conclusive evidence for this ACT-based group as an intervention for neuropsychological late effects in survivors of childhood cancer, the data suggest some interesting trends, which will be explored further in this presentation.
Conclusions:
The results of this study underscore the importance of attending to neuropsychological symptoms in survivors of childhood cancer, especially within the first few years following the completion of treatment. As survivorship continues to increase, it will be essential to continue examining the impact of neuropsychological late effects and how the field of neuropsychology can best serve this population. This study was severely limited by a small sample size, a single clinician delivering the protocol, and a truncated timeline. Further research will examine the impact of this study protocol in a larger sample, which will increase statistical power. In addition, future research must better explore strategies for early and consistent neuropsychological intervention in this population.
Survivors of childhood ALL treated with CNS-directed chemotherapy are at risk for neurocognitive deficits that emerge during treatment and impact functional and quality of life outcomes throughout survivorship. Neurocognitive monitoring is the recommended standard of care for this population; however, information on assessment timing and recommendations for assessment measures are limited. We examined the role of serial neurocognitive monitoring completed during protocol-directed therapy in predicting parent-reported neurocognitive late effects during survivorship.
Participants and Methods:
Parents of 61 survivors of childhood ALL completed a semi-structured survey on parents’ perspectives of neurocognitive late effects as part of a quality improvement project. Survivors completed protocol-directed treatment for newly diagnosed ALL on two consecutive clinical trials (St. Jude Total Therapy Study 15, 47.5%; Total Therapy 16, 52.5%). The majority of survivors were White (86.9%), 52.5% were male, and 49% were treated for low-risk disease. Mean age at diagnosis was 7.77 years (standard deviation [SD] = 5.31). Mean age at survey completion was 15.25 years (SD = 6.29). Survivors completed neurocognitive monitoring at two prospectively determined time points: during and at the end of protocol-directed therapy for childhood ALL.
Results:
During survivorship, parents reported that 73.8% of survivors experienced neurocognitive late effects, with no difference in frequency of endorsement by protocol (p = .349), age at diagnosis (p = .939), patient sex (p = .417), or treatment risk arm (p = .095). Of survivors with late effects, 44.3% sought intervention in the form of educational programming (i.e., a 504 plan or Individualized Education Program). Among the group with late effects (values reported as M [SD] for those without vs. with educational programming), those with educational programming had worse verbal learning (CVLT Trials 1-5 Total T-score: 56.36 [11.19] vs. 47.00 [10.12], p = .047) and verbal memory (CVLT Short Delay Free Recall Z: 0.86 [0.67] vs. -0.21 [1.01], p = .007; Long Delay Free Recall Z: 0.91 [0.92] vs. -0.25 [1.25], p = .020) during therapy. Survivors with educational programming also had lower estimated IQ (SS: 109.25 [13.48] vs. 98.07 [15.74], p = .045) and greater inattention (CPT Beta T-score: 56.80 [13.95] vs. 75.70 [22.93], p = .017) at the end of therapy.
Conclusions:
Parents report that nearly three quarters of children treated for ALL with chemotherapy-only protocols experience neurocognitive late effects during early survivorship, with no difference in frequency by established risk factors. Of those with late effects, nearly half required educational programming implemented after diagnosis, suggesting a significant impact on school performance. Neurocognitive monitoring beginning during therapy has utility for predicting educational need in survivors experiencing late effects. Our findings provide direction on the timing and content of neurocognitive monitoring, which is the recommended standard of care for childhood cancer patients treated with CNS-directed therapy.
The global prevalence of persons living with dementia will soon exceed 50 million, and most of these individuals reside in low- and middle-income countries (LMICs). In South Africa, one such LMIC, the physician-to-patient ratio of 9:10,000 severely limits the capacity of clinicians to screen, assess, diagnose, and treat dementias. One way to address this limitation is to use mobile health (mHealth) platforms to scale up neurocognitive testing. In this paper, we describe one such platform, a brief tablet-based cognitive assessment tool (NeuroScreen) that can be administered by lay health providers. It may help identify patients with cognitive impairment (related, for instance, to dementia) and thereby improve clinical care and outcomes. However, there is a lack of data regarding (a) the acceptability of this novel technology for delivering neurocognitive assessments to LMIC-resident older adults, and (b) the influence of technology-use experience on the NeuroScreen performance of LMIC-resident older adults. This study aimed to fill that knowledge gap using a sample of cognitively impaired South African older adults.
Participants and Methods:
Participants were 60 older adults (63.33% female; 91.67% right-handed; age M = 68.90 years, SD = 9.42, range = 50-83), all recruited from geriatric and memory clinics in Cape Town, South Africa. In a single 1-hour session, they completed the entire NeuroScreen battery (Trail Making, Number Speed, Finger Tapping, Visual Discrimination, Number Span Forward, Number Span Backward, List Learning, List Recall) as well as a study-specific questionnaire assessing acceptability of NeuroScreen use and overall experience and comfort with computer-based technology. We summed across 11 questionnaire items to derive a single variable capturing technology-use experience, with higher scores indicating more experience.
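For illustration, a minimal R sketch of this scoring step, using hypothetical column names (not the authors' code): the 11 items are summed into a technology-use composite, which is then correlated with a subtest score.

```r
# Minimal sketch (hypothetical variable names): derive the technology-use
# composite and relate it to one NeuroScreen subtest score.
tech_items <- paste0("tech_q", 1:11)               # the 11 questionnaire items
dat$tech_experience <- rowSums(dat[, tech_items])  # higher = more experience

cor.test(dat$tech_experience, dat$trail_making_2)  # Pearson r, as in Results
```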
Results:
Almost all participants (93.33%) indicated that NeuroScreen was easy to use. A similar proportion (90.00%) indicated they would be comfortable completing NeuroScreen at routine doctor's visits. Only 6.67% reported feeling uncomfortable using a tablet, despite about three-quarters (76.67%) reporting never having used a touchscreen tablet before. Almost one in five participants (18.33%) reported owning a computer, 10.00% a tablet, and 70.00% a smartphone. Correlations between test performance and technology-use experience were statistically significant (or strongly tended toward significance) for most NeuroScreen subtests that assessed higher-order cognitive functioning and required the participant to manipulate the tablet themselves: Trail Making 2 (a measure of cognitive switching ability), r = .24, p = .05; Visual Discrimination A (complex processing speed [number-symbol matching]), r = .38, p = .002; Visual Discrimination B (pattern recognition), r = .37, p = .004; Number Speed (simple information processing speed), r = .36, p = .004. For the most part, there were no such significant associations when the NeuroScreen subtest required only verbal input from the participant (i.e., on the list learning and number span tasks).
Conclusions:
NeuroScreen, a tablet-based neurocognitive screening tool, appears feasible for use among older South Africans, even if they are cognitively impaired and have limited technological familiarity. However, test performance might be influenced by amount of technology-use experience; clinicians using the battery must consider this in their interpretations.
To present the Mobile Toolbox (MTB), an expandable library of cognitive and other tests, including adapted versions of NIH Toolbox® measures. The MTB provides a complete research platform for app creation, study management, data collection, and data management. We will describe the MTB project and research platform and demonstrate examples of assessments.
Participants and Methods:
MTB is the product of an NIH-funded, multi-institutional effort involving Northwestern University, Sage Bionetworks, Penn State, University of California San Francisco, University of California San Diego, Emory University, and Washington University. The MTB assessment library is a dynamic repository built upon Sage Bionetworks' mobile health platform. All MTB measures are created or adapted for a mobile interface on iOS and Android smartphones. Guided by the principles of open science, many components are open source, allowing researchers and developers to integrate externally developed tests, including supplemental scales (e.g., passively collected contextual factors) assessing variables such as mood and fatigue that might influence cognitive test performance.
Results:
The current MTB library includes eight core cognitive tests based on well-established neuropsychological measures: two language tasks (Spelling and Word Meaning), two executive functioning tasks (Arrow Matching and Shape-Color Sorting), an associative memory task (Faces and Names), an episodic memory task (Arranging Pictures), a working memory task (Sequences) and a processing speed task (Numbers and Symbols). Additional cognitive assessments from other popular test libraries including the International Cognitive Ability Resource (ICAR), Cognitive Neuroscience Test Reliability and Clinical Applications for Schizophrenia (CNTRACS) and Test My Brain are currently being implemented, as are non-cognitive measures from the NIH Toolbox Emotion Battery and the Patient-Reported Outcomes Measurement Information System (PROMIS). The MTB library includes measures suitable for use in research studies incorporating point-in-time and burst designs as well as ecological momentary assessment (EMA).
Conclusions:
The MTB was created to address many of the scientific, practical, and technical challenges of cognitive assessment by capitalizing on advances in technology, measurement, and cognitive research. Initial psychometric evaluation of measures has been performed, and additional clinical validation is underway in studies with persons at risk for cognitive impairment or Alzheimer’s disease (AD), diagnosed with mild cognitive impairment (MCI) or AD, Parkinson’s disease, and HIV-associated neurocognitive disorders. Calculation of norms and reliable-change indicators is in progress. The MTB is currently available to beta testers, with public release planned for summer 2023. Clinical researchers will be able to use the MTB system to design smartphone-based test batteries, deploy and manage mobile data collection in their research studies, and aggregate and analyze results in the context of large-scale norming data.
Although the cognitive profiles of people experiencing homelessness have been described in the literature, the neuropsychological profile of people experiencing complex homelessness has not been delineated. Complex homelessness is homelessness that continues despite the provision of bricks-and-mortar solutions. People experiencing complex homelessness often have an array of physical health, mental health, substance use, neurodevelopmental, and neurocognitive disorders. The present study aimed to delineate the neuropsychological profile of people experiencing complex homelessness and explore the utility of neuropsychological assessment in supporting this population.
Participants and Methods:
Nineteen people experiencing complex homelessness in Sydney, Australia, were consecutively referred by specialist homelessness services for neuropsychological assessment. They underwent comprehensive assessment of intelligence, memory and executive functioning and completed questionnaires to screen for the presence of ADHD, PTSD, depression, anxiety and stress. A range of performance validity measures were included. Referrers were asked to complete questionnaires on history of childhood trauma, psychological functioning, drug and alcohol use, functional cognitive abilities, homelessness factors, personality, risk of cognitive impairment and adaptive functioning and to note existing or suspected mental health, neurodevelopmental and neurocognitive disorders. Referrers also completed a post-assessment pathways questionnaire to identify whether the neuropsychological assessment facilitated referral pathways (e.g., for government housing or financial assistance). Clinicians completed a post-assessment diagnosis survey, which was compared to the pre-assessment known or suspected diagnoses. Finally, referrers were asked to complete a satisfaction questionnaire regarding the neuropsychological assessment.
Results:
Mean (SD) WAIS-IV indexes were VCI = 81.1 (14.5), PRI = 86.1 (10.9), WMI = 80.5 (13.0), and PSI = 81.6 (10.2). Mean WMS-IV Flexible (LMVR) indexes were AMI = 68.3 (19.6), VMI = 77.1 (19.3), IMI = 72.7 (17.2), and DMI = 70.5 (17.6). The majority of participants showed unusual differences between obtained and TOPF-predicted WAIS-IV scores and between WAIS-IV General Ability and WMS-IV Flexible (LMVR) scores. Demographically corrected scores on tests of executive functioning were mostly one or more standard deviations below the mean. The majority of participants screened positive on measures of executive dysfunction, PTSD, and ADHD and had elevated self-reported psychological distress scores. At least one new diagnosis was made for nine (47%) participants, established diagnoses were confirmed for two (11%) participants, diagnoses were supported for 15 (79%) participants, tentative diagnoses were made for 16 (84%) participants, and five (26%) participants had at least one diagnosis disconfirmed or unsupported. Referrers indicated that the majority of post-assessment pathways were more accessible following the neuropsychological assessment and that they were very satisfied with the neuropsychological assessments overall.
Conclusions:
This is one of the first studies to delineate the neuropsychological profile of people experiencing complex homelessness using robust psychometric approaches, including performance validity tests. This population experiences a high burden of cognitive impairment and associated substance use, neurodevelopmental and mental health comorbidities. Neuropsychological assessment makes referral pathways more accessible and is valued by referrers of people experiencing complex homelessness.
There is increasing recognition of cognitive and pathological heterogeneity in early-stage Alzheimer’s disease and other dementias. Data-driven approaches have demonstrated cognitive heterogeneity in those with mild cognitive impairment (MCI), but few studies have examined this heterogeneity and its association with progression to MCI/dementia in cognitively unimpaired (CU) older adults. We identified cluster-derived subgroups of CU participants based on comprehensive neuropsychological data and compared baseline characteristics and rates of progression to MCI/dementia, or to a Dementia Rating Scale (DRS) score of <129, across subgroups.
Participants and Methods:
A hierarchical cluster analysis was conducted using 11 baseline neuropsychological test scores from 365 CU participants in the UCSD Shiley-Marcos Alzheimer’s Disease Research Center (age M = 71.93 years, SD = 7.51; 55.9% women; 15.6% Hispanic/Latino/a/x/e). A discriminant function analysis was then conducted to test whether the individual neuropsychological scores predicted cluster-group membership. Cox regressions examined the risk of progression to a consensus diagnosis of MCI or dementia, or to a DRS score <129, by cluster group.
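A minimal R sketch of this pipeline, assuming a data frame `neuro` with the 11 test scores, follow-up time, and progression status (the variable names and Ward linkage are illustrative, not taken from the study):

```r
library(MASS)      # lda()
library(survival)  # Surv(), coxph()

z <- scale(neuro[, test_cols])               # 11 baseline neuropsychological scores
hc <- hclust(dist(z), method = "ward.D2")    # hierarchical cluster analysis
neuro$cluster <- factor(cutree(hc, k = 5))   # 5 cluster-derived subgroups

# Discriminant function analysis: do the test scores predict membership?
d <- data.frame(cluster = neuro$cluster, z)
dfa <- lda(cluster ~ ., data = d)
mean(predict(dfa)$class == d$cluster)        # proportion correctly classified

# Cox regression: risk of progression to MCI/dementia by cluster group
coxph(Surv(time_to_event, progressed) ~ cluster, data = neuro)
```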
Results:
Cluster analysis identified 5 groups: All-Average (n=139), Low-Visuospatial (n=46), Low-Executive (n=51), Low-Memory/Language (n=83), and Low-All Domains (n=46). The discriminant function analysis using the neuropsychological measures to predict group membership into these 5 clusters correctly classified 85.2% of the participants. Subgroups had unique demographic and clinical characteristics. Relative to the All-Average group, the Low-Visuospatial (hazard ratio [HR] 2.39, 95% CI [1.03, 5.56], p=.044), Low-Memory/Language (HR 4.37, 95% CI [2.24, 8.51], p<.001), and Low-All Domains (HR 7.21, 95% CI [3.59, 14.48], p<.001) groups had greater risk of progression to MCI/dementia. The Low-Executive group was also twice as likely to progress to MCI/dementia compared to the All-Average group, but did not statistically differ (HR 2.03, 95% CI [0.88, 4.70], p=.096). A similar pattern of results was found for progression to a DRS score <129, with the Low-Executive (HR 2.82, 95% CI [1.26, 6.29], p=.012), Low-Memory/Language (HR 3.70, 95% CI [1.80, 7.56], p<.001) and Low-All Domains (HR 5.79, 95% CI [2.74, 12.27], p<.001) groups at greater risk of progression to a DRS score <129 than the All-Average group. The Low-Visuospatial group was also twice as likely to progress to DRS <129 compared to the All-Average group, but did not statistically differ (HR 2.02, 95% CI [0.80, 5.06], p=.135).
Conclusions:
Our results add to a growing literature documenting heterogeneity in the earliest cognitive and pathological presentations associated with Alzheimer’s disease and related disorders. Participants with subtle memory/language, executive, and visuospatial weaknesses all declined at faster rates than the All-Average group, suggesting that there are multiple pathways and/or unique subtle cognitive decline profiles that ultimately lead to a diagnosis of MCI/dementia. These results have important implications for early identification of individuals at risk for MCI/dementia. Given that the same classification approach may not be optimal for everyone, determining profiles of subtle cognitive difficulties in CU individuals and implementing neuropsychological test batteries that assess multiple cognitive domains may be a key step towards an individualized approach to early detection and fewer missed opportunities for early intervention.
History of traumatic brain injury (TBI) is associated with increased risk of dementia, but few studies have evaluated whether TBI history alters the course of neurocognitive decline, and the existing literature on this topic is limited by short follow-up and small samples. The primary aim of this study was to evaluate whether a history of TBI (TBI+) influences neurocognitive decline later in life among older adults with or without cognitive impairment [i.e., normal aging, Mild Cognitive Impairment (MCI), or dementia].
Participants and Methods:
Participants included individuals from the National Alzheimer’s Coordinating Center (NACC) who were at least 50 years old and with 3 to 6 visits (M number of visits = 4.43). Participants with any self-reported history of TBI (n = 1,467) were matched 1:1 to individuals with no reported history of TBI (TBI-) from a sample of approximately 45,000 participants using case-control matching based on age (+/- 2 years), sex, education, race, ethnicity, cognitive diagnosis [cognitively normal (CN), MCI, or all-cause dementia], etiology of cognitive impairment, functional decline (Clinical Dementia Rating Scale, CDR), number of Apolipoprotein E4 (APOE ε4) alleles, and number of annual visits (3 to 6). Mixed linear models were used to assess longitudinal neuropsychological test composites (using NACC normative data) of executive functioning/attention/speed (EFAS), language, and memory in TBI+ and TBI- participants. Interactions between TBI and demographics, APOE ε4 status, and cognitive diagnosis were also examined.
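A minimal R sketch of the matching and longitudinal modeling steps, with hypothetical NACC-style variable names (the authors' exact matching routine and options, such as the age ±2-year window, are not specified here):

```r
library(MatchIt)  # 1:1 case-control matching
library(lme4)     # linear mixed models

m <- matchit(tbi ~ age + sex + education + race + ethnicity + dx +
               etiology + cdr + apoe4_count + n_visits,
             data = nacc, method = "nearest", ratio = 1)
matched <- match.data(m)

# Memory composite over visits; random intercept and slope per participant.
# The TBI-by-time interaction tests whether decline differs by TBI history.
lmer(memory_z ~ tbi * years_from_baseline + (years_from_baseline | id),
     data = matched)
```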
Results:
Following matching procedures, TBI+ (n=1,467) and TBI- (n=1,467) groups were nearly identical in age (TBI+ M = 71.59, SD = 8.49; TBI- M = 71.63, SD = 8.44), education (TBI+ M = 16.12, SD = 2.59; TBI- M = 16.10, SD = 2.52), sex (both 55% male), race (both 90% White), ethnicity (both 98% non-Hispanic), APOE ε4 alleles (both 0 = 62%, 1 = 33%, 2 = 5%), baseline cognitive diagnoses (both CN = 60%, MCI = 18%, dementia = 12%), and global CDR (TBI+ M = 0.30, SD = 0.38; TBI- M = 0.30, SD = 0.38). At baseline, groups had similar z-scores in EFAS (TBI+ M = -0.02, SD = 1.21; TBI- M = -0.04, SD = 1.27), language (TBI+ M = -0.48, SD = 0.98; TBI- M = -0.55, SD = 1.05), and memory (TBI+ M = -0.45, SD = 1.28; TBI- M = -0.45, SD = 1.28). Neuropsychological functioning worsened longitudinally, but the course of change did not differ between TBI groups (p’s > .110). There were no significant interactions between TBI history and age, sex, education, race/ethnicity, APOE ε4 status, or cognitive diagnosis (all p’s > .027).
Conclusions:
In this matched case-control design, our findings suggest that a history of TBI, regardless of demographic factors, APOE ε4 status, and cognitive diagnosis, does not significantly alter the course of neurocognitive functioning later in life in older adults with and without cognitive impairment. Future clinicopathological longitudinal studies with well-characterized TBI histories and associated clinical course are needed to help clarify the mechanism by which TBI may increase dementia risk for some individuals without affecting the course of decline.
Sexual dimorphism in human brain structure and behavior is influenced by exposure to sex hormones during critical developmental periods. In children, cancer and cancer treatments may alter hormone activity and brain development, impacting neurocognitive functions.
Participants and Methods:
Five-year survivors of childhood cancer (N=15,560) diagnosed at <21 years of age between 1970 and 1999, and 3,206 siblings, from the Childhood Cancer Survivor Study completed the Neurocognitive Questionnaire (NCQ), a measure of self-reported task efficiency (TE), emotion regulation (ER), organization, and working memory (WM). We compared rates of cognitive impairment (i.e., NCQ scores >90th percentile) in survivors and same-sex siblings, and sex differences in risk factors for cognitive impairment (i.e., treatment exposures, chronic health conditions [CHCs], cancer diagnosis, and age at diagnosis), using modified Poisson regressions.
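Modified Poisson regression estimates relative risks for a binary outcome by pairing a log-link Poisson model with robust (sandwich) standard errors. A minimal R sketch with hypothetical variable names (not the authors' code):

```r
library(sandwich)  # robust variance estimator
library(lmtest)    # coeftest()

# Binary impairment indicator regressed on group and covariates
fit <- glm(te_impaired ~ group + sex + age_dx,
           family = poisson(link = "log"), data = ccss)
coeftest(fit, vcov = vcovHC(fit, type = "HC0"))  # robust (sandwich) SEs
exp(coef(fit))                                   # risk ratios
```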
Results:
Survivors were more likely to report cognitive impairment than same-sex siblings (males: TE OR=2.3, p<.001; ER OR=1.7, p=.008; organization OR=1.5, p=.04; WM OR=2.3, p<.001; females: TE OR=2.6, p<.001; ER OR=1.9, p<.001; organization OR=1.5, p=.02; WM OR=2.6, p<.001). Within survivors, females were more likely than males to report impairment in TE (OR=1.2, p=.001), ER (OR=1.5, p<.001), and WM (OR=1.2, p<.001). There were no sex differences in symptom severity in siblings (all ps>.05). Risk factors for cognitive impairment in survivors included cranial radiation dose (TE: <20 Gy OR=1.5, p=.008; ≥20 Gy OR=2.5, p<.001; ER: OR=1.5, p<.001; organization: <20 Gy OR=1.4, p<.001; WM: <20 Gy OR=1.8, p<.001; ≥20 Gy OR=2.7, p<.001) and presence of moderate to severe CHCs (TE: 1 CHC OR=1.9, p<.001; >1 CHC OR=3.6, p<.001; ER: 1 CHC OR=1.7, p<.001; >1 CHC OR=2.2, p<.001; organization: 1 CHC OR=1.5, p=.001; >1 CHC OR=2.5, p<.001; WM: 1 CHC OR=1.8, p<.001; >1 CHC OR=4.1, p<.001). There were sex differences in cognitive impairment risk factors in survivors. In females, cranial radiation dose (TE: <20 Gy OR=1.6, p=.02; ≥20 Gy OR=1.4, p=.01), leukemia diagnosis (TE OR=1.4, p=.02), or diagnosis between ages 3 and 5 years (WM OR=1.4, p=.02) conferred higher risk for cognitive impairment than in males with the same history. Females diagnosed with Hodgkin’s lymphoma (organization OR=0.61, p=.05) or non-Hodgkin’s lymphoma (organization OR=0.55, p=.03) were at lower risk for cognitive impairment compared to males.
Conclusions:
We found sex-specific differences in rates of, and risk factors for, neurocognitive impairment, suggesting a sex-specific vulnerability. Future studies examining interactions between sex hormones and treatment exposures during brain development will enable tailoring of treatment and follow-up interventions to ensure that quality of life is maximized.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect, and provide insight into, the pathophysiological processes of AD. To this end, a reactive astrocytic marker, plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with that of plasma p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
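A minimal R sketch of the diagnostic-accuracy step, with hypothetical variable names (not the authors' code):

```r
library(pROC)  # roc(), auc()

# Covariate-adjusted logistic model: GFAP predicting impairment status
fit <- glm(impaired ~ scale(gfap) + age + sex + race + education + apoe4,
           family = binomial, data = adrc)

p_hat <- predict(fit, type = "response")  # predicted probabilities
auc(roc(adrc$impaired, p_hat))            # AUC for the GFAP model
```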
Results:
The mean (SD) age of the sample was 74.34 (7.54) years; 319 (56.3%) participants were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had accuracy similar to p-tau181 and NfL in detecting cognitive impairment; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Religion's neural underpinnings have long been a topic of speculation and debate, but an emerging neuroscience of religion is beginning to clarify which regions of the brain integrate moral, ritual, and supernatural religious beliefs with functionally adaptive responses. In my presentation, I will review evidence indicating that religious cognition involves a complex interplay among the brain regions underpinning cognitive control, social reasoning, social motivations, emotion, reinforcement, and ideological beliefs. I will then conclude my presentation by summarizing current and future research efforts and why searching for God in the brain is critical to our understanding of human behavior.
Upon conclusion of this course, learners will be able to:
1. Summarize the methods used to study the neural basis of religious belief.
An acquired brain injury (ABI) is a neurological insult that produces physical injury to the brain; causes include cerebrovascular accidents (CVA) and traumatic brain injuries (TBI). Brain injuries can cause cognitive, emotional, and social problems, which have the potential to severely alter a person’s independence and quality of life. Loneliness, the subjective experience of social isolation, has been shown to be the best predictor of mental health problems and poor quality of life in patients with ABI. This study aimed to explore the relationship between cognitive, emotional, and social determinants and loneliness in Puerto Ricans with ABI in the chronic phase.
Participants and Methods:
Cross-sectional, exploratory, and correlational methods were implemented. Assessments included the Frontal Systems Behavior Scale - Spanish version (FrSBe-SP), the Perth Emotional Reactivity Scale - Spanish version (PERS), the Anticipated Stigma and Concealment (ASC) measure, and the University of California Los Angeles Loneliness Scale (UCLA-LS).
Results:
A total of 17 individuals participated (n=17); 29% were female. Forty-seven percent had a history of previous CVA and 52% a history of TBI. Correlational analyses suggested a positive and significant relationship between executive dysfunction (FrSBe-SP) and feelings of loneliness (UCLA-LS) (r=.601), as well as between neuroticism/negative emotional reactivity (PERS) and feelings of loneliness (UCLA-LS) (r=.736). There was no significant relationship between anticipated stigma (ASC) and feelings of loneliness (UCLA-LS) (r=.282).
Conclusions:
Our findings suggest that there is a significant relationship between cognitive determinants (executive functions) and emotional determinants (neuroticism) with feelings of loneliness in people with a history of ABI. These results support the connection between executive dysfunction, the tendency to experience negative emotions, and the subjective experience of loneliness, consistent with previous studies. However, our study did not find any significant relationship between interactional determinants, such as stigma and concealment, and loneliness. Understanding the role of cognition, emotions, and social variables in reported feelings of loneliness is important for clinical neuropsychological assessment and rehabilitation interventions.
Executive function (EF) is a self-regulatory construct well-established as a predictor of long-term academic achievement and socioemotional functioning in children (Best et al., 2009; Diamond, 2013; Zelazo & Carlson, 2020). Traumatic brain injury (TBI) in childhood frequently results in EF deficits (Beauchamp & Anderson, 2013; Levin & Hanten, 2005). In comparison to adults (Okonkwo et al., 2013), there is an absence of viable blood biomarkers for pediatric TBI to assist in diagnosis and prognosis. Osteopontin (OPN), an inflammatory cytokine, has recently been identified as a putative pediatric TBI blood biomarker (Gao et al., 2020). However, more work is needed to establish OPN’s utility in predicting functional outcomes. Thus, the present study aimed to test relations between OPN measured during the first 72 hours of hospitalization and EF 6-12 months post injury among a sample of pediatric TBI patients.
Participants and Methods:
The sample consisted of 38 children with TBI (age at injury = 4.60-16.67 years, M = 10.61 years; 65.8% male; lowest Glasgow Coma Scale [GCS] score = 3-15, M = 9.97) whose parents completed the Behavior Rating Inventory of Executive Function, Second Edition (BRIEF-2; Gioia et al., 2015) 6-12 months post injury. Plasma OPN was measured at hospital admission and at 24, 48, and 72 hours after admission. T-scores for each BRIEF-2 clinical scale (Inhibit, Self-Monitor, Shift, Emotional Control, Initiate, Working Memory, Plan/Organize, Task-Monitor, Organization of Materials) and composite index (Behavior Regulation Index, Emotion Regulation Index, Cognitive Regulation Index, Global Executive Composite) were used in analyses.
Results:
Correlation analyses revealed large positive associations (rs = .50-.73, ps = <.001 to .039) between 48-hour OPN and all BRIEF-2 scales/indices except Initiate. OPN at 24 hours positively correlated with Task-Monitor (r = .40, p = .037). Bivariate logistic regression analyses testing whether OPN predicted at least mildly elevated BRIEF-2 T-scores (>60) did not yield significant associations. Additional supplementary analyses tested whether alternative injury markers - glial fibrillary acidic protein (GFAP), ubiquitin C-terminal hydrolase-L1 (UCH-L1), and S100 calcium-binding protein B (S100B) - measured at all time points, as well as lowest GCS score, correlated with EF. These revealed the following: admission S100B positively correlated with Inhibit (r = .34, p = .045), 48-hour UCH-L1 negatively correlated with Initiate (r = -.49, p = .041) and the Cognitive Regulation Index (r = -.48, p = .044), and 72-hour UCH-L1 negatively correlated with Initiate (r = -.47, p = .048).
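A minimal R sketch of these two analysis steps, with hypothetical variable names (not the authors' code):

```r
# Biomarker-outcome correlation: 48-hour OPN vs. a BRIEF-2 index T-score
cor.test(tbi$opn_48h, tbi$brief_gec_t)

# Logistic regression for at least mildly elevated scores (T > 60)
tbi$gec_elevated <- as.integer(tbi$brief_gec_t > 60)
glm(gec_elevated ~ opn_48h, family = binomial, data = tbi)
```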
Conclusions:
Findings showed higher OPN at 48 hours post admission was broadly related to worse parent-reported EF 6-12 months later, with 24-hour OPN also showing limited associations. Higher levels of alternative injury markers likewise showed limited associations with EF outcomes. Null logistic regression findings may be due to few participants having elevated BRIEF-2 scores. Disrupted EF development may be more noticeable after longer time periods as children age and self-regulatory demands increase. Overall, OPN was found to more consistently predict EF outcomes than GCS score and other injury markers. This could be because OPN is a marker of inflammation, which may be particularly predictive of TBI cognitive outcomes.
Assessment of medication management, an instrumental activity of daily living (IADL), is particularly important among Veterans, who are prescribed an average of 25-40 prescriptions per year (Nguyen et al., 2017). The Pillbox Test (PT) is a brief, performance-based measure designed as an ecologically valid measure of executive functioning (EF; Zartman, Hilsabeck, Guarnaccia, & Houtz, 2013), the cognitive domain most predictive of successful medication schedule management (Suchy, Ziemnik, Niermeyer, & Brothers, 2020). However, a validation study by Logue, Marceaux, Balldin, and Hilsabeck (2015) found that EF predicted PT performance beyond processing speed (PS), but not beyond the language, attention, visuospatial, and memory domains combined. Thus, this project sought to increase the generalizability of the latter study by replicating and extending it with a larger set of neuropsychological tests.
Participants and Methods:
Participants included 176 patients in a mixed clinical sample (5.1% female; 43.2% Black/African American, 55.7% White; M age = 70.7 years, SD = 9.3; M education = 12.6 years, SD = 2.6) who completed a comprehensive neuropsychological evaluation in a VA medical center. All participants completed the PT, in which they had five minutes to organize five pill bottles into a seven-day pillbox according to standardized instructions on the labels. Participants also completed some combination of 26 neuropsychological tests (i.e., participants did not complete every test, as evaluations were tailored to disparate referral questions). Correlations between completed tests and number of pillbox errors were evaluated. These tests were then combined into the following six domains: language, visuospatial, working memory (WM), psychomotor/PS, memory, and EF. Hierarchical multiple regression was completed using these domains to predict pillbox errors.
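A minimal R sketch of the incremental-validity test, with hypothetical variable names (not the authors' code):

```r
# Does EF add to processing speed in predicting Pillbox Test errors?
m1 <- lm(pt_errors ~ ps, data = va)
m2 <- update(m1, . ~ . + ef)

anova(m1, m2)  # significance test of the increment (Delta R^2)
summary(m2)$r.squared - summary(m1)$r.squared  # size of Delta R^2
```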
Results:
Spearman’s correlation coefficients indicated that 25 tests had a weak to moderate relationship with PT total errors (rs = .23-.51); forward digit span was not significantly related (rs = .13). A forced-entry multiple regression was run to predict PT total errors from the six domains. The model accounted for 29% of the variance in PT performance, F(6, 169) = 11.56, p < .001. Of the domains, psychomotor/PS made the greatest contribution, t(169) = 2.73, p = .007, followed by language, t(169) = 2.41, p = .017, and WM, t(169) = 2.15, p = .033. Visuospatial performance and EF did not make significant contributions (ps > .05). Next, two hierarchical multiple regressions were run. Results indicated that EF predicted PT performance beyond measures of PS, ΔR² = .02, p = .044, but not beyond the combination of all cognitive domains, ΔR² = .00, p = .863.
Conclusions:
Results of this study partially replicated the findings of Logue et al. (2015). Namely, EF predicted PT performance beyond PS, but not other cognitive domains. However, when all predictors were entered into the same model, visuospatial performance did not significantly contribute to the prediction of pillbox errors. These results suggest that providers may benefit from investigating medication management abilities when deficits in PS, WM, and/or language are identified. Further research is needed to better understand which domains best predict PT failure.
Amyotrophic Lateral Sclerosis (ALS) is a devastating neurodegenerative disease that results in progressive decline in motor function in all patients and cognitive impairment in a subset of patients. Evidence suggests that cognitive reserve (CR) may protect against cognitive and motor decline in ALS, but less is known about the impact of specific occupational skills and requirements on clinical outcomes in ALS. We expected that a history of working jobs with more complex cognitive demands would protect against cognitive decline, while jobs that require fine and complex motor skills would protect against motor dysfunction.
Participants and Methods:
Participants were 150 ALS patients recruited from the University of Pennsylvania’s Comprehensive ALS Center. Participants underwent clinical and neuropsychological evaluations within 1 year of ALS diagnosis. Cognitive performance was measured using the Edinburgh Cognitive and Behavioral ALS Screen (ECAS), which includes ALS-Specific (e.g., verbal fluency, executive functions, language, social cognition) and Non-Specific (e.g., memory, visuospatial functions) composite scores. Motor functioning was measured using the Penn Upper Motor Neuron (UMN) scale and the ALS Functional Rating Scale (ALS-FRS). Occupational skills and requirements for each participant were assessed using data from the Occupational Information Network (O*NET) Database. O*NET data were assessed using principal components analysis, and 17 factor scores were derived representing distinct worker characteristics (n=5), occupational requirements (n=7), and worker requirements (n=5). These scores were entered as independent variables in multiple linear regression models using ECAS, UMN, and ALS-FRS scores as dependent variables, covarying for education.
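A minimal R sketch of this two-stage approach, assuming a numeric matrix of O*NET descriptors and hypothetical outcome names (the authors' exact extraction and rotation settings are not specified):

```r
# Stage 1: principal components of O*NET occupational descriptors
pca <- prcomp(onet_vars, center = TRUE, scale. = TRUE)
als[paste0("PC", 1:17)] <- pca$x[, 1:17]   # 17 retained factor scores

# Stage 2: factor scores predicting a clinical outcome, covarying education
f <- reformulate(c(paste0("PC", 1:17), "education"),
                 response = "ecas_specific")
summary(lm(f, data = als))
```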
Results:
Preserved ECAS ALS-Specific performance was associated with jobs that involve greater reasoning abilities (β=2.03, S.E.=0.79, p<.05), analytic skills (β=3.08, S.E.=0.91, p<.001), and humanities knowledge (β=1.20, S.E.=0.58, p<.05), as well as less exposure to environmental hazards (β=-2.42, S.E.=0.76, p<.01) and fewer demands on visual-perceptual (β=-1.75, S.E.=0.73, p<.05) and technical skills (β=-1.62, S.E.=0.63, p<.05). Preserved ECAS Non-Specific performance was associated with jobs that involve greater exposure to conflict (β=0.82, S.E.=0.33, p<.05) and social abilities (β=0.65, S.E.=0.29, p<.05). Jobs involving greater precision skills (β=1.92, S.E.=0.79, p<.05) and reasoning ability (β=2.10, S.E.=0.95, p<.05) were associated with greater disease severity on the UMN, while jobs involving more health services knowledge were associated with worse motor functioning on the ALS-FRS (β=-1.30, S.E.=0.60, p<.05).
Conclusions:
Specific occupational skills and requirements show protective effects on cognitive functioning in ALS, while others confer risk for cognitive and motor dysfunction. Preserved cognitive functioning was linked to a history of employment in jobs requiring strong reasoning abilities, social skills, and humanities knowledge, while poorer cognitive functioning was linked to jobs involving a high risk of exposure to environmental hazards and high visuo-perceptual and technical demands. In contrast, we did not find evidence of motor reserve, as no protective effects of occupational skills and requirements were found for motor symptoms, and jobs involving greater precision skills, reasoning abilities, and health services knowledge were linked to worse motor functioning. Our findings offer new insights into how occupational history may protect against cognitive impairment or confer elevated risk for cognitive and motor dysfunction in ALS.
Existing research has demonstrated that neuropsychiatric/behavioral-psychological symptoms of dementia (BPSD) frequently contribute to worse prognosis in patients with neurodegenerative conditions (e.g., increased functional dependence, worse quality of life, greater caregiver burden, faster disease progression). BPSD are most commonly measured via the Neuropsychiatric Inventory (NPI), or its briefer, informant-rated questionnaire (NPI-Q). Despite the NPI-Q’s common use in research and practice, there is disarray in the literature concerning the NPI-Q’s latent structure and reliability, possibly related to differences in methods between studies. Also, hierarchical factor models have not been considered, even though such models are gaining favor in the psychopathology literature. Therefore, we aimed to compare different factor structures from the current literature using confirmatory factor analyses (CFAs) to help determine the best latent model of the NPI-Q.
Participants and Methods:
This sample included 20,500 individuals (57% female; 80% White, 12% Black, 8% Hispanic) with a mean age of 71 years (SD = 10.41) and a mean of 15 years of education (SD = 3.43). Individuals were included if they had completed an NPI-Q during their first visit at one of 33 Alzheimer's Disease Research Centers reporting to the National Alzheimer's Coordinating Center (NACC). All CFA and reliability analyses were performed with the lavaan and semTools R packages, using a diagonally weighted least squares (DWLS) estimator. Eight single-level models using full or modified versions of the NPI-Q were compared, and the top three were later tested in bifactor form.
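For illustration, a lavaan CFA with a DWLS estimator takes the form below; the factor names and item assignments here are placeholders, not any of the published models under comparison:

```r
library(lavaan)

model <- '
  # placeholder item-to-factor assignments, not a published NPI-Q model
  hyperactivity =~ agitation + irritability + disinhibition + motor
  psychosis     =~ delusions + hallucinations + nighttime
  affective     =~ depression + anxiety + apathy + appetite
'
fit <- cfa(model, data = npiq, estimator = "DWLS", ordered = TRUE)
fitMeasures(fit, c("srmr", "rmsea", "cfi", "tli"))  # fit indices reported below
```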
Results:
CFAs revealed all factor models of the full NPI-Q demonstrated goodness of fit across multiple indices (SRMR = 0.039-0.052, RMSEA = 0.025-0.029, CFI = 0.973-0.983, TLI = 0.967-0.977). Modified forms of the NPI-Q also demonstrated goodness of fit across multiple indices (SRMR = 0.025-0.052, RMSEA = 0.018-0.031, CFI = 0.976-0.993, TLI = 0.968-0.989). Top factor models later tested in bifactor form all demonstrated consistently stronger goodness of fit regardless of whether they were a full form (SRMR = 0.023-0.035, RMSEA = 0.015-0.02, CFI = 0.992-0.995, TLI = 0.985-0.991) or a modified form (SRMR = 0.023-0.042, RMSEA = 0.015-0.024, CFI = 0.985-0.995, TLI = 0.977-0.992). Siafarikas and colleagues’ (2018) 3-factor model demonstrated the best fit among the full-form models, whereas Sayegh and Knight’s (2014) 4-factor model had the best fit among all single-level models, as well as among the bifactor models.
Conclusions:
Although all factor models had adequate goodness of fit, the Sayegh & Knight 4-factor model had the strongest fit among both single-level and bifactor models. Furthermore, all bifactor models had consistently stronger fit than single-level models, suggesting that BPSD are best theoretically explained by a hierarchical, non-nested framework of general and specific contributors to symptoms. These findings also inform consistent use of NPI-Q subscales.
The accurate assessment of instrumental activities of daily living (iADL) is essential for those with known or suspected Alzheimer's disease or related disorders (ADRD). This information guides diagnosis, staging, and treatment planning, and serves as a critical patient-centered outcome. Despite its importance, many iADL measures used in ADRD research and practice have not been sufficiently updated in the last 40-50 years to reflect how technology has changed daily life. For example, digital technologies are routinely used by many older adults and those with ADRD to perform iADLs (e.g., online financial management, using smartphone reminders for medications). The purpose of the current study was to (a) assess the applicability of technology-related iADL items in a clinical sample; (b) evaluate whether technology-based iADLs are more difficult for those living with ADRD than their traditional counterparts; and (c) test whether adding technology-based iADL items changes the sensitivity and specificity of iADL measures for ADRD.
Participants and Methods:
135 clinically referred older adults (mean age 75.5 years) undergoing neuropsychological evaluation at a comprehensive multidisciplinary memory clinic were included in this study [37% with mild cognitive impairment (MCI) and 51.5% with dementia]. Collateral informants completed the Functional Activities Questionnaire (FAQ; Pfeffer, 1982) as well as 11 items created to parallel the FAQ wording that assessed technology-related iADLs such as digital financial management (e.g., online bill pay), everyday technology skills (e.g., using a smartphone; remembering a password), and other technology-mediated activities (e.g., visiting internet sites; online shopping).
Results:
Care partners rated the majority of tech iADL items as applicable. For example, technology skill items were applicable to 90.4% of the sample, and online financial management questions were applicable for 76.4% of participants. Applicability ratings were similar across patients in their 60s and 70s, and lower in those over age 80. Care partners indicated less overall impairment on technology-related iADLs (M = 1.22, SD = .88) than on traditional FAQ iADLs (M = 1.36, SD = .86), t(129) = 3.529, p = .001. A composite of the original FAQ paperwork and bill-pay items (M = 1.62, SD = 1.1) was rated as more impaired than digital financial management tasks (M = 1.30, SD = 1.09), t(122) = 4.77, p < .001. In terms of diagnostic accuracy, tech iADL items (AUC = .815, 95% CI [.731, .890]) appeared to perform comparably to, or slightly better than, the traditional FAQ (AUC = .788, 95% CI [.705, .874]) at separating MCI and dementia, though the difference between the two was not statistically significant in this small pilot sample.
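A minimal R sketch of the item-set comparison and diagnostic-accuracy check, with hypothetical variable names (not the authors' code):

```r
library(pROC)

# Paired comparison: traditional vs. technology-based iADL ratings
t.test(faq$traditional_mean, faq$tech_mean, paired = TRUE)

# Discriminating MCI (coded 0) from dementia (coded 1) with tech iADL items
ci.auc(roc(faq$dementia, faq$tech_mean))  # AUC with 95% CI
```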
Conclusions:
Technology is rapidly changing how older adults and those with ADRD perform a host of iADLs. This pilot study suggests broad applicability of tech iADL items to the lives of those with ADRD and highlights how measuring these skills may identify trends in iADL habits that could mitigate the impact of ADRD on daily functioning. Further, these data suggest the need to refine and improve existing iADL measures to validly capture the evolving technological landscape of those living with ADRD.
Accumulating evidence suggests that corpus callosum development is critically involved in the emergence of behavioral and cognitive skills during the first two years of life and that structural abnormalities of the corpus callosum are associated with a variety of neurodevelopmental disorders. Indeed, by adulthood, ∼30% of individuals with agenesis of the corpus callosum (ACC), a congenital condition resulting in a partially or fully absent corpus callosum, exhibit phenotypic features consistent with autism spectrum disorder (ASD). However, very little is known about developmental similarities and/or differences between infants with ACC and infants who develop ASD. This study describes temperament in infants with ACC during the first year of life in comparison with a neurotypical control group. Additionally, it examines the potential contribution of disrupted callosal connectivity to early expression of temperament in ASD through comparison to children with high familial likelihood of ASD.
Participants and Methods:
Longitudinal ratings of positive and negative emotionality were acquired at 6 and 12 months on the Infant Behavior Questionnaire-Revised across four groups of infants: isolated complete and partial ACC (n=104); high familial likelihood of ASD with (HL+, n=81) and without (HL-, n=282) a confirmed ASD diagnosis; and low-likelihood controls (LL, n=152).
Results:
Overall, the ACC group demonstrated blunted affect, with significantly lower positive and negative emotionality than LL controls at both timepoints. Specifically, the ACC group exhibited lower activity and approach dimensions of positive emotionality at both timepoints, with lower high-intensity pleasure at 6 months and lower vocal reactivity at 12 months. On negative emotionality subscales, the ACC group exhibited lower distress to limitations and sadness at both timepoints, as well as lower falling reactivity at 6 months. The ACC and HL groups did not differ significantly on positive emotionality at either timepoint. However, negative emotionality was lower in the ACC group than the HL- group at both timepoints and lower than the HL+ group at 12 months, with lower distress to limitations and sadness ratings than both HL groups at both timepoints.
Conclusions:
These findings highlight the importance of interhemispheric connections in facilitating active engagement and pursuit of pleasurable activities during the first year of life, as well as expression of sadness and distress to limitations. Notably, similarities between infants with ACC and infants at elevated familial risk of ASD suggest that disrupted callosal connectivity may specifically contribute to reductions in positive emotionality.
Risk factors that contribute to brain pathology and cognitive decline among older adults include demographic factors (e.g., age, educational attainment), genetic factors, health factors, and depression (Plassman et al., 2010). Variability in an individual’s performance across cognitive tasks is referred to as dispersion (Hultsch et al., 2002), which appears sensitive to subtle cognitive impairments associated with neurodegenerative pathology in older adults (Bangen et al., 2019; Kälin et al., 2014). Thaler and colleagues (2015) found that dispersion across domains of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) was a useful indicator of cognitive changes associated with cardiovascular disease and mortality. Also, Manning and colleagues (2021) found that elevated ratings of depression and anxiety in older adults were associated with greater dispersion across neuropsychological testing. The present study aimed to replicate findings that greater dispersion in neuropsychological performance is associated with impaired neurocognitive performance and greater self-reported depression among older adults who present for neuropsychological evaluation with cognitive concerns.
Participants and Methods:
Neuropsychological testing data were obtained from a university hospital. Chart reviews were conducted on 369 participants who met initial criteria (60 years or older with testing data from the RBANS Form A, Wechsler Test of Adult Reading, and Geriatric Depression Scale [GDS]). Retrospective analyses were conducted on a final sample of 293 participants aged 60 to 94 years (M = 74.41, SD = 7.43; 179 women, 114 men). Diagnoses were used for group comparisons between cognitively intact individuals with subjective cognitive complaints (SCC, n = 49), persons with Mild Neurocognitive Disorder (mND, n = 137), and persons with Major Neurocognitive Disorder (MND, n = 107).
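Dispersion here is a coefficient of variation across an individual's index scores. A minimal R sketch with hypothetical RBANS column names (not the authors' code):

```r
# Intraindividual dispersion: coefficient of variation across index scores
idx <- c("imm_mem", "visuospatial", "language", "attention", "del_mem")
dat$cov <- apply(dat[, idx], 1, function(x) sd(x) / mean(x))

# Three-stage hierarchical regression predicting dispersion
m1 <- lm(cov ~ age, data = dat)
m2 <- update(m1, . ~ . + rbans_total)
m3 <- update(m2, . ~ . + gds_total)
anova(m1, m2, m3)  # incremental contribution at each stage
```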
Results:
As expected, higher dispersion was related to lower RBANS Total scores (r = -0.54, p < .001), and significant differences across diagnostic groupings (SCC, mND, MND; F(2, 289) = 29.19, p < 0.001) indicated that variability in performance was an indicator of greater neurocognitive impairment. Contrary to expectations, greater dispersion was very weakly associated with lower reported depressive symptomatology (r = -0.13, p = 0.03). A three-stage hierarchical linear regression was conducted with the RBANS Coefficient of Variation (CoV) as the dependent variable and three predictor variables (age, RBANS Total, GDS Total). Age was not a significant predictor, but both RBANS Total and GDS scores were. The most important predictor was the RBANS Total score, which uniquely explained 21% of the variance in dispersion.
Conclusions:
This study adds to the current literature regarding the clinical utility of dispersion in neuropsychological performance as an indicator of early and subtle neurocognitive impairment. Depressive symptom reporting was expected to help predict the degree of variability, but this factor was only weakly associated with the RBANS CoV.
Limitations of this study include its retrospective use of archival data and the restricted range on some variables of interest. Further research is needed to examine the relative utility of different measures of dispersion and why increased cognitive performance variability is related to neurocognitive impairment and decline.
As part of the Research Domain Criteria (RDoC) initiative, the NIMH seeks to improve experimental measures of cognitive and positive valence systems for use in intervention research. However, many RDoC tasks have not been psychometrically evaluated as a battery of measures. Our aim was to examine the factor structure of 7 such tasks chosen for their relevance to schizophrenia and other forms of serious mental illness. These include the n-back, Sternberg, and self-ordered pointing tasks (measures of the RDoC cognitive systems working memory construct); flanker and continuous performance tasks (measures of the RDoC cognitive systems cognitive control construct); and probabilistic learning and effort expenditure for reward tasks (measures of reward learning and reward valuation constructs).
Participants and Methods:
The sample comprised 286 cognitively healthy participants who completed novel versions of all 7 tasks via an online recruitment platform, Prolific, in the summer of 2022. The mean age of participants was 38.6 years (SD = 14.5, range 18-74), 52% identified as female, and stratified recruitment ensured an ethnoracially diverse sample. Excluding time for instructions and practice, each task lasted approximately 6 minutes. Task order was randomized. We estimated optimal scores for each task, including signal-detection d-prime measures for the n-back, Sternberg, and continuous performance tasks; mean accuracy for the flanker task; the win-stay to win-shift ratio for the probabilistic learning task; and trials completed for the effort expenditure for reward task. We used parallel analysis and a scree plot to determine the number of latent factors measured by the 7 task scores. Exploratory factor analysis with oblimin (oblique) rotation was used to examine the factor loading matrix.
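A minimal R sketch of the factor-retention and EFA steps, assuming a data frame `tasks` holding the 7 task scores (not the authors' code):

```r
library(psych)

fa.parallel(tasks, fa = "fa")                       # scree plot + parallel analysis
efa <- fa(tasks, nfactors = 3, rotate = "oblimin")  # oblique EFA
print(efa$loadings, cutoff = 0.30)                  # suppress small loadings
```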
Results:
The scree plot and parallel analyses of the 7 task scores suggested three primary factors. The flanker and continuous performance task both strongly loaded onto the first factor, suggesting that these measures are strong indicators of cognitive control. The n-back, Sternberg, and self-ordered pointing tasks strongly loaded onto the second factor, suggesting that these measures are strong indicators of working memory. The probabilistic learning task solely loaded onto the third factor, suggesting that it is an independent indicator of reinforcement learning. Finally, the effort expenditure for reward task modestly loaded onto the second but not the first and third factors, suggesting that effort is most strongly related to working memory.
Conclusions:
Our aim was to examine the factor structure of 7 RDoC tasks. Results support the RDoC conception of cognitive control, working memory, and reinforcement learning as separable constructs. However, effort is a factorially complex construct that is not uniquely, or even most strongly, related to positive valence. Thus, there is reason to believe that at least 6 of these tasks are appropriate measures of constructs such as working memory, reinforcement learning, and cognitive control.
To investigate the informative value of nightmares on neurobehavioral functioning in individuals with mild traumatic brain injury (mTBI) beyond general sleep disturbance.
Participants and Methods:
A sample of 146 adults with mTBI (mean age = 45.1±16.0), recruited from a specialized concussion treatment center, underwent an assessment of neurobehavioral functioning using the Behavioral Assessment Screening Tool (BAST), self-reported habitual sleep disturbance and quality (via the Pittsburgh Sleep Quality Index; PSQI), and reported nightmare frequency in the past two weeks.
Results:
Nightmare frequency was the strongest predictor of negative affect (β = .362, p < .001), anxiety (β = .332, p < .001), and impulsivity (β = .270, p < .001) after controlling for sex and age. Sleep disturbance accounted for the greatest variance in depression (β = .493, p < .001), burden from concussion (β = .477, p < .001), and fatigue (β = .449, p < .001) after controlling for sex and age.
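Standardized coefficients of this kind can come from regressions of the following form; a minimal R sketch with hypothetical variable names (not the authors' code):

```r
# Standardized betas: z-score the outcome and continuous predictors,
# adjusting for sex and age
fit <- lm(scale(negative_affect) ~ scale(nightmare_freq) + scale(psqi_total) +
            sex + scale(age), data = mtbi)
summary(fit)
```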
Conclusions:
Nightmares are independently associated with neurobehavioral symptoms and likely have a differential etiology from reported sleep disturbance. Nightmare frequency was more strongly related to positive neurobehavioral symptoms (i.e., added factors that impact functioning, e.g., anxiety), while general sleep disturbance was associated with negative neurobehavioral symptoms (i.e., factors taken away that impact functioning, e.g., lack of energy). Our findings suggest that neuropsychological evaluations of individuals with mTBI should assess sleep disturbance and nightmare frequency as risk factors for neurobehavioral barriers to functioning.