Baseline assessment of cognitive performance is common practice under many concussion management protocols and is required for collegiate athletes by the NCAA. The purpose of baseline cognitive assessment is to understand an athlete’s individual uninjured cognitive performance, as opposed to relying on population normative data. This baseline can then serve as a reference point for recovery after concussion and can inform return-to-play decisions. However, multiple factors, including lack of effort, can contribute to misrepresentation of baseline results, which raises concern about their reliability during return-to-play decision-making. Measuring effort across a continuum, rather than as a dichotomous variable (good versus poor effort), may provide informative insight into cognitive performance at baseline.
Participants and Methods:
Collegiate athletes (n = 231) completed the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) as part of their baseline pre-participation concussion evaluation. ImPACT creates composite scores of Verbal Memory, Visual Memory, Visual-Motor Speed, and Reaction Time. Baseline self-reported symptoms and total hours of sleep the night prior to testing are also collected through ImPACT. ImPACT has one embedded indicator within the program to assess effort, and research has identified an additional three embedded indicators. Athletes were also administered one stand-alone performance validity test, either the Medical Symptom Validity Test (n = 130) or the Rey Dot Counting Test (n = 101), to measure effort independently. Effort was estimated across a continuum (zero, one, two, or three or more failed effort indicators) using both stand-alone and embedded effort indicators. We evaluated the relationships between effort, symptoms, self-reported sleep, and the Reaction Time and Visual-Motor Speed composite scores using linear regression models.
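As a rough illustration of the regression described above, the sketch below regresses a simulated Visual-Motor Speed composite on effort level (treated as a category with zero failures as the reference), symptoms, and sleep. All data and variable names are simulated assumptions, not the study's dataset.

```python
# Sketch of the analysis described above: regressing an ImPACT composite
# on effort level treated as a categorical predictor, plus symptoms and
# sleep. All data are simulated; variable names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 231
df = pd.DataFrame({
    # 0, 1, 2, or 3+ failed effort indicators, coded 0-3
    "effort_fails": rng.integers(0, 4, n),
    "symptoms": rng.poisson(3, n),
    "sleep_hours": rng.normal(7, 1, n),
})
# Simulated Visual-Motor Speed composite that worsens with failed indicators
df["vms"] = 40 - 2.0 * df["effort_fails"] + rng.normal(0, 3, n)

# Treating effort as a categorical predictor lets each level (1, 2, 3+)
# be compared against the zero-failures reference group.
model = smf.ols("vms ~ C(effort_fails) + symptoms + sleep_hours", data=df).fit()
print(model.params)
```

Coding effort categorically, rather than as a single linear term, matches the continuum framing above: each failure level gets its own contrast against athletes who passed every indicator.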
Results:
We found that 121 athletes passed all effort indicators, while 39 athletes failed only one effort indicator, 40 athletes failed two effort indicators, and 31 athletes failed three or four (three+) effort indicators. Self-reported symptoms and total hours of sleep were not related to effort, but Reaction Time and Visual-Motor Speed composites were. Specifically, performance on the Visual-Motor Speed composite was significantly worse for athletes who failed two or three+ effort indicators compared to athletes who did not fail any, and performance on the Reaction Time composite was significantly worse only for athletes who failed three+ effort indicators. Additionally, athletes who failed one or more effort indicators and reported less sleep performed worse on both the Visual-Motor Speed and Reaction Time composites, compared to those who reported less sleep and did not fail any effort indicators.
Conclusions:
Athletes who failed one effort indicator did not perform significantly worse on the Reaction Time and Visual-Motor Speed composites compared to those who passed all effort indicators. However, 31% of athletes failed two or more effort indicators, and these athletes performed worse on cognitive tests, likely due to factors impacting their ability to put forth good effort. These results suggest that effort is more complex than the previously used dichotomous variable and highlight the importance of using several indicators of effort throughout baseline assessments. In addition, the importance of sleep should be emphasized during baseline assessments, especially when effort is questionable.
Since Sherrington’s seminal work, the term interoception has referred to the ability to sense changes in internal bodily states, as opposed to the ability to sense stimuli coming from outside the body. Despite conceptual changes regarding the afferent signals subserving this type of inner perception, the core of this definition is still valid and widely accepted. The critical contribution of internal state perception to self-regulation as well as higher-order cognitive processes has led to the development of psychometric and observational measures that try to capture individual interoceptive skills, focusing especially on the ability to orient attention to internal sensations. Nonetheless, despite growing interest in interoceptive attention (IAtt), little is known about the neurofunctional correlates of our ability to redirect attention to internal sensations and consciously process them, or about potential objective biomarkers of IAtt performance.
Participants and Methods:
This study included 36 volunteers who were asked to complete a heartbeat counting task (HCT), a common IAtt task. During both resting state and the HCT, central electrophysiological (EEG, 32 electrodes) and cardiovascular (ECG, lead I) activity was recorded. eLORETA was used to estimate both task-related and resting-state intracortical sources of EEG signals. Statistical non-parametric mapping (SnPM) was used to draw and investigate contrast statistical maps between rest- and task-related cortical current density.
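The contrast logic behind SnPM can be illustrated with a simple paired sign-flip permutation test using a maximum-statistic correction across sources. The data below are simulated, and this is only a conceptual sketch of the permutation idea, not the eLORETA/SnPM pipeline itself.

```python
# Minimal sketch of a paired permutation contrast in the spirit of SnPM:
# for each cortical source, task-vs-rest differences are sign-flipped
# across subjects to build a null distribution of the maximum statistic.
# Data are simulated; source count and effect sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_sources = 36, 50
rest = rng.normal(0, 1, (n_subjects, n_sources))
task = rest + rng.normal(0.1, 1, (n_subjects, n_sources))
task[:, 0] += 0.8  # one source with a genuine task-related increase

diff = task - rest                      # paired differences
t_obs = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n_subjects))

n_perm = 2000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1, 1], size=(n_subjects, 1))   # random sign flips
    d = diff * signs
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_subjects))
    max_null[i] = np.abs(t).max()       # max over sources controls FWE

# Family-wise corrected p-value per source: share of permutations whose
# maximum statistic meets or exceeds the observed statistic
p_corr = (np.abs(t_obs)[None, :] <= max_null[:, None]).mean(0)
print(p_corr[0])
```

Using the maximum statistic across all sources as the null distribution is what gives this family of tests its strong control of family-wise error without assuming Gaussian data.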
Results:
Contrast analyses comparing HCT and resting revealed higher Alpha frequency current density estimates during the task, with primary cortical seed in the right parahippocampal gyrus. Regression analyses of the relationship between IAtt scores and task-related changes in intracortical current density during HCT revealed a positive relationship for the Beta frequency bands with primary cortical seeds in the cingulate gyrus and insula.
Conclusions:
Findings add to available literature by further specifying the electrophysiological signature of interoceptive attentiveness, and suggest specific electrophysiological markers as objective measures of individual IAtt skills.
Cognitive Impairment (CI) is estimated to affect more than 16 million people, the majority of whom are 65 and older (Centers for Disease Control and Prevention, 2011). Moreover, there are about 5.8 million Americans currently living with the most common type of dementia, Alzheimer’s Disease, a number projected to increase to 13.8 million by 2050 (Alzheimer’s Association, 2020). Clearly, the ability to detect early indicators of and risk factors for brain disease, and to differentiate these from typical cognitive aging, is crucial to supporting healthy aging. To date, there are few sensitive assessment tools for detecting normal and abnormal cognitive change that can be widely deployed in diverse research designs and populations. In addition, clinicians and researchers struggle to conduct assessments with some of the most vulnerable populations because of access issues (e.g., rural communities, rare disease populations), which exacerbates healthcare disparities for these groups. Remote digital assessments can help overcome these barriers by enabling repeated testing in naturalistic conditions, reducing participant burden and expense, and increasing research accessibility for under-represented populations.
This symposium will begin with an overview of the Mobile Toolbox (MTB), an app-based assessment tool and technology platform developed to address challenges in conducting longitudinal cognitive assessments over the adult lifespan. MTB enables completely remote, self-administered assessment using participants’ own smartphones with additional capabilities for study set-up and data management and analysis. Our second presentation describes the initial evidence for the reliability and validity of the eight core Mobile Toolbox Cognitive tests, as well as associations with age in a healthy population. The third presentation will describe one site’s experience using the MTB platform in a large, remote longitudinal study. The final presentation will consider the issues involved when studies utilize both in-person and remote assessment. Using the NIH Toolbox V3 Examiner version, which inspired several of the MTB tests, we will review the advantages and disadvantages of including remote assessments alone and in combination with face-to-face examination. To conclude, we will summarize the state of the current research and recommendations for neuropsychologists interested in using MTB in their future work.
Glioblastomas, Grade 4 astrocytomas, comprise about 60% of all astrocytomas and have a median survival between 14 and 16 months. The extent of resection impacts prognosis and must be delicately balanced against preserving the patient's functional status. As preoperative imaging and intraoperative techniques improve to maximize safe operative resection, thorough neuropsychological evaluation can aid in assessing cognitive decline and quality of life pre- and post-treatment. In light of the tumors' progressive nature and potential presence in precarious brain locations, it is imperative that the functional burden of the various presentations of glioblastomas be understood. Given the limited data on cognitive presentations of glioblastomas, we present a case study describing the neuropsychological and neuroradiologic profile of a patient with a left temporal glioblastoma (Grade 4 astrocytoma).
Participants and Methods:
The patient signed consent for clinical evaluation and research. At the time of evaluation, he was 68 years old with a master's degree and was working at multiple start-up companies. He began noticing subtle cognitive functioning changes approximately two months prior with difficulty understanding information. His challenges progressed to difficulty composing emails, word-finding issues, and some slurring and mispronunciations. He was diagnosed with a brain tumor after an emergency MRI was performed. He participated in a neuropsychological evaluation just prior to surgery. The evaluation included a battery of neuropsychological tests examining attention, processing speed, executive functioning, learning and memory, language functioning, visuospatial functioning, motor functioning, and mood.
Results:
The imaging results revealed a non-enhancing intra-axial mass in the left superior temporal lobe with surrounding edema; rare scattered nonspecific T2 hyperintensities were also noted. Test scores showed variable motor functioning and, relative to his premorbid level of functioning, deficits in attention for complex information, executive functioning (i.e., motor planning and sequencing, phonemic fluency), language functioning, visuospatial functioning, and learning and memory, suggesting diffuse brain involvement consistent with imaging findings of edema.
Conclusions:
Taken together, the results of the evaluation and imaging were suggestive of a level of cognitive decline that is more than expected with normal aging. Moreover, there was a lack of evidence representative of a lateralized profile. Notably, the evaluation was conducted before resection surgery, and therefore, the patient continued to experience significant brain edema due to the tumor. Although medication may have contributed to dysfunction, particularly with motor and cognitive slowing, it is not likely that it explained his presentation entirely. As such, the evaluation results were suggestive of neurocognitive dysfunction, which was partially attributable to the tumor and edema displacing neuronal tissue. Given the potential for improvement following tumor resection and secondary decline resulting from recurrence or treatment, it is crucial to have a baseline and the ability to map out higher order functioning, including frontal and temporal lobe functioning. Ultimately, as the field continues to look toward long-term survival for patients with currently lethal brain tumors, the goal is to achieve maximum resection with minimal neurocognitive loss.
Mind-wandering is defined as a spontaneous shift of attention away from the external environment to inner thoughts. With mind-wandering being a ubiquitous phenomenon, there has been increasing interest in examining the role these spontaneous, and often unintentional, thought processes may play in metrics of cognitive and psychological health. However, much of this literature is mired in inconsistencies, potentially stemming from the use of variegated experimental methods and the quantification of mind-wandering through different metrics. For example, mind-wandering has been investigated through endorsement of self-report probes embedded in tasks of sustained attention, with participants asked to endorse whether they were engaging in task-unrelated thoughts or task-related, but evaluative, thoughts about the task (task-related interference). Other studies have instead focused on behavioral metrics of task performance, like omission and commission errors, variability in response time (RTCV), and speeding or slowing prior to errors, to quantify mind-wandering. In this study, employing a large sample of older adults and implementing the novel technique of partial least squares regression, we examined the combined and simultaneous effect of different mind-wandering metrics in explaining variance in fluid cognition and psychological health in older adults.
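The behavioral metrics named above (omission errors, commission errors, RTCV) are straightforward to compute from trial-level data. The sketch below uses simulated Go/No-Go trials; the column names are illustrative assumptions, not those of any specific task file.

```python
# Computing behavioral mind-wandering metrics from a simulated trial log.
# All data and column names are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_trials = 200
trials = pd.DataFrame({
    "is_go": rng.random(n_trials) < 0.8,        # Go vs No-Go trials
    "responded": rng.random(n_trials) < 0.9,    # whether a key press occurred
    "rt": rng.lognormal(-0.7, 0.25, n_trials),  # response time in seconds
})

go = trials[trials.is_go]
nogo = trials[~trials.is_go]

omissions = (~go.responded).sum()        # missed Go trials
commissions = nogo.responded.sum()       # responses on No-Go trials

rts = go.loc[go.responded, "rt"]
rtcv = rts.std(ddof=1) / rts.mean()      # response-time coefficient of variation

print(omissions, commissions, round(rtcv, 3))
```

Dividing the response-time standard deviation by its mean makes RTCV comparable across participants who differ in overall speed, which is why it is preferred over raw variability in this literature.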
Participants and Methods:
One hundred and fifty older adults with normal cognition or mild cognitive impairment were administered a Go/No-Go Task (GNG) with embedded mind-wandering probes, the Conners CPT-3, the NIH Toolbox-Cognition Battery (NIHTB-CB), and the WHO Quality of Life Assessment Brief Version (WHOQOL-BREF) at baseline in a clinical trial examining the impact of two mind-body interventions on aging. Based on previous research, the following variables were considered behavioral measures of mind-wandering: quantity of omission and commission errors, RTCV, pre-error speeding, and post-error slowing. Percentage of self-reported task-related interference (i.e., evaluating current performance) and task-unrelated thoughts were included as self-report measures of mind-wandering. These mind-wandering measures, along with demographic variables (age, sex, and education), were regressed using Partial Least Squares Regression to determine the impact of mind-wandering measures on fluid cognition (NIHTB-CB) and perceived psychological well-being (WHOQOL-BREF). Validation tests were completed to assess model fit.
Results:
A single latent factor explained 26% of the variance in fluid cognition (p=0.0001). Higher levels of age, errors of omission on both tasks, and task-related interference were all associated with worse fluid cognition, whereas task-unrelated thoughts were associated with better fluid cognition.
A two-factor latent model explained 12% of the variance in perceived psychological well-being (p=0.0004). Age and task-unrelated thoughts were positively associated with psychological well-being. In contrast, errors of omission on both tasks, response time variability on the CPT, and task-related interference were negatively associated with perceived psychological well-being.
Conclusions:
Mind-wandering is associated with fluid cognition and perceived psychological well-being in older adults. Select behavioral measures were better than self-report measures at linking mind-wandering to fluid cognition and perceived psychological well-being. Interestingly, task-unrelated thoughts, but not task-related interference, were positively associated with fluid cognition, supporting the cognitive resource-based account of mind-wandering. The results of our study provide novel insights into differential relationships between various metrics of mind-wandering and cognitive and psychological health.
Patients with early Alzheimer Disease (AD) and Mild Cognitive Impairment of the Amnestic type (MCI-A) have been reported to show large variability of tapping scores. Factors that contribute to that variability remain undetermined. This preliminary study aimed to identify predictors of finger tapping variability in older adults evaluated for a neurodegenerative memory disorder. Based on earlier research with normally functioning adults, we predicted that the number of “invalid” tapping responses (i.e., failure of the index finger to adequately lift off the tapping key once it is depressed to produce the next number on a mechanical counter) and female sex would predict finger tapping variability, but age and educational level would not.
Participants and Methods:
This preliminary study included 4 groups of participants: 8 healthy controls (HC; 3 males; 73±7 years), 12 persons with subjective memory complaints (SMC; 3 males; 69±5 years), 12 with MCI-A (7 males; 76±5 years), and 7 with early AD (5 males; 75±6 years). All participants were administered a modified version of the Halstead Finger Tapping Test (HFTT). Mean tapping score, range of tapping scores (i.e., a measure of variability), and number of invalid taps across 7 trials in each hand were calculated. ANOVA was performed for the HFTT metrics with a main effect of group. Tukey HSD tests were used for post hoc comparisons between groups. Multiple regression analysis was performed to determine the degree to which the number of invalid tapping responses, sex, age, and educational level predicted finger tapping variability across all 4 groups.
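The analysis pipeline described above (one-way ANOVA with a group effect, Tukey HSD post hocs, and a multiple regression on tapping variability) can be sketched with simulated data. Group sizes follow the abstract, but all scores and variable names are illustrative assumptions.

```python
# Sketch of the group comparison and regression described above, with
# simulated tapping data for four groups. Sex and education are omitted
# from the regression here for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
groups = ["HC"] * 8 + ["SMC"] * 12 + ["MCI-A"] * 12 + ["AD"] * 7
# Simulated non-dominant-hand range scores, higher (more variable) in AD
means = {"HC": 5, "SMC": 5, "MCI-A": 7, "AD": 10}
df = pd.DataFrame({
    "group": groups,
    "range_score": [rng.normal(means[g], 2) for g in groups],
    "invalid_taps": rng.poisson(3, len(groups)),
    "age": rng.normal(73, 6, len(groups)),
})

# One-way ANOVA with a main effect of group
F, p = f_oneway(*[df.loc[df.group == g, "range_score"] for g in means])

# Tukey HSD post hoc comparisons between groups
tukey = pairwise_tukeyhsd(df.range_score, df.group)
print(tukey.summary())

# Multiple regression: does invalid-tap count predict variability?
model = smf.ols("range_score ~ invalid_taps + age", data=df).fit()
print(round(p, 4), round(model.rsquared, 2))
```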
Results:
Mean tapping score did not vary significantly across groups in the dominant [F(3, 35) = 0.633, p = 0.599] or non-dominant [F(3, 35) = 2.345, p = 0.090] hand. Range score approached a significant difference between groups in the dominant hand [F(3, 35) = 2.745, p = 0.058], with a clear significant effect of group on range score in the non-dominant hand [F(3, 35) = 4.078, p = 0.014]. Range score in the non-dominant hand was significantly higher in the AD group compared to SMC (p = 0.018) and HC (p = 0.024). Regression analysis revealed statistically significant findings for the dominant hand (R2 = 0.327, F(4, 34) = 4.130, p = 0.008) and for the non-dominant hand (R2 = 0.330, F(4, 34) = 4.180, p = 0.007). For both the dominant and non-dominant hands, the number of invalid taps significantly predicted range score (β = 0.453, p = 0.044, and β = 0.498, p = 0.012, respectively). Sex, age, and education years did not predict range scores.
Conclusions:
Variability of finger tapping in patients evaluated for neurodegenerative memory disorders and age-matched controls is predicted by the number of invalid tapping responses (accounting for over 30% of the variance), but not by demographic variables, in this clinical sample. Neurodegenerative disorders may eliminate a sex effect.
The Coma Recovery Scale-Revised (CRS-R) is the gold standard assessment of adults with disorders of consciousness (DoC); however, few studies have examined the psychometric properties of the CRS-R in pediatric populations. This study aimed to demonstrate preliminary intra-rater and inter-rater reliability of the CRS-R in children with acquired brain injury (ABI).
Participants and Methods:
Participants included 3 individuals (ages 10, 15, and 17 years) previously admitted to an inpatient pediatric neurorehabilitation unit with DoC after ABI who were followed in an outpatient brain injury clinic due to ongoing severe disability. ABI etiology included traumatic brain injury (TBI; n=2) and encephalitis (n=1). Study participation took place on average 4.6 years after injury (range 2-9). The Glasgow Outcome Scale-Extended, Pediatric Version (GOS-E Peds), a measure of outcome after pediatric brain injury, was administered as part of screening. Two participants were placed in the GOS-E Peds “lower severe disability” category (i.e., score of 6) and one was placed in the “upper severe disability” category (i.e., score of 5). The CRS-R includes 6 subscales measuring responsivity: Auditory (range 0-4), Visual (range 0-5), Motor (range 0-6), Oromotor/Verbal (range 0-3), Communication (range 0-2), and Arousal (range 0-3), with higher scores indicating higher-level function. Subscales are totaled for a CRS-R Total score. Behaviors shown during the CRS-R are used to determine state of DoC [Vegetative State (VS), Minimally Conscious State (MCS), or emergence from a minimally conscious state (eMCS)] based on the 2002 Aspen Guidelines. Participants were administered the CRS-R three consecutive times on the same day. Administrations were completed by two raters in this order: Rater 1 (1A), Rater 1 (1B), and Rater 2. Intra-rater reliability was assessed as percent agreement across the 6 subscales between Raters 1A and 1B. Inter-rater reliability was assessed as percent agreement across the 6 subscales between Raters 1A and 2.
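Percent agreement, as used here for intra- and inter-rater reliability, is simply the share of subscales on which two administrations produced identical scores. The subscale values below are hypothetical, not the study's ratings.

```python
# Percent agreement across the six CRS-R subscales; scores are hypothetical.
subscales = ["Auditory", "Visual", "Motor", "Oromotor/Verbal",
             "Communication", "Arousal"]

def percent_agreement(ratings_a, ratings_b):
    """Share of subscales on which two administrations gave identical scores."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100 * matches / len(ratings_a)

rater_1a = [4, 5, 6, 3, 2, 2]   # hypothetical subscale scores
rater_1b = [4, 5, 6, 3, 2, 2]
rater_2  = [4, 5, 6, 3, 2, 1]

print(percent_agreement(rater_1a, rater_1b))  # intra-rater
print(percent_agreement(rater_1a, rater_2))   # inter-rater
```

Percent agreement is easy to interpret but does not correct for chance agreement, which is one reason larger samples with more varied DoC states would allow stronger reliability statistics.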
Results:
Mean CRS-R Total score for Rater 1A was 22 (SD=1.73, range 20-23), Rater 1B was 22 (SD=1.73, range 20-23), and Rater 2 was 21.33 (SD=2.08, range 19-23). Intra-rater reliability was 100% and inter-rater reliability was 94% across all subscales. All participants were deemed eMCS at all 3 ratings.
Conclusions:
Data from this very small sample of children suggest that the CRS-R demonstrates both intra-rater and inter-rater reliability in patients with a history of DoC after ABI. Given that all children were at the high end of the scale (eMCS), further research is needed with a larger sample of children with a range of states of DoC.
Social determinants of health (SDOH) are social conditions (e.g., employment, access to healthcare, quality schools) which are shown by a growing body of literature to impact many health outcomes, including cognition. The development of community-level measures including the Child Opportunity Index (COI) have allowed for increased understanding of the resources and conditions in neighborhoods and their impact on children’s health. Given the limited existing research on how neighborhood factors impact cognitive development, this study aimed to examine associations between neighborhood context (COI) and cognitive outcomes in children and adolescents who presented for neuropsychological evaluations.
Participants and Methods:
Participants included 4,633 youth (ages 2-22; M = 10.8 years; SD = 4.1 years; 63% Male; 33% with a medical condition involving the central nervous system [CNS]) living in the DC-VA-MD-WV Metro Area who presented to an outpatient clinic for evaluation and completed an intellectual functioning (IQ) measure (88% Wechsler, 11% DAS, <1% Leiter, <1% RIAS). COI values were extracted from electronic medical records based on home address. COI values include an overall index and three domain scores in educational (educational access, quality, and outcomes), health/environment (access to healthy food, healthcare, and greenspace) and social/economic (income, employment, poverty); higher scores indicate higher opportunity. Using metro-based norms, children from all opportunity levels were represented (14% Very Low, 13% Low, 18% Moderate, 21% High, 34% Very High). Multiple regression analyses were conducted to examine main effect associations between COI and Full-Scale IQ (FSIQ), Verbal IQ (VIQ), and Non-Verbal IQ (NVIQ) and explore moderation of age, gender, and medical condition on these associations. Additional regression analyses examined these relationships for the three COI domains.
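The moderation analyses described above amount to a regression with an interaction term. The sketch below tests an age-by-COI interaction on a simulated IQ outcome; all data and variable names are simulated assumptions, not the study's records.

```python
# Sketch of a moderation analysis: COI predicting IQ, with an age x COI
# interaction. Data are simulated; variable names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 4633
df = pd.DataFrame({
    "coi": rng.normal(0, 1, n),            # standardized overall COI
    "age": rng.uniform(2, 22, n),
    "male": rng.integers(0, 2, n),
    "cns_condition": rng.random(n) < 0.33,
})
# Simulated FSIQ with a COI effect that grows slightly with age
df["fsiq"] = (100 + 5 * df.coi + 0.2 * df.coi * (df.age - 10)
              + rng.normal(0, 12, n))

# 'coi * age' expands to coi + age + coi:age; the coi:age coefficient
# is the moderation effect of interest.
model = smf.ols("fsiq ~ coi * age + male + cns_condition", data=df).fit()
print(model.params["coi:age"].round(3))
```

A positive `coi:age` coefficient corresponds to the pattern reported below: the association between neighborhood opportunity and cognitive outcomes strengthens at older ages.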
Results:
Controlling for age, gender, and medical condition, neighborhood opportunity was positively associated with cognitive function (FSIQ: β=0.198; VIQ: β=0.202; NVIQ: β=0.148; p’s <0.01). Models accounted for approximately 10% to 14% of the variance in cognitive outcomes (FSIQ: F[6, 4476]=180.331, Adj. R2=0.138; VIQ: F[6, 4556]=161.931, Adj. R2=0.124; NVIQ: F[6, 4548]=123.893, Adj. R2=0.098). Age moderated the association between overall COI and cognitive outcomes (FSIQ: β=0.005, p=0.018; VIQ: β=0.005, p=0.043; NVIQ: β=0.005, p<0.01) such that the association between neighborhood opportunity and cognitive outcomes was stronger at older ages, though this was a small effect. When examining subdomains of COI, cognitive outcomes were associated with educational (FSIQ: β=0.094; VIQ: β=0.099; NVIQ: β=0.078; p’s <0.01) and social/economic opportunity (FSIQ: β=0.115; VIQ: β=0.121; NVIQ: β=0.084; p’s <0.01) but not health/environmental opportunity (FSIQ: β=-0.001, p=0.991; VIQ: β=-0.008, p=0.581; NVIQ: β=-0.008, p=0.553). Medical diagnosis moderated the association between social/economic opportunity and FSIQ, with a stronger association between IQ and COI in youth with a medical diagnosis (β=-0.071, p<0.05).
Conclusions:
These findings demonstrate the importance of neighborhood factors, especially educational and social/economic opportunities, for cognitive development. Children living in higher-opportunity neighborhoods showed higher cognitive functioning. Older age and CNS-involved medical conditions were associated with higher risk in the context of reduced neighborhood opportunities. These findings emphasize the need for advocacy and other efforts to improve community resources (e.g., access to early childhood education) to address inequities in cognitive development.
Given that African American older adults are disproportionately at risk for the development of dementia, identification of sensitive risk and protective factors is of high importance. Subjective decline in cognition is a potentially easy-to-assess clinical marker, as it has been previously associated with increased risk of converting to MCI and/or dementia. Subjective decline in cognition is complex, though, in that it has also been associated with psychosocial factors. Given this, and the fact that the bulk of research on subjective decline in cognition has been conducted in older white adults, research in diverse samples is needed. The present study sought to address these gaps by examining interactions between race and psychosocial risk (dysphoria) and protective (social activity) factors in the prediction of subjective cognition.
Participants and Methods:
Older white (n = 350) and African American (n = 478) participants completed questionnaires via Qualtrics Panels (M age = 65.9). Subjective decline in cognition was assessed via the Multifactorial Memory Questionnaire (MMQ). Dysphoria was assessed via the Inventory of Depression and Anxiety Symptoms-II Dysphoria subscale (IDAS). Frequency of late-life social activity was assessed via a validated series of questions used by the Rush Alzheimer’s Disease Center. Race, dysphoria, late-life social activity, and the interactions between race and dysphoria and between race and social activity were analyzed as predictors of subjective decline in cognition via linear regression.
Results:
The overall model accounted for a significant portion of the variance in subjective decline in cognition, F(6, 713) = 38.38, p < .01, with an R2 of .24. The interaction between race and dysphoria was significant, such that the relationship between dysphoria and subjective decline in cognition was stronger for older adults who are African American. Race, dysphoria, social activity, and the interaction between race and social activity were not significant predictors.
Conclusions:
While dysphoria and related negative affect variables have been previously associated with subjective cognition, interactions with race are rarely analyzed. Our results show that the relationship between dysphoria and subjective decline in cognition was stronger for African American older adults. This result is of clinical importance, as dysphoria is central to many internalizing disorders, which have been associated with subjective cognition and the development of MCI and dementia. Future research should seek to analyze drivers of this association and whether interventions for dysphoria may reduce subjective decline in cognition for African American older adults.
To investigate differences in perceived unmet needs among racial/ethnic groups in a post-acute brain injury sample referred to Resource Facilitation (RF).
Participants and Methods:
This study used a retrospective chart review sourced from a clinical database serving chronic outpatients in the Midwest. The main outcome measure was the Survey of Unmet Needs and Service Use (SUNSU). The sample consisted of N = 455 subjects and included only a small number of Hispanic participants (n = 7); therefore, African American and Hispanic groups were combined into a total minority sample (n = 84). Clinical disorders included in the study were ABIs from stroke, anoxic injury, ruptured aneurysm, or tumor resection surgery. Eligibility criteria included admission into an RF program, a vocational goal, and a diagnosis of a moderate to severe TBI or other ABI. Key sociodemographic features included age, race, ethnicity, education, and sex.
Results:
Significant differences were found between ethnic groups (white non-Hispanics and the minority group) in terms of years of education (p < .01). White non-Hispanics had higher education (M=13.39, SD=2.23), reported significantly more rural addresses (40.2%, p < .01), and had private insurance coverage more frequently than the minority group (33.7%, p < .01). The full model was statistically significant, R2 = .077, F(4, 450) = 9.387, p < .0001; adjusted R2 = .069. The addition of ethnicity led to a statistically significant increase in R2 of .019, F(1, 450) = 9.025, p < .0005.
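The hierarchical step reported above, testing the R2 increase when ethnicity is added to a covariate-only model with an F-change test, can be sketched as follows. All data and variable names are simulated assumptions, not the study's records.

```python
# Sketch of a hierarchical regression step: compare a covariate-only model
# with a model that adds minority status, using an F test on the R2 change.
# Data are simulated; variable names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 455
df = pd.DataFrame({
    "private_insurance": rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "minority": rng.integers(0, 2, n),
})
# Simulated unmet-needs count, higher without insurance and for minority status
df["unmet_needs"] = (3 - df.private_insurance + 1.5 * df.minority
                     + rng.normal(0, 2, n))

base = smf.ols("unmet_needs ~ private_insurance + employed + rural",
               data=df).fit()
full = smf.ols("unmet_needs ~ private_insurance + employed + rural + minority",
               data=df).fit()

# compare_f_test performs the F test for the restricted (base) vs full model,
# equivalent to testing the significance of the R2 change
f_change, p_change, df_diff = full.compare_f_test(base)
print(round(full.rsquared - base.rsquared, 3), round(f_change, 2))
```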
Conclusions:
Ethnicity was found to be a predictive factor for greater unmet needs even after controlling for insurance, employment status, and urbanicity. RF’s success rate in providing culturally competent services to different racial/ethnic groups is currently unknown; such services should consider factors such as primary language spoken, immigration status, and additional ethnocultural factors that could deter accurate reporting of unmet needs by minoritized groups. Future studies should investigate barriers in referring to and meeting eligibility for this program and analyze post-treatment data to determine if the impact of racial, geographic, and insurance disparities is mitigated with RF treatment.
We aim to highlight a unique case that required adaptation of a neuropsychological battery used as part of a pre-surgical workup for medically refractory epilepsy, to meet the needs of a culturally and linguistically-diverse patient with visual impairment.
Participants and Methods:
A comprehensive pre-surgical neuropsychological evaluation was conducted for a 34-year-old Spanish-speaking patient with a past medical history of epilepsy, hydrocephalus, and a subependymal giant cell astrocytoma resection with subsequent complete blindness. EEG findings demonstrated left frontal dysfunction. The evaluation utilized components from the Neuropsychological Screening Battery for Hispanics (NeSBHIS) as well as additional supplemental Spanish-language assessments. Due to the patient’s visual impairment, visuospatial measures could not be administered. A hand dynamometer was used in place of the Grooved Pegboard Test.
Results:
Results from the evaluation indicated a generally intact cognitive profile with a few observed deficits. Relative and normative weaknesses were identified on tasks of verbal learning. His initial learning of a list of orally presented words was in the Low Average range, where he demonstrated a positive though somewhat flat learning profile. His performances on short- and long-delay free recall tasks were in the Exceptionally Low range. With a recognition format, he performed within normal limits and made no false positive errors. Importantly, during the initial learning of the word list, the patient demonstrated a significant number of repetitions (13) and semantically related intrusions (6). These likely led to downstream difficulties encoding information; however, he displayed a minimal loss of information over a delay. Similarly, his immediate and delayed recall of an orally presented story fell in the Exceptionally Low range. Additional relative weaknesses were observed on tasks of working memory (Low Average range) and on a task of phonemic fluency (Below Average range). This performance was a notable contrast to his performance on tasks of semantic fluency, which ranged from the Low Average to Average range. On a task of motor functioning, grip strength performances were intact bimanually (Low Average to Average range) without a significant asymmetry between his left and right hands. Lastly, formal assessment of emotional functioning on self-report measures revealed minimal depression, minimal anxiety, and no significant quality of life concerns.
Conclusions:
Taken together, the weaknesses observed in the domains of verbal learning, working memory, and phonemic fluency, in addition to the learning profile observed during the verbal encoding task, suggest that his overall profile is indicative of dominant frontal systems dysfunction. This finding was concordant with prior EEG and MRI studies. Notably, given the patient’s visual impairment, visuospatial measures could not be administered, and lateralization was unable to be fully assessed given the abbreviated battery. The neuropsychological battery used for this evaluation was based on established guidelines, and while there were limitations in the administration of the present battery, it is imperative to highlight the necessity and feasibility of adapting protocols to best capture data in culturally underrepresented and visually impaired populations.
Approximately half of people living with HIV (PWH) experience HIV-associated neurocognitive disorders (HAND), yet HAND often goes undiagnosed. There is an ongoing need to find efficient, cost-effective ways to screen for HAND and monitor its progression in order to intervene earlier in its course and more effectively treat it. Prior studies that analyzed brief HAND screening tools have demonstrated that certain cognitive test pairs are sensitive to HAND cross-sectionally and outperform other screening tools such as the HIV Dementia Scale (HDS). However, few studies have examined optimal tests for longitudinal screening. This study aims to identify the best cognitive test pairs for detecting cognitive decline longitudinally.
Participants and Methods:
Participants were HIV+ adults (N=132; ages 25-68; 59% men; 92% Black) from the Temple/Drexel Comprehensive NeuroHIV Center cohort. Participants were currently well treated (98% on cART, 92% with undetectable viral load, and mean current CD4 count=686). They completed comprehensive neurocognitive assessments longitudinally (328 total visits, average follow-up time=4.9 years). Eighteen participants (14% of the cohort) demonstrated significant cognitive decline, defined as a decline in global cognitive z-score of 0.5 (SD) or more. In receiver operating characteristic (ROC) analyses, tests with an area under the curve (AUC) of greater than .7 were included in subsequent test pair analyses. Further ROC analyses examined the sensitivity and specificity of each test pair in detecting significant cognitive decline. Results were compared with the predictive ability of the Modified HIV Dementia Scale (MHDS).
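The ROC screening step above ranks candidate tests and test pairs by how well their change scores separate decliners from non-decliners. As a minimal stdlib-only sketch (not the authors' code), the AUC can be computed directly from its Mann-Whitney formulation, with a hypothetical pair score formed by averaging the change scores of the two tests in a pair:

```python
def auc(decliner_scores, stable_scores):
    """AUC as P(decliner score > stable score), counting ties as 0.5
    (the Mann-Whitney formulation of the area under the ROC curve)."""
    wins = ties = 0
    for d in decliner_scores:
        for s in stable_scores:
            if d > s:
                wins += 1
            elif d == s:
                ties += 1
    return (wins + 0.5 * ties) / (len(decliner_scores) * len(stable_scores))


def pair_score(change_a, change_b):
    """Hypothetical pair score: mean change across the two tests in a pair."""
    return [(a + b) / 2.0 for a, b in zip(change_a, change_b)]
```

Under this scheme, a test would survive the first screening stage if its AUC exceeded .7, mirroring the threshold stated in the Methods.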
Results:
The following test pairs demonstrated the best balance between sensitivity and specificity in detecting global cognitive decline: Grooved Pegboard dominant hand (GPD) and category fluency (sensitivity=.89, specificity=.60, AUC=.75, p<.001), GPD and Coding (sensitivity=.76, specificity=.70, AUC=.73, p<.001), letter fluency and Trail Making Test (TMT) B (sensitivity=.82, specificity=.63, AUC=.73, p<.001), and GPD and TMT B (sensitivity=.81, specificity=.64, AUC=.73, p<.001). Change in MHDS predicted significant decline no better than chance (sensitivity=.61, specificity=.47, AUC=.53, p=.65).
Conclusions:
Several cognitive test pairs, particularly those that include GPD, are sensitive to HIV-associated cognitive change, and far more sensitive and specific than the MHDS. Cognitive test pairs can serve as valid, rapid, cost-effective screening tools for detecting cognitive change in PWH, thereby better enabling early detection and intervention. Future research should validate the present findings in other cohorts and examine the implementation of test pair screenings in HIV care settings. Most of the optimal tests identified are consistent with the well-established impact of HAND on frontal-subcortical motor and executive networks. The utility of category fluency is somewhat unexpected as it places more demands on temporal semantic networks; future research should explore the factors driving this finding, such as the potential interaction of HIV with aging and neurodegenerative disease.
Brain mapping is critical in reducing risk for cognitive morbidity in epilepsy and brain tumor surgery. Mapping using functional MRI, and extra- and intraoperative electrical stimulation, requires not only a high level of expertise in functional neuroanatomy but also an understanding of individual patient characteristics that can impact mapping results and post-operative outcome. Patients can vary considerably with respect to their cognitive status going into surgery. The neuroanatomy of the disease, age and developmental level, and cultural and language differences can all influence patients' performance during brain mapping and impact surgical decision making. The purpose of this session is to discuss the importance of taking a highly individualized approach to brain mapping, focusing on anatomical considerations and individual patient differences in task selection and data interpretation. We will cover language mapping in patients who speak more than one language. Practical information will be provided to help guide informed task selection through illustrative case presentations that highlight the need for individualized brain mapping.
Upon conclusion of this course, learners will be able to:
1. Discuss informed task selection based on cortical and subcortical functional neuroanatomy
2. Explain how functional maps change with normal development and factors that should be considered when interpreting results for presurgical planning
3. Assess differences between the bilingual and monolingual brain, factors that modulate the neuroanatomical representation of language in bilinguals and strategies in mapping multiple languages for surgical planning
On traditional pattern separation tasks, older adults perform worse than younger adults when identifying similar objects but perform equally well when recognizing repeated objects. When objects are superimposed on semantically related scenes, older adults are influenced by the context to a greater degree than younger adults, leading to errors when identifying similar objects. However, in everyday life, people rarely need to differentiate between two perceptually similar objects. Therefore, we developed a task using short stories to represent similar events people may experience in daily life. Our goal was to investigate the influence of context, detail-type, and age on memory performance.
Participants and Methods:
Twenty-one older and 18 younger adults listened to 20 short stories taking place in either a coffee shop or library, each paired with a unique picture (i.e., context). Participants were asked to imagine the story taking place within the picture. Approximately 20 minutes later, participants answered a yes/no question about a detail from a story superimposed on different contexts. The different context conditions were (1) the same picture from the original story, (2) a similar picture (i.e., a different library or coffee shop picture), (3) a dissimilar picture (i.e., a library picture instead of a coffee shop picture), or (4) a control using a Fourier-transform (FT) image without any spatial-context information. Questions either asked about an identical or similar detail from the story.
Results:
Correct answers were analyzed using a 4x2x2 repeated measures ANOVA including context (same, similar, dissimilar, and FT), detail type (identical and similar), and age (younger and older adults). Overall, younger adults were more accurate than older adults, F(1,37)=23.4, p<0.001. However, surprisingly, the context and detail type made no difference in accuracy (F’s<1.1). A similar model was used to analyze reaction times. Younger adults were faster than older adults, F(1,37)=23.4, p<0.001. Participants of both ages were faster at correctly responding to the identical detail than the similar detail, F(1,114)=62.87, p<0.001. Context also impacted reaction time, F(3,114)=7.97, p<0.001. All participants were faster while viewing same and similar contexts compared to both the dissimilar and FT contexts (t(39)’s>2.20, p’s<0.05).
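For a within-subject factor with only two levels, such as detail type (identical vs. similar), the repeated-measures F statistic is simply the square of a paired t statistic. A small stdlib-only illustration of that identity, using hypothetical data rather than the study's:

```python
import math


def paired_t(x, y):
    """Paired t statistic on matched observations.
    For a two-level within-subject factor, F(1, n-1) = t**2."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)
```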
Conclusions:
We did not find the kinds of age-related effects normally observed on traditional pattern separation tasks. Although younger adults performed better overall, older adults were not any worse when responding to a similar detail compared to an identical detail, which is inconsistent with performance on pattern separation tasks where older adults perform worse when identifying similar objects compared to younger adults. Additionally, older and younger adults were influenced by context in the same way. Previous studies from our laboratory demonstrated that older adults are biased toward the context when recognizing similar objects, but the context in this paradigm did not differentially influence accuracy for either older or younger adults. Potentially, this task relies on more semantic similarity rather than the perceptual similarity of objects. Semantic similarity from the short stories may incorporate more information to better orthogonalize similar memories, rendering retrieval less susceptible to interference.
Youth with attention-deficit/hyperactivity disorder (ADHD), characterized by symptoms of inattention and hyperactivity, often experience challenges with emotion regulation (ER) and/or emotional lability/negativity (ELN).1-3 Prior work has shown that difficulties with ER and ELN among young children contribute to lower academic achievement.4-6 To date, research examining associations between ADHD and academic achievement have primarily focused on the roles of inattentive symptoms and executive functioning.7-8 However, preliminary work among youth with ADHD suggests significant associations between disruptions in emotional functioning and poor academic outcomes.9-10 The current study will examine associations between ER, ELN, and specific subdomains of academic achievement (i.e., reading, spelling, math) among youth with and without ADHD.
Participants and Methods:
Forty-six youth (52% male; Mage=9.52 years; 76.1% Hispanic/Latino; 21 with ADHD) and their parents were recruited as part of an ongoing study. Parents completed the Disruptive Behavior Disorders Rating Scale11 and Emotion Regulation Checklist12 about their child. Youth completed the Wechsler Abbreviated Scale of Intelligence-II13 and three subtests [Spelling (SP), Numerical Operations (NO), Word Reading (WR)] of the Wechsler Individual Achievement Test-III.14 Univariate analysis of variance assessed differences in emotional functioning and academic achievement among youth with and without ADHD. Correlation and regression analyses were conducted to examine the association between emotional factors and the three subtests of academic achievement.
Results:
Youth with ADHD exhibited significantly higher ELN (M=30.7, SD=8.7) compared to their peers (M=23.2, SD=5.8), when controlling for child age, sex, and diagnoses of conduct disorder and/or oppositional defiant disorder [F(1,41)=8.96, p<.01, ηp²=.18]. With respect to ER, youth with (M=24.8, SD=4.2) and without ADHD (M=25.8, SD=4.3) did not differ [F(1,41)=.51, p=.48]. Surprisingly, within this sample, ADHD diagnostic status was not significantly associated with performance on any of the academic achievement subtests [WR: F(1,41)=.29, p=.59; NO: F(1,41)=.91, p=.35; SP: F(1,41)=2.14, p=.15]. Among all youth, ER was significantly associated with WR (r=.31, p=.04) and SP (r=.35, p=.02), whereas ELN was associated with performance on NO (r=-.30, p=.04). When controlling for child age, sex, IQ, and ER within the full sample, higher ELN was associated with lower scores on the NO subtest (b=-.56, SE=.26, p=.04). The associations between higher ER and WR scores (b=1.12, SE=.51, p=.03), as well as higher ER and SP scores (b=1.47, SE=.56, p=.01), were significant when controlling for child age and sex, but not ELN and IQ (p=.73 and p=.64, respectively).
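The reported effect size can be checked against the F statistic, since partial eta squared is recoverable from F and its degrees of freedom:

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f * df_effect) / (f * df_effect + df_error)

# For the ELN result above, F(1, 41) = 8.96 gives
# partial_eta_squared(8.96, 1, 41), approximately .18, matching the text.
```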
Conclusions:
As expected, youth with ADHD had higher ELN, although they did not differ from their peers in terms of ER. Results identified distinct associations between ER and higher reading/spelling performance, as well as ELN and lower math performance across all youth. Thus, findings suggest that appropriate emotional coping skills may be most important for reading and spelling, while emotional reactivity appears most salient to math performance outcomes. In particular, ELN may be a beneficial target for intervention, especially with respect to improvement in math problem-solving skills. Future work should account for executive functioning skills, expand the academic achievement domains to include fluency and more complex academic skills, and assess longitudinal pathways within a larger sample.
Reduced hearing is associated with increased risk for social, emotional, and behavioral difficulties. Studies to date have typically compared deaf and hard of hearing (DHH) children with their hearing peers without regard for unilateral hearing loss (UHL) versus bilateral hearing loss (BHL). Children with UHL are often perceived as more like their typically hearing peers than their peers with BHL. Children with UHL typically access sound and spoken language, which facilitates their functioning with fewer supports (e.g., interpreters, captioning). These children, however, show cognitive, academic, and communication profiles more similar to children with BHL than typically hearing peers. They may also experience similar social, emotional, and behavioral challenges as their BHL peers. We examined social, emotional, and behavioral functioning in a clinically referred sample of children with UHL versus BHL.
Participants and Methods:
Parents of 100 children aged 2 to 17 years (M=7.12) with either UHL (n=30) or BHL (n=70) completed the Behavioral Assessment System for Children, Third Edition (BASC-3) as part of neuropsychological evaluation in a Deaf and Hard of Hearing Program within a tertiary pediatric hospital. BASC-3 scores based on General Combined norms were compared to an expected distribution of typically developing hearing children using non-parametric one-sample tests. Profiles of scores for children with UHL and BHL were examined in a repeated measures MANOVA.
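BASC-3 scores are T-scores, normed to a mean of 50 and SD of 10, so the one-sample comparison above amounts to measuring how far the observed score distribution departs from that normal reference. A stdlib-only sketch of the Kolmogorov-Smirnov D statistic used in such a test (an illustration, not the authors' code):

```python
import math


def t_score_cdf(x, mu=50.0, sd=10.0):
    """CDF of the normal reference distribution for T-scores."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))


def ks_statistic(sample, cdf=t_score_cdf):
    """One-sample Kolmogorov-Smirnov D: the maximum distance between
    the empirical CDF of the sample and the theoretical CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Check the gap just before and just after each observed point.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d
```

A sample of clinically elevated scores (e.g., clustered near T=70) yields a large D, which is the pattern behind the elevated clinical scales reported in the Results.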
Results:
The groups of children with UHL and BHL showed similar age, gender, race, ethnicity, and Area Deprivation Index compositions. Eighty-four percent of BHL children communicated with spoken language, and 100% of UHL children communicated with spoken language (p=.02). There were similar rates of comorbid diagnoses for ADHD (20%), Anxiety/Depression (18%), Autism Spectrum Disorder (8%), and Intellectual Disability/Global Developmental Delay (9%). However, children with BHL tended to be at greater risk for Language Disorders (50%) than those with UHL (30%; χ² = 3.41, p=.065). Together, children with hearing loss showed significantly higher scores on the BASC-3 Hyperactivity, Aggression, Attention Problems, Atypicality, and Withdrawal clinical scales than expected (One-Sample Kolmogorov-Smirnov Test; p<.01). Profile analysis showed that children with any type of hearing loss had a varied pattern of scores across scales (F(7,686)=4.33, p<.01), with highest scores on Hyperactivity and Attention Problems scales and lowest scores on Somatization. Scale profiles did not differ, however, between UHL and BHL groups (p=.127).
Conclusions:
Children with UHL have access to auditory input, typically enabling early language development more like their hearing peers compared to children with BHL. In turn, these children may be overlooked more so than their BHL peers. However, the likelihood of social, emotional, and behavioral difficulties is similar between the two groups of children with hearing loss, whether that is unilateral or bilateral. Our study showed both groups of children had similar profiles across BASC-3 scales with elevations relative to norms. Measuring these everyday functions in children with hearing loss is important for early detection of risks to promote early intervention.
Survivors of childhood brain tumor are historically thought to perform worse on measures of executive functioning, including cognitive flexibility (CF; e.g., set-shifting), when compared to their peers. Commonly utilized measures, such as subtests from the Delis-Kaplan Executive Function System (D-KEFS), have baseline conditions that attempt to measure performances independent of but critical for CF tasks (e.g., motor speed on trail making, letter fluency on verbal fluency). However, in research, conditions measuring CF are often included in analyses without accounting for these important baseline conditions. The aim of the current study is to explore differences in CF performance between survivors and their healthy peers when controlling for baseline conditions. The variance explained by each baseline condition on CF condition performance in survivors is also explored.
Participants and Methods:
A sample of 107 long-term survivors of childhood brain tumor (Mage=21.81, SD=5.99, 50.5% female) and 142 healthy controls (Mage= 23.25, SD=6.61, 61.3% female) were administered the Trail Making Test (TMT), Color-Word Interference (CWI), and Verbal Fluency (VF) subtests from the D-KEFS. For the TMT, baseline conditions include visually scanning for a target, motor speed, and letter and number sequencing. For the CWI subtest, baseline conditions include rapid color naming, word reading, and reading words in a different colored ink. On the VF subtest, baseline conditions include rapidly naming words with a specific letter and from a specific category. An analysis of covariance was conducted for each subtest to determine if groups differed in performance on the CF condition (i.e., Number-Letter Switching, Inhibition/Switching, Category Switching Accuracy) when controlling for baseline conditions. In survivors only, linear regressions investigated the amount of variance explained by each baseline condition on the CF conditions of each subtest.
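The unique variance attributable to a single baseline condition corresponds to a squared semipartial correlation: the drop in the model's R² when that predictor is removed. For the two-predictor case this has a closed form, sketched here stdlib-only with hypothetical data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


def unique_r2(y, x1, x2):
    """Squared semipartial correlation of x1 with y, controlling for x2:
    the unique R^2 contribution of x1 in a two-predictor model."""
    ry1, ry2, r12 = pearson_r(y, x1), pearson_r(y, x2), pearson_r(x1, x2)
    return (ry1 - ry2 * r12) ** 2 / (1 - r12 ** 2)
```

With more than two baseline predictors, the same quantity is obtained by fitting the full regression and the regression without the predictor of interest, and subtracting the two R² values.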
Results:
Groups did not differ in CF performance on each subtest when controlling for baseline conditions (ps>.10). Across subtests, baseline conditions significantly predicted CF performance in survivors. On the TMT, Letter Sequencing (p=.003, unique-R2=.05), but not Visual Scanning, Number Sequencing, or Motor Speed, was a significant predictor of Number-Letter Switching performance (p<.001, R2=.50). On the CWI subtest, Word Reading (p<.001, unique-R2=.09) and Inhibition (p<.001, unique-R2=.05), but not Color Naming, were significant predictors of Inhibition/Switching performance (p<.001, R2=.67). On the VF subtest, Letter Fluency (p=.009, unique-R2=.06) and Category Fluency (p<.001, unique-R2=.08) were significant predictors of Category Switching Accuracy performance (p<.001, R2=.37).
Conclusions:
Findings suggest that CF may not differ between survivors and their healthy peers, but that other factors of executive functioning, such as processing speed, drive performance differences on measures of CF. As these tasks rely heavily on speed, survivors may be slower than their healthy counterparts, but may not perform worse on set-shifting. In addition, these results highlight the importance of controlling for lower-order processes in analyses to help isolate CF performance and more accurately characterize potential differences between groups. While replication of findings in survivors and other clinical groups (e.g., congenital heart disease, traumatic brain injury) is still needed, this work can help inform which processes are most important to account for, which is not yet established.
Huntington’s disease (HD) is a neurodegenerative disease characterised by motor, psychiatric and cognitive decline. Currently, no treatments have been identified in HD for slowing down cognitive decline or improving cognitive function. We are interested in identifying potentially modifiable factors in HD that can be targeted to improve or maintain cognitive function. Sleep and circadian disruption stand out as possible modifiable targets because sleep and circadian symptoms are common in HD, and such disruptions are known to impact cognition in the general population. Despite some emerging evidence that sleep quality correlates with cognition in manifest HD, whether these same relationships exist in the premanifest period is unknown. Further, whether circadian rhythms relate to cognition in premanifest HD remains open. Therefore, we aimed to determine whether sleep and circadian parameters relate to cognitive performance in premanifest HD.
Participants and Methods:
To date, we have recruited 27 premanifest HD participants to a two-week remote sleep study. During the study, participants wore an Actiwatch-2 and completed a sleep diary for 14 consecutive days to assess their sleep and rest-activity patterns. Participants also completed online sleep and mood questionnaires and a cognitive assessment using videoconference. We calculated Pearson correlations to examine whether cognitive performance relates to subjective sleep quality, objective sleep parameters and circadian rest-activity rhythms. Thus far, we have analysed data from 15 female participants with premanifest HD (Mage = 43.20, SD = 11.58).
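Intra-daily variability, one of the nonparametric circadian measures derived from the rest-activity data here, is conventionally computed (following Witting and colleagues' formulation) as the ratio of the mean squared successive difference of the activity series to its overall variance; higher values indicate a more fragmented rhythm. A minimal sketch, assuming hourly activity bins:

```python
def intradaily_variability(activity):
    """IV = (n * sum of squared successive differences)
         / ((n - 1) * sum of squared deviations from the mean).
    Higher IV indicates a more fragmented rest-activity rhythm."""
    n = len(activity)
    mean = sum(activity) / n
    num = n * sum((activity[i] - activity[i - 1]) ** 2 for i in range(1, n))
    den = (n - 1) * sum((x - mean) ** 2 for x in activity)
    return num / den
```

A rapidly alternating activity series yields a higher IV than a smooth one with the same overall variance, which is the sense in which IV indexes fragmentation.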
Results:
Preliminary results indicate that measures of subjective sleep quality, insomnia severity, daytime sleepiness, and fatigue severity in premanifest HD do not correlate with cognitive performance. Increases in objectively measured sleep efficiency, however, strongly correlated with better performance on the Hopkins Verbal Learning Test-Revised (HVLT-R) immediate (r = 0.562, p < 0.05) and delayed recall trials (r = 0.597, p < 0.05) and the Trail Making Test Part B (TMT-B; r = 0.550, p < 0.05). More time spent awake (i.e., wake after sleep onset) was strongly linked to reduced performance on the TMT-B (r = -0.542, p < 0.05) and Symbol Digit Modalities Test (r = -0.556, p < 0.05). Further, increases in total sleep time were associated with better performance on the HVLT-R immediate (r = 0.682, p < 0.05) and delayed recall trial (r = 0.616, p < 0.05). For our circadian parameters, less fragmented day-to-day rest-activity rhythms (i.e., lower intra-daily variability) strongly correlated with higher scores on the HVLT-R immediate (r = 0.768, p < 0.001) and delayed recall trials (r = 0.7276, p < 0.05) and TMT-B (r = 0.516, p < 0.05), whereas consistent and stable day-to-day rest-activity rhythms (i.e., higher inter-daily stability) were associated with poorer performance on ERT (r = -0.587, p < 0.05).
Conclusions:
Preliminary results suggest that fragmented sleep, sleep inefficiency, reduced total sleep time, rest-activity rhythm stability and fragmentation relate to poorer cognitive performance in people with premanifest HD. Should analysis of our whole sample confirm these preliminary findings, targeting sleep in HD (e.g., through sleep hygiene and/or psychoeducation) may be a useful strategy to improve or maintain cognition.
Motor imagery is defined as a dynamic state during which a subject mentally simulates a given action without overt movements. Our aim was to use near-infrared spectroscopy to investigate differences in cerebral hemodynamics during motor imagery of self-feeding with chopsticks using the dominant or non-dominant hand.
Participants and Methods:
Twenty healthy right-handed people participated in this study. The motor imagery task involved eating sliced cucumber pickles using chopsticks with the dominant (right) or non-dominant (left) hand. Activation of regions of interest (pre-supplementary motor area, supplementary motor area, pre-motor area, pre-frontal cortex, and sensorimotor cortex) was assessed.
Results:
Motor imagery vividness of the dominant hand tended to be significantly higher than that of the non-dominant hand. The time of peak oxygenated hemoglobin was significantly earlier in the right pre-frontal cortex than in the supplementary motor area and left pre-motor area. Hemodynamic correlations were detected in more regions of interest during dominant-hand motor imagery than during non-dominant-hand motor imagery.
Conclusions:
Hemodynamics might be affected by differences in motor imagery vividness caused by variations in motor manipulation.