Previous research established that white matter hyperintensities (WMH), a biomarker of small vessel cerebrovascular disease, are strong predictors of cognitive function in older adults and associated with the clinical presentation of Alzheimer’s disease (AD), particularly when distributed in posterior brain regions. Secondary prevention clinical trials, such as the Anti-Amyloid Treatment in Asymptomatic Alzheimer’s (A4) study, target amyloid accumulation in asymptomatic amyloid positive individuals, but the extent to which small vessel cerebrovascular disease accounts for performance on the primary cognitive outcomes in these trials is unclear. The purpose of this study was to examine the relationship between regional WMH volume and performance on the Preclinical Alzheimer Cognitive Composite (PACC) among participants screened for participation in the A4 trial. We also determined whether the association between WMH and cognition is moderated by amyloid positivity status.
Participants and Methods:
We assessed demographic, amyloid PET status, cognitive screening, and raw MRI data for participants in the A4 trial and quantified regional (by cerebral lobe) WMH volumes from T2-weighted FLAIR images in amyloid positive and amyloid negative participants at screening. Cognition was assessed using PACC scores, a z-score sum of four cognitive tests: the Mini-Mental State Examination (MMSE), the Free and Cued Selective Reminding Test, the Logical Memory Test, and the Digit Symbol Substitution Test. We included 1329 amyloid positive and 329 amyloid negative individuals (981 women; mean age=71.79 years; mean education=16.58 years) at the time of the analysis. The sample included Latinx (n=50; 3%), non-Latinx (n=1590; 95.9%), or unspecified ethnicity (n=18; 1.1%) individuals who identified as American Indian/Alaskan Native (n=7; 0.4%), Asian (n=38; 2.3%), Black/African American (n=41; 2.5%), White (n=1551; 93.5%), or unspecified (n=21; 1.3%) race. We first examined the associations of total and regional WMH volume and amyloid positivity with PACC scores (the primary cognitive outcome measure for A4) using separate general linear models and then determined whether amyloid positivity status and regional WMH statistically interacted for those WMH regions that showed significant main effects.
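As a rough illustration of the analytic approach described above (separate general linear models for each WMH region, followed by interaction tests), here is a minimal R sketch; the data frame, file, and column names (a4, pacc, wmh_parietal, amyloid, and the covariates) are hypothetical placeholders rather than variables from the A4 dataset.

```r
# Minimal sketch, assuming hypothetical columns: pacc, wmh_parietal (regional WMH
# volume), amyloid (factor: negative/positive), plus demographic covariates.
a4 <- read.csv("a4_screening.csv")   # hypothetical file name

# Main-effects model: regional WMH and amyloid status as independent predictors of PACC
m_main <- lm(pacc ~ wmh_parietal + amyloid + age + education + sex, data = a4)
summary(m_main)

# Interaction model, fit only for regions showing significant main effects
m_int <- lm(pacc ~ wmh_parietal * amyloid + age + education + sex, data = a4)
anova(m_main, m_int)   # does the WMH x amyloid interaction improve fit?
```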
Results:
Both increased WMH, particularly in the frontal and parietal lobes, and amyloid positivity were independently associated with poorer performance on the PACC, with effects of similar magnitude. In subsequent models, WMH volume did not interact with amyloid positivity status on PACC scores.
Conclusions:
Regionally distributed WMH are independently associated with cognitive functioning in typical participants enrolled in a secondary prevention clinical trial for AD. These effects are of similar magnitude to the effects of amyloid positivity on cognition, highlighting the extent to which small vessel cerebrovascular disease potentially drives AD-related cognitive profiles. Measures of small vessel cerebrovascular disease should be considered explicitly when evaluating outcomes in trials, both as potential effect modifiers and as possible targets for intervention or prevention. The findings from this study cannot be generalized widely, as the participants are not representative of the overall population.
Researchers are increasingly finding that the presence of neuronal surface antibodies (NSAb) may account for a larger percentage of outpatient epilepsy cases than previously thought (Elisak et al., 2018; Brenner et al., 2013). However, systematic NSAb screening is not included in standard epilepsy care (Kambadja et al., 2022). The Montreal Cognitive Assessment (MoCA; Nasreddine, 2005) is one of the most commonly used screening tools among physicians (Judge et al., 2019) across various neurological conditions, and has previously been validated in populations with autoimmune encephalitis (Hebert et al., 2018). Because patients with NSAb-associated epilepsy often present with cognitive dysfunction (Greco et al., 2006), the MoCA is often administered as part of standard clinical care. The present analysis compared MoCA performance profiles in epilepsy patients with and without serum NSAbs. We explored which specific cognitive profile, as defined by the MoCA, may predict NSAb positivity.
Participants and Methods:
Forty-eight epilepsy patients were enrolled through an outpatient epilepsy clinic or during non-intensive or elective hospital stays. Participants were eligible if they met one of three diagnostic categories: focal epilepsy of unknown cause (n = 33), lesional focal epilepsy (n = 5), or generalized epilepsy (n = 4). All participants provided informed consent and underwent a comprehensive interview that included MoCA testing, as well as serum NSAb testing, paralleling standard clinical practice. Mann-Whitney U tests were run to compare overall MoCA and domain subscore performance between groups.
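A minimal R sketch of the group comparisons described above; the data frame, file, and column names (epi, moca_total, moca_recall, nsab) are hypothetical.

```r
# Sketch assuming one row per patient with hypothetical columns:
# moca_total, moca_recall (delayed recall subscore), nsab (factor: negative/positive)
epi <- read.csv("epilepsy_moca.csv")   # hypothetical file name

# Mann-Whitney U (Wilcoxon rank-sum) tests comparing antibody-positive vs -negative patients
wilcox.test(moca_total  ~ nsab, data = epi)   # overall MoCA score
wilcox.test(moca_recall ~ nsab, data = epi)   # delayed recall domain
```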
Results:
Six patients (13%), all with focal epilepsy of unknown cause, had positive NSAb panels (LGI1: n = 3; GAD65: n = 2; CASPR2: n = 1). There was no significant difference in overall MoCA scores between antibody-positive and antibody-negative participants with focal epilepsy of unknown cause, or between antibody-positive patients and antibody-negative patients with lesional or generalized epilepsy. However, when analyzing by MoCA domain, we found that antibody-positive patients performed significantly worse on delayed recall than antibody-negative patients with focal epilepsy of unknown cause (Mdn = 1.5 vs 3), U(N antibody-negative = 27, N antibody-positive = 6) = 69.00, p = .02. There was no significant difference on other MoCA cognitive domains, and delayed recall scores did not significantly differ between antibody-positive patients and those with lesional focal or generalized epilepsy.
Conclusions:
These preliminary findings suggest that episodic memory impairment, as measured by delayed recall on the MoCA, may predict NSAb positivity among patients with focal epilepsy of unknown cause. This may relate to a specific predilection for hippocampal regions in antibody-mediated epileptogenesis and pathology.
The present study explored how individual differences and development of gray matter architecture in inferior frontal gyri (IFG), anterior cingulate (ACC), and inferior parietal lobe (IPL) relate to development of response inhibition as measured by both the Stop Signal Task (SST) and the Go/No-Go (GNG) task in a longitudinal sample of healthy adolescents and young adults. Reliability of behavioral and neural measures was also explored.
Participants and Methods:
A total of 145 individuals contributed data from the second through fifth timepoints of an accelerated longitudinal study focused on adolescent brain and behavioral development at the University of Minnesota. At baseline, participants were 9 to 23 years of age and typically developing. Assessment waves were spaced approximately 2 years apart. Behavioral measures of response inhibition collected at each assessment included GNG Commission Errors (CE) and the SST Stop Signal Reaction Time (SSRT). Structural T1 MRI scans were collected on a Siemens 3 T Tim Trio and processed with the longitudinal FreeSurfer 6.0 pipeline to yield cortical thickness (CT) and surface area values. Regions of interest based on the Desikan-Killiany-Tourville atlas included IFG regions (pars opercularis (PO) and pars triangularis (PT)), ACC, and IPL. The cuneus and global brain measures were evaluated as control regions. Retest stability of all measures was calculated using the psych package in R. Mixed linear effects modeling using the lme4 R package identified whether age-based trajectories for SSRTs and GNG CEs were best fit by linear, quadratic, or inverse curves. Then, disaggregated between- and within-subjects effects of regional cortical architecture measures were added to the longitudinal behavioral models to identify individual differences and developmental effects, respectively.
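To make the disaggregation of between- and within-person effects concrete, a minimal lme4 sketch is shown below; the data frame, file, and column names (dat, id, age, ssrt, ct_lpo) are hypothetical placeholders rather than the study’s actual variables.

```r
library(lme4)

# Long-format data, one row per participant per wave, with hypothetical columns:
# id, age (years), ssrt, ct_lpo (left pars opercularis cortical thickness)
dat <- read.csv("inhibition_long.csv")   # hypothetical file name

# Disaggregate cortical thickness into a between-person component (person mean)
# and a within-person component (deviation from that mean at each wave)
dat$ct_between <- ave(dat$ct_lpo, dat$id)        # individual differences
dat$ct_within  <- dat$ct_lpo - dat$ct_between    # developmental change

# Inverse-age trajectory with random intercepts per participant
m <- lmer(ssrt ~ I(1 / age) + ct_between + ct_within + (1 | id), data = dat)
summary(m)
```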
Results:
Both response inhibition metrics demonstrated fair reliability and were best fit by an inverse age trajectory. Neural measures demonstrated excellent retest stability (all ICCs > 0.834). Age-based analyses of regional CT identified heterogeneous patterns of development, including linear trajectories for ACC and inverse age trajectories for bilateral PT. Individuals with thinner left PO showed worse performance on both response inhibition tasks. SSRTs were related to individual differences in right PO thickness and surface area. A developmental pattern was observed for right PT cortical thickness, where thinning over time was related to better GNG performance. Lower surface area of the right PT was related to worse GNG performance. No individual differences or developmental patterns were observed for the ACC, IPL, cuneus, or global metrics.
Conclusions:
This study examined the adolescent development of response inhibition and its association with cortical architecture in the IFG, ACC, and IPL. Separate response inhibition tasks demonstrated similar developmental patterns, with the steepest improvements in early adolescence and relationships with left PO thickness, but each measure had unique relationships with other IFG regions. This study indicates that a region of the IFG, the pars opercularis, relates to both individual differences and developmental change in response inhibition. These patterns suggest brain-behavior associations that could be further explored in functional imaging studies and that may index, in vulnerable individuals, risk for psychopathology.
Extant literature suggests significant heterogeneity in recovery trajectories after experiencing a moderate to severe traumatic brain injury (TBI) during childhood (Moran et al., 2016). The Cognitive and Linguistic Scale (CALS) is a promising non-norm-referenced measure designed for serial monitoring within an inpatient rehabilitation setting that may optimize prediction of acute recovery and long-term neuropsychological functioning. To date, the CALS has primarily been examined in the context of injury characteristics such as severity and etiology (e.g., Slomine et al., 2016), and it is unclear what non-injury factors may be relevant to consider. Using archival data gathered from an inpatient pediatric neurorehabilitation program, this study examined associations between the CALS and select sociodemographic factors to better inform the clinical utility of the measure.
Participants and Methods:
Participants included 56 youth (46% BIPOC, 66% male) aged 2-17 years (M = 12.40, SD = 3.99) who were admitted for moderate to severe TBI to an inpatient rehabilitation program at a regional tertiary care children’s hospital. Data extracted from medical records included demographic information (i.e., age at injury, sex, ethnoracial identity, and address), initial Glasgow Coma Scale (GCS) rating, CALS at admission, and full-scale IQ (FSIQ) at discharge. GCS was used as a proxy for injury severity. Residential addresses were geocoded, and area-level median income was used as a proxy for familial socioeconomic status (SES). A multiple regression model was used to parse the individual contributions of demographic variables to initial CALS performance while accounting for injury severity. Parallel regression models were used to determine whether patient characteristics moderated the association between initial CALS performance and cognitive functioning at discharge.
Results:
Preliminary analyses demonstrated that there were no significant associations between GCS and demographic variables, ps > .05. Patient age at injury was significantly associated with CALS total score at admission above and beyond injury severity and other demographic characteristics, t (31) = 2.55, p = .016, such that older age was associated with higher initial CALS scores. Results of moderator analyses between CALS and patient characteristics showed a significant main effect of injury severity, such that higher GCS was associated with higher FSIQ at discharge across models, ps < .05. No significant interactions were identified.
Conclusions:
These findings provide additional evidence for the generalizability of the CALS and further characterize its associations with non-injury factors, which is important for better understanding aspects that contribute to recovery trajectories and outcomes after moderate to severe TBI. Given the longstanding challenges in regard to the validity of neuropsychological assessment for diverse groups, it is critical to explicitly examine cultural context when considering the clinical utility of a measure. A limitation of the current study is the utilization of broad demographic information due to limited availability of sociocultural data. Future research should examine more granular and culturally-specific variables that may impact CALS performance (e.g., educational attainment, linguistic considerations), beyond using broad-based demographic data as a proxy for sociocultural factors. Another important next step is to utilize serial administrations of the CALS to examine the impact of sociocultural factors on recovery trajectories following TBI.
Cognitive reserve has been linked to functional ability, and depression has been shown to be associated with more functional impairment in older adults. While cognitive reserve and depression are associated and have each been shown to impact functional impairment, the independent impact of cognitive reserve on functional ability after accounting for depressive symptoms has not been explored. For the purpose of this study, years of education served as a proxy for cognitive reserve, which is consistent with the literature. It was predicted that higher levels of education would be associated with better functional ability regardless of age and severity of depressive symptoms.
Participants and Methods:
Participants (ages 55 to 90) were drawn from the Alzheimer’s Disease Neuroimaging Initiative (N=3407); participants with major depression were not included. Subsyndromal depressive symptoms were measured using the Geriatric Depression Scale (GDS < 6) and functional impairment was assessed using the Functional Activities Questionnaire. A three-stage hierarchical regression was conducted with functional ability as the dependent variable.
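A minimal R sketch of this three-stage hierarchical regression; the data frame, file, and column names (adni, faq, age, gds, education) are hypothetical placeholders.

```r
# Hypothetical columns: faq (functional ability), age, gds (depressive symptoms), education
adni <- read.csv("adni_subset.csv")   # hypothetical file name

m1 <- lm(faq ~ age, data = adni)                    # stage 1: age only
m2 <- lm(faq ~ age + gds, data = adni)              # stage 2: add depressive symptoms
m3 <- lm(faq ~ age + gds + education, data = adni)  # stage 3: add education (reserve proxy)

anova(m1, m2, m3)   # F tests for the increment in variance explained at each stage
c(stage1 = summary(m1)$r.squared,
  stage2 = summary(m2)$r.squared,
  stage3 = summary(m3)$r.squared)   # R^2 at each stage
```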
Results:
Age, entered at stage one of the regression model, was a significant predictor (F(1,1427) =49.75, p<.001) and accounted for 3.4% of the variance in functional ability. Adding depressive symptoms to the regression model led to a significant increase in variance explained (F(1,1426) = 64.57, p<.001), accounting for an additional 4.2% of the variance in functional ability. Adding years of education to the regression model explained an additional 1.4% of variance in functional ability and this increase in variance explained was significant (F(1,1425) = 22.53, p<.001).
Conclusions:
Cognitive reserve (operationalized as higher levels of education) was associated with higher functional ability even after accounting for age and depressive symptoms.
Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments. However, many factors may impede or prevent face-to-face assessments, including distance to the clinic and limitations in mobility, eyesight, or transportation. The COVID-19 pandemic further widened gaps in access to care and clinical research participation. Alternatives to face-to-face assessments may provide an opportunity to alleviate the burden caused by both the COVID-19 pandemic and longer-standing social inequities. The objectives of this study were to develop and assess the feasibility of telephone- and video-administered versions of the Uniform Data Set (UDS) v3 cognitive battery for use by NIH-funded Alzheimer’s Disease Research Centers (ADRCs) and other research programs.
Participants and Methods:
Ninety-three individuals (M age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent adjudicated cognitive status was normal cognition (N=44), MCI (N=35), mild dementia (N=11), or other (N=3). They completed portions of the UDSv3 cognitive battery, plus the RAVLT, either by telephone or video format within approximately 6 months (M = 151 days) of their annual in-person visit, where they completed the same cognitive assessments in person. Some measures were substituted (Oral Trails for TMT; Blind MoCA for MoCA) to allow for phone administration. Participants also answered questions about the pleasantness, difficulty level, and preference for administration mode. Cognitive testers provided ratings of the perceived validity of the assessment. Participants’ cognitive status was adjudicated by a group of cognitive experts blinded to the most recent in-person cognitive status.
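A minimal R sketch of the concordance analyses reported below (score correlations and agreement of adjudicated status); the data frame, file, and column names (uds, moca_remote, moca_inperson, status_remote, status_inperson) are hypothetical.

```r
# One row per participant; hypothetical paired columns for each test plus
# adjudicated cognitive status from each modality
uds <- read.csv("uds_remote_pilot.csv")   # hypothetical file name

# Concordance between remote and in-person scores for a single test
cor.test(uds$moca_remote, uds$moca_inperson)

# Percent agreement between remotely and in-person adjudicated cognitive status
mean(uds$status_remote == uds$status_inperson, na.rm = TRUE) * 100
```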
Results:
When results from video and phone modalities were combined, the remote assessments were rated as being as pleasant as the in-person assessment by 74% of participants, and 75% rated the level of difficulty of the remote cognitive assessment as the same as the in-person testing. Overall perceived validity of the testing session, as determined by cognitive assessors (video = 92%; phone = 87.5%), was good. There was generally good concordance between test scores obtained remotely and in person (r = .3-.8; p < .05), regardless of whether they were administered by phone or video, though individual test correlations differed slightly by mode. Substituted measures also generally correlated well, with the exception of TMT-A and OTMT-A (p > .05). Agreement between adjudicated cognitive status obtained remotely and cognitive status based on in-person data was generally high (78%), with slightly better concordance between video/in-person (82%) vs phone/in-person (76%) assessments.
Conclusions:
This pilot study provided support for the use of telephone- and video-administered cognitive assessments using the UDSv3 among individuals with normal cognitive function and some degree of cognitive impairment. Participants found the experience similarly pleasant and no more difficult than in-person assessment. Test scores obtained remotely correlated well with those obtained in person, with some variability across individual tests. Adjudication of cognitive status did not differ significantly whether it was based on data obtained remotely or in person. The study was limited by its small sample size, large test-retest window, and lack of randomization to test-modality order. Current efforts are underway to more fully validate this battery of tests for remote assessment. Funded by: P30 AG072947 & P30 AG049638-05S1
Adolescent and young adult survivors of pediatric brain tumors often live with long-term neuropsychological deficits, which have been found to be related to functional and structural brain changes related to the presence of the tumor itself as well as treatments such as radiation therapy. The importance of brain networks has become a central focus of research over recent decades across neurological populations. Graph theory is one way of analyzing network properties that can describe the integration, segregation, and other aspects of network organization. The existing literature using graph theory with survivors of brain tumors is small and inconsistent; therefore, more work is needed, particularly in survivors of pediatric brain tumors. The present study used graph theory to determine whether functional network properties in this population differ from those of healthy controls, whether graph metrics relate to core cognitive skills (attention, working memory, and processing speed), and whether they relate to a cumulative measure of neurological risk.
Participants and Methods:
31 survivors and 31 matched controls completed neuropsychological testing including measures of attention, working memory, and processing speed. They also underwent resting state functional magnetic resonance imaging. Resting state data were preprocessed and spatially constrained independent component analysis was completed to construct connectivity matrices. Finally, graph metrics were calculated utilizing an area under the curve method, including global efficiency, clustering coefficient, betweenness centrality, and small-worldness. Group differences and associations between graph metrics, cognitive outcomes, and neurological risk were analyzed using SPSS version 28.0.
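To illustrate how the graph metrics named above can be computed for a single participant, a minimal R/igraph sketch is shown; the matrix file and object names are hypothetical, and the study’s actual pipeline aggregated metrics across thresholds via an area-under-the-curve approach rather than using the single threshold assumed here.

```r
library(igraph)

# Thresholded, binarized connectivity matrix (components x components); hypothetical file
adj <- as.matrix(read.csv("subject01_adjacency.csv", row.names = 1))
g   <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)

cc <- transitivity(g, type = "global")   # clustering coefficient
bt <- mean(betweenness(g))               # mean betweenness centrality

d    <- distances(g)
geff <- mean(1 / d[upper.tri(d)])        # global efficiency (mean inverse path length)

# Small-worldness: clustering and path length relative to a degree-preserving random graph
g_rand <- rewire(g, with = keeping_degseq(niter = 100 * ecount(g)))
sw <- (cc / transitivity(g_rand, type = "global")) /
      (mean_distance(g) / mean_distance(g_rand))
```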
Results:
Results revealed a significant difference such that brain tumor survivors exhibited weaker small-world properties in their functional brain networks. This was found to be related to working memory, such that less small-worldness in the network was related to poorer performance. There were no significant relationships with neurological risk, but there were nonsignificant correlations of small-to-moderate effect size such that lower global efficiency and clustering coefficient were associated with greater neurological risk. Comparisons to structural network analysis from a similar sample and additional post-hoc analyses are also discussed.
Conclusions:
These findings reveal that survivors of pediatric brain tumors indeed display significant differences in functional brain networks that are quantifiable by graph theory. It is also possible that, with further work, we might better understand how metrics such as small-worldness can be used to predict long-term cognitive outcomes and functional independence in adulthood.
Smartphone-based cognitive assessments can provide unique information about cognition that is difficult or impossible to obtain with traditional cognitive assessments. Using high-frequency measurement “burst” designs, we have shown that older adults are capable of and willing to participate in smartphone-based research, that this method dramatically improves between-subject reliability compared to traditional methods and demonstrates extraordinary test-retest reliabilities, and that high-frequency measurement can reveal time-of-day effects that are increased in those with elevated Alzheimer’s disease biomarkers. In this symposium session, we will provide an overview of our current work in older adults at risk for AD and highlight new analyses on the interaction between day-to-day variability in sleep and cognition. We will also cover our approach for measuring smartphone latencies, a critical aspect of bring-your-own-device (BYOD) studies.
Participants and Methods:
The Ambulatory Research in Cognition (ARC) smartphone application for iOS and Android administers custom-designed tests of associate memory, processing speed, and spatial working memory. ARC uses a measurement burst design in which very brief (typically 60s or less) tests are completed at random times several times per day for up to one week. Measurement burst designs rely on principles from ecological momentary assessment, and can be described with a simple formula: 1. Test often and everywhere, 2. Keep assessments brief, and 3. Combine the data across sessions to increase reliability. At the Knight Alzheimer’s Disease Research Center at Washington University in St Louis, we have enrolled over 400 participants (ages 60-99 years) at risk for AD in ARC studies. These participants are comprehensively assessed with traditional cognitive tests, clinical examinations, neuroimaging, and fluid biomarkers. ARC also assesses sleep with the Pittsburgh Sleep Quality Index that captures essential sleep parameters, which are assessed daily during each 7-day measurement burst. Analyses of sleep and cognition focused on parameters including total sleep time, number of awakenings, sleep quality ratings, and an extremes analysis comparing cognition after nights with more sleep and after nights with less sleep.
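A minimal sketch of how the daily sleep-cognition association described above might be modeled in R; the data frame, file, and column names (arc, memory_z, total_sleep_prev, awakenings_prev, and the covariates) are hypothetical placeholders, not the study’s actual variable names.

```r
library(lme4)

# Long-format session-level data with hypothetical columns: id, memory_z (session memory
# score), total_sleep_prev (hours, prior night), awakenings_prev, plus covariates
arc <- read.csv("arc_sessions.csv")   # hypothetical file name

m <- lmer(memory_z ~ total_sleep_prev + awakenings_prev +
            age + gender + education + apoe4 + (1 | id),
          data = arc)
summary(m)
```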
Results:
Overall, participants reporting less total sleep time and more awakenings had lower memory and processing speed scores. This remained significant after modeling covariates including age, self-reported gender, education, and APOE ε4 status. Compared to nights with the most sleep, memory was worse after the nights with the poorest sleep.
Conclusions:
When considering AD biomarkers in these analyses, participants with elevated AD biomarkers, including neurofilament light chain (NfL) and phosphorylated-tau181 (p-tau181), demonstrated greater impacts of poor sleep on cognition, such that the nights with the least sleep tended to impact cognition more than in those with normal biomarker levels, suggesting an important role for sleep in maintaining cognition in preclinical and early symptomatic AD. Interestingly, self-reported sleep quality was not associated with ARC cognitive tests, consistent with studies emphasizing the need for more quantitative assessments of sleep quality. In addition to these sleep data, we will review publications using the ARC platform, including a recently accepted manuscript in JINS (Nicosia et al., 2022).
Previous studies have found differences between monolingual and bilingual athletes on ImPACT, the most widely used sport-related concussion (SRC) assessment measure. Most recently, results suggest that monolingual English-speaking athletes outperformed bilingual English- and Spanish-speaking athletes on the Visual Motor Speed and Reaction Time composites. Before further investigation of these differences can occur, measurement invariance of ImPACT must be established to ensure that differences are not attributable to measurement error. The current study aimed 1) to replicate a recently identified four-factor model using cognitive subtest scores of ImPACT on baseline assessments in monolingual English-speaking athletes and bilingual English- and Spanish-speaking athletes and 2) to establish measurement invariance across groups.
Participants and Methods:
Participants included high school athletes who were administered the ImPACT as part of their standard pre-season athletic training protocol in English. Participants were excluded if they had a self-reported history of concussion, Autism, ADHD, learning disability, or treatment history of epilepsy/seizures, brain surgery, meningitis, psychiatric disorders, or substance/alcohol use. The final sample included 7,948 monolingual English-speaking athletes and 7,938 bilingual English- and Spanish-speaking athletes with valid baseline assessments. Language variables were based on self-report. As the number of monolingual athletes was substantially larger than the number of bilingual athletes, monolingual athletes were randomly selected from a larger sample to match the bilingual athletes on age, sex, and sport. Confirmatory factor analysis (CFA) was used to test competing models, including one-factor, two-factor, and three-factor models, to determine whether a recently identified four-factor model (Visual Memory, Visual Reaction Time, Verbal Memory, Working Memory) provided the best fit to the data. Eighteen subtest scores from ImPACT were used in the CFAs. Through increasingly restrictive multigroup CFAs (MGCFA), configural, metric, scalar, and residual levels of invariance were assessed by language group.
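A minimal lavaan sketch of the CFA and invariance sequence described above; the subtest-to-factor mapping, column names (x1-x18, language), and file name are illustrative assumptions, not the study’s actual specification.

```r
library(lavaan)

# Hypothetical data frame with 18 subtest columns (x1-x18) and a language grouping factor
impact <- read.csv("impact_baseline.csv")   # hypothetical file name

model <- '
  VisualMemory   =~ x1 + x2 + x3 + x4
  VisualReaction =~ x5 + x6 + x7 + x8
  VerbalMemory   =~ x9 + x10 + x11 + x12 + x13
  WorkingMemory  =~ x14 + x15 + x16 + x17 + x18
'

configural <- cfa(model, data = impact, group = "language")
metric     <- cfa(model, data = impact, group = "language", group.equal = "loadings")
scalar     <- cfa(model, data = impact, group = "language",
                  group.equal = c("loadings", "intercepts"))
residual   <- cfa(model, data = impact, group = "language",
                  group.equal = c("loadings", "intercepts", "residuals"))

lavTestLRT(configural, metric, scalar, residual)   # compare nested invariance models
```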
Results:
CFA indicated that the four-factor model provided the best fit in the monolingual and bilingual samples compared to competing models. However, some goodness-of-fit statistics were below recommended cutoffs; thus, post-hoc model modifications were made on a theoretical basis and by examination of modification indices. The modified four-factor model had adequate to superior fit, met criteria for all goodness-of-fit indices, and was retained as the configural model to test measurement invariance across language groups. MGCFA revealed that residual invariance, the strictest level of invariance, was achieved across groups.
Conclusions:
This study provides support for a modified four-factor model of the latent structure of ImPACT cognitive scores in monolingual English-speaking and bilingual English- and Spanish-speaking high school athletes at baseline assessment. Results further suggest that differences between monolingual English-speaking and bilingual English- and Spanish-speaking athletes reported in prior ImPACT studies are not caused by measurement error. The reason for these differences remains unclear, but they are consistent with other studies suggesting monolingual advantages. Given the increase in bilingual individuals in the United States, and in high school athletics, future research should investigate other sources of error, such as item bias and predictive validity, to further understand whether group differences reflect real differences between these athletes.
Appropriate adjustments to normative data for neuropsychological (NP) tests are imperative for their equitable use in brain health practices. Age and education are known to be strong predictors of test performance. In settings where validated tests are not available, common practice has been to adapt and apply them in a similar fashion as in the settings where they were developed. However, demographic adjustments cannot be assumed de facto to be universal in their strength and domain associations. For example, South Africa (SA) and Zimbabwe are neighboring countries with some similarities in their demographic makeup but with vastly different sociopolitical trajectories (Zimbabwe was colonially occupied until 1980, and SA was oppressed under Apartheid until 1994), which have impacted access to and quality of education by severely limiting educational opportunities for native citizens. The present study explored whether the direction and strength of the relationships of age and education with NP test performance were similar between SA and Zimbabwean adults living with and without HIV.
Participants and Methods:
Data were extracted from two IRB-approved studies in SA and Zimbabwe with similar inclusion and exclusion criteria. The SA sample (n=214) consisted of 56% females and 48% HIV-positive adults, with a mean age of 34 years and a nine-year range in education (3-14 years). The Zimbabwe sample (n=212) consisted of 68% females and 67% HIV-positive adults, with a mean age of 36 years and a thirteen-year range in education (7-20 years). Participants completed NeuroScreen, a tablet-based battery of 12 brief NP tests adapted for indigenous SA and Zimbabwean languages. The two study samples were analyzed separately. Zero-order correlations between each of the tests and age and education were conducted to determine the influence of the demographic variables. Relationships with moderate correlations (r>0.3) in both samples were further analyzed using univariate ANOVA to examine the main effects and interactions of age and education.
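A minimal R sketch of the follow-up ANOVAs described above, run separately within each country sample; the data frame, file, and column names (dat, visdisc_a, age, education) are hypothetical.

```r
# Hypothetical columns: visdisc_a (Visual Discrimination A score), age (years), education (years)
dat <- read.csv("neuroscreen_sa.csv")   # hypothetical file name (SA sample; repeat for Zimbabwe)

# Main effects and interaction of age and education, entered as factors
m <- aov(visdisc_a ~ factor(age) * factor(education), data = dat)
summary(m)
```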
Results:
Overall, there was a similar pattern of results across samples, with nine tests showing no-to-low associations with age and education. Moderate, significant relationships were found between age, education, and three tests of processing speed (Visual Discrimination A, Visual Discrimination B, and Number Speed) in both samples. Age and education had different effects on Visual Discrimination A across samples, with a significant main effect for age but not education in SA [F(40,83)=3.060, p<0.01], whilst Zimbabwe had a significant main effect for education but not age [F(10,87)=4.541, p<0.01]. Visual Discrimination B and Number Speed showed significant main effects for both variables in both samples. However, there was a significant interaction for both tests in Zimbabwe only.
Conclusions:
The current study is novel in its exploration of country-specific relationships between NP test performance and demographic factors in settings where assessment science is emergent. Results demonstrate the presence of differential relationships between demographic variables on test performance which raises questions about the source of these differences. One important potential source is the socio-cultural context of each country and the intersection of demographic factors in these contexts. Further research is required to explore these considerations.
A total of 32 sweet potato genotypes were evaluated to assess genetic diversity based on quantitative traits and molecular markers, as well as stability for yield and related traits. Wide variability was observed for traits such as vine length (181.2–501.3 cm), number of leaves/plant (103.0–414.0), internodal length (3.20–14.80 cm), petiole length (6.5–21.3 cm), leaf length (8.50–14.5 cm), leaf breadth (8.20–15.30 cm), leaf area (42.50–115.62 cm2), tuber length (7.77–18.07 cm), tuber diameter (2.67–6.90 cm), tuber weight (65.60–192.09 g), tuber yield (7.77–28.87 t ha−1), dry matter (27.34–36.41%), total sugar (4.50–5.70%) and starch (18.50–29.92%) content. Desirable traits such as tuber yield, dry matter and starch content showed high heritability (>60%) with moderate to high genetic advance. In the molecular analysis, a total of 232 alleles were observed across all 32 microsatellite markers, ranging from 4 to 14 with an average of 7.77 alleles per locus. In the population, the average observed heterozygosity (0.51) was higher than the expected heterozygosity (0.49). The contributions of genotype and the genotype-by-environment interaction to the total variation were found to be significant. Based on the multi-trait stability index (tuber length, tuber diameter, tuber weight and tuber yield), genotypes X-24, MLSPC-3, MLSPC-5, ARSPC-1 and TSP-12-12 were found to be the most stable. Among them, the high-yielding and stable genotypes TSP-12-10 (26.0 t ha−1) and MLSPC-3 (23.9 t ha−1) can be promoted for commercial production or used as parental material in future crop improvement programmes.
Prior literature has documented how motives for cannabis use predict frequency of use and cannabis use problems among adolescents. However, few studies have examined possible moderating variables that may influence the association between cannabis use motives and frequency of use. The current study examines how risky decision-making moderates this association to help better understand which individuals are at greater risk for cannabis use escalation. The current study is the first to examine the interactive effects of motives for cannabis use (i.e., health or recreational reasons) and risky decision-making on cannabis use trajectories among a sample of adolescent cannabis users.
Participants and Methods:
Data from 194 adolescent cannabis users aged 14–17 at baseline were analyzed as part of a larger longitudinal study. Participants included those who self-reported use of cannabis within six months prior to the baseline assessment. The Marijuana Reasons for Use Questionnaire (MJRUQ) was used to assess motives for cannabis use from a list of 13 items. A confirmatory factor analysis identified “health” and “recreational” factors for motives for cannabis use. Lifetime frequency of cannabis use (number of days used) was assessed through the Drug Use History Questionnaire, while risky decision-making was assessed using the Game of Dice Task. We used latent growth curve modeling and linear regression analyses to examine the interactive effects of motives for cannabis use and risky decision-making on initial levels of lifetime cannabis use at baseline, and rate of cannabis use escalation over time.
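As a rough sketch of the latent growth curve approach described above, the lavaan code below regresses the growth intercept and slope on motives, risky decision-making, and their product; the variable names, the number of waves, and the file name are hypothetical assumptions rather than the study’s actual specification.

```r
library(lavaan)

# Wide-format data with hypothetical columns: use_t1-use_t4 (lifetime days of use at each
# wave), health and recreational (motive scores), gdt (Game of Dice Task risky choices)
cann <- read.csv("cannabis_waves.csv")           # hypothetical file name
cann$rec_x_gdt <- cann$recreational * cann$gdt   # observed product term for the interaction

model <- '
  i =~ 1*use_t1 + 1*use_t2 + 1*use_t3 + 1*use_t4
  s =~ 0*use_t1 + 1*use_t2 + 2*use_t3 + 3*use_t4
  i ~ health + recreational + gdt + rec_x_gdt
  s ~ health + recreational + gdt + rec_x_gdt
'

fit <- growth(model, data = cann)
summary(fit, standardized = TRUE)
```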
Results:
No significant interactive effects were found for health motives for cannabis use; however, we found significant main effects of health motives on initial levels of lifetime cannabis use at baseline (b = 100.82, p < .01) and rate of cannabis use escalation (b = 24.79, p < .01). Those with a greater proclivity to use cannabis for health purposes showed higher initial levels of lifetime use at baseline and steeper increases in the rate of cannabis use escalation relative to those less likely to use for health purposes. Furthermore, we found a significant interactive effect of recreational motives for use and risky decision-making on the rate of cannabis use escalation (b = -2.53, p < .01). Follow-up analyses revealed that among those less likely to use cannabis for recreational purposes, higher risky decision-making was associated with a steeper increase in the rate of cannabis use escalation relative to those who exhibited lower risky decision-making.
Conclusions:
The current study replicated findings suggesting that cannabis use motives influence cannabis use trajectories. We found that using cannabis primarily for health reasons was associated with higher initial levels and steeper increases in use regardless of decision-making. Furthermore, we found that both motives for use and risky decision-making interacted to influence associations with cannabis use trajectories. Specifically, among individuals reporting less cannabis use for recreational reasons, those with relatively riskier decision-making showed steeper increases in the rate of cannabis use escalation. These findings inform prevention and intervention practices that focus on decision-making by tailoring approaches based on an individual’s primary motives for cannabis use.
Print Knowledge in children starts with recognizing and characterizing printed figures; it is a precursor of other skills like letter knowledge and phonological awareness. The goal was to assess print knowledge components and their predictive value in emerging literacy in a sample of Mexican preschoolers.
Participants and Methods:
60 children (aged 4 to 6 years; 50% boys and 50% girls) were tested, and their performance on the visual synthesis and figure copy tasks from the SNBP-MX and on the Rey Complex Figure Test (children’s version) was analyzed.
Results:
Children with lower performance on the SNBP-MX could not use visual information to perform correctly on the Rey Complex Figure. They had problems in the reproduction of the figure, and they did not respect the components of print knowledge: 1) figure-building characteristics (size, rotation, orientation) and 2) figure function (relationship with the background and with other figures).
Conclusions:
Impairments in early visual perception skills are related to the execution of elements of print knowledge. Therefore, it is expected that children with low performance on visuoperceptual and spatial tasks will have difficulties with early literacy. Since visual information is needed for copying and learning to write figures, print knowledge could be categorized as a predictor of early word- and letter-recognition skills. We thank project PAPIIT IN308219 for sponsoring this research.
Class III obesity is associated with increased risk for cognitive impairment. Though hypothesized to be partially attributable to sedentary time (ST), past research examining the association between ST and cognitive function has produced mixed findings. One possible explanation is that studies do not typically account for the highly correlated and almost inverse relationship between ST and light-intensity physical activity (LPA), such that ST displaces time spent engaging in LPA. Therefore, we aimed to evaluate whether: (1) a higher ST-to-LPA time ratio is associated with poorer performance across multiple cognitive domains in patients with Class III obesity seeking bariatric surgery; and (2) the associations differ by sex.
Participants and Methods:
Participants (N = 121, 21-65 years of age, BMI > 40 kg/m2) scheduled for either Roux-en-Y Gastric Bypass (RYGB) or Sleeve Gastrectomy (SG) completed the NIH Toolbox, a computerized neuropsychological assessment battery, and wore a waist-mounted ActiGraph monitor during waking hours for 7 days to measure minutes/day spent in ST, LPA, and moderate-to-vigorous physical activity (MVPA). A ratio of time spent in ST to LPA was calculated by dividing the percentage of daily wear time spent in sedentary behavior by the percentage of daily wear time spent in LPA.
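A minimal R sketch of the ratio computation and the sex-stratified correlation reported below; the data frame, file, and column names (bari, st_min, lpa_min, wear_min, dccs, sex) are hypothetical.

```r
# Hypothetical columns: st_min, lpa_min, wear_min (minutes/day), dccs (Dimensional
# Change Card Sort score), sex
bari <- read.csv("bariatric_actigraphy.csv")   # hypothetical file name

# ST-to-LPA ratio: percent of wear time sedentary divided by percent of wear time in LPA
bari$st_lpa_ratio <- (bari$st_min / bari$wear_min) / (bari$lpa_min / bari$wear_min)

cor.test(bari$st_lpa_ratio, bari$dccs)                               # full sample
with(subset(bari, sex == "female"), cor.test(st_lpa_ratio, dccs))    # women only
```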
Results:
On average, participants (mean age = 43.22 years, mean BMI = 45.83 kg/m2) wore the accelerometer for 909±176 minutes/day and spent 642±174 minutes/day in ST, 254±79 minutes/day in LPA, and 14±13 minutes/day in MVPA. Mean daily ST-to-LPA time ratio was 2.81 ± 1.3 (range 0.73-7.11). Overall, bivariate Pearson correlations showed no significant relationships between LPA and cognitive performance on any of the NIH Toolbox subtests (r values = -.002 to -.158, all p values > .05). Similarly, bivariate Pearson correlations showed no significant relationships between daily ST-to-LPA time ratio and cognitive performance on any of the subtests (r values = .003 to .108, all p values > .05). However, a higher ST-to-LPA ratio was associated with lower scores on the Dimensional Change Card Sort Test in women (r = -.26, p = .01).
Conclusions:
Results showed that participants’ mean daily time spent in ST was 2.5 times higher than that spent in LPA and a higher ratio of ST-to-LPA was associated with poorer set-shifting in women with Class III obesity. Future studies should look to clarify underlying mechanisms, particularly studies examining possible sex differences in the cognitive benefits of PA. Similarly, intervention studies are also needed to determine if increasing LPA levels for individuals with Class III obesity would lead to improved cognitive performance by means of reducing ST.
In this work, the relationship between the velocity of an elongated bubble and its shape is investigated, in the case where the elongated bubble flows in a viscous liquid initially at rest in a pipe. The velocity, expressed as a Froude number, depends on the angle of the inclined pipe, the Eötvös number and the buoyancy Reynolds number. The diameter of the pipe and the surface tension being fixed, the Eötvös number remains constant; this study focuses on the dependence of the velocity on the pipe inclination angle and the viscosity of the liquid. The velocity of the elongated bubble was measured for different angles between 0 and 15 degrees and for liquid viscosities 10 to 200 times that of water. As the velocity of elongated bubbles depends closely on their shape, shadowgraphy coupled with particle image velocimetry was used. The results show that the velocity of the elongated bubbles is highly sensitive to the inclination angle of the pipe and to the viscosity of the liquid, particularly for low pipe inclinations and large viscosities. In the layer of liquid located downstream of the elongated bubble, laminar flow develops rapidly in the liquid, resulting from a balance between gravity and friction at the wall. The identification of the position of the stagnation point close to the nose of the elongated bubble and the curvature of the interface at this point helps to explain why the velocity of the elongated bubble decreases for low angles and high viscosities.
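For reference, one common set of definitions for these dimensionless groups in the Taylor-bubble literature is given below; conventions vary slightly between studies, and the symbols (bubble velocity $U_b$, pipe diameter $D$, liquid and gas densities $\rho_l$ and $\rho_g$, liquid viscosity $\mu_l$, surface tension $\sigma$, gravitational acceleration $g$) are assumptions rather than notation taken from this abstract:

$$
\mathrm{Fr} = \frac{U_b}{\sqrt{g D}}, \qquad
\mathrm{Eo} = \frac{(\rho_l - \rho_g)\, g D^2}{\sigma}, \qquad
R = \frac{\sqrt{g D^3 (\rho_l - \rho_g)\, \rho_l}}{\mu_l}.
$$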
Learning process variables such as the serial position effect and learning ratio (LR) are predictive of cognitive decline and dementia. Gender differences on memory measures are well documented, but there is inconsistent evidence for gender effects on learning process variables. In the present study, we examined the relationship of serial position and LR to memory performance and to cortisol levels, considering gender as a potential moderator.
Participants and Methods:
Data were taken from a deidentified dataset of a study on stress and aging in which 123 healthy community-dwelling adults over age 50 completed various assessments. Our analyses included 100 participants (56% female, 93% White, mean age = 60.65 years, mean education = 15.22 years) who completed all measures of interest. LR, primacy effect, and recency effect were calculated from the learning trials of the Auditory Verbal Learning Test (AVLT). Additional memory measures included recall measures from the AVLT and from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). AUC cortisol was calculated from salivary cortisol samples taken across 6 time points in the study.
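A minimal R sketch of the hierarchical regressions with gender moderation reported below; the data frame, file, and column names (dat, avlt_delay, recency, gender, age, education) are hypothetical.

```r
# Hypothetical columns: avlt_delay (delayed recall), recency, gender, age, education
dat <- read.csv("stress_aging.csv")   # hypothetical file name

m1 <- lm(avlt_delay ~ age + education + gender, data = dat)             # covariates only
m2 <- lm(avlt_delay ~ age + education + gender + recency, data = dat)   # add recency
m3 <- lm(avlt_delay ~ age + education + gender * recency, data = dat)   # add gender x recency

anova(m1, m2, m3)   # does the interaction add explained variance?
```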
Results:
Women performed better than men on LR, primacy, and traditional memory measures (ps = <.001 to .018) but not on recency (p = .40). LR was moderately correlated with primacy (r = .481, p < .001) and weakly correlated with recency (r = .271, p = .008), after controlling for age, gender, and education. After controlling for age, gender, and education, better LR was related to better memory performance across all measures (rs = .276-.693, ps = <.001 to .007) and better recency was related to better performance on all memory measures (rs = .212-.396, ps = <.001 to .038). Better primacy was related to better AVLT immediate and delayed recall and RBANS Immediate Memory Index scores (rs = .326-.532, ps < .001) but not RBANS delayed recall (r = .115, p = .263).
Hierarchical linear regressions were conducted to examine gender as a moderator of relationships between learning process variables and memory performance, after accounting for age, gender, and education. There were no gender by LR (ps = .349-.830) or gender by primacy interactions (ps = .124-.671). There was an interaction between gender and recency on AVLT memory measures (ps = .006-.022), but not on RBANS measures (ps = .076-.745). For men, higher recency was related to higher AVLT immediate and delayed recall (rs = .501-.541, ps < .001), but not for women (rs = -.029 to .020, ps = .839-.888), after controlling for age and education. The relationship of AUC salivary cortisol to learning process measures was also moderated by gender (LR by gender interaction p = .055; primacy by gender interaction p = .047; but not recency by gender, p = .79). Interestingly, for women, higher cortisol was related to higher LR (r = .16) and higher primacy (r = .36), while for men, it was related to lower LR (r = -.22) and not to primacy (r = -.05). Cortisol was not related to recency (rs = -.04 to -.07).
Conclusions:
Women performed better on LR and primacy, as well as on other traditional memory variables, but gender did not appear to differentially impact the relationship of LR or primacy to memory outcomes. Findings suggest some differential relationships of recency to memory outcomes by gender. Results also suggested potential gender differences in the relationship of cortisol to learning process variables, but further study is necessary, especially with samples of individuals with memory impairment.
Illness perception, or the ways in which individuals understand and cope with injury, has been extensively studied in the broader medical literature and has been found to have important associations with clinical outcomes across a wide range of medical conditions. However, there is a dearth of knowledge regarding how perceptions of traumatic brain injury (TBI) influence outcome and recovery following injury, especially in military populations. The purpose of this study was to examine relationships between illness perception, as measured via symptom attribution, and neurobehavioral and neurocognitive outcomes in Veterans with TBI history.
Participants and Methods:
This cross-sectional study included 44 treatment-seeking Veterans (86.4% male, 65.9% white) with remote history of TBI (75.0% mild TBI). All Veterans were referred to the TBI Cognitive Rehabilitation Clinic at VA San Diego and completed a clinical interview, self-report questionnaires, and a neuropsychological assessment. A modified version of the Neurobehavioral Symptom Inventory (NSI) was administered to assess neurobehavioral symptom endorsement and symptom attribution. Symptom attribution was assessed by having participants rate whether they believe each NSI item was caused by TBI. A total symptom attribution score was computed, as well as the standard NSI total and symptom cluster scores (i.e., vestibular, somatic, cognitive, and affective symptom domains). Three cognitive composite scores (representing mean performance) were also computed, including memory, attention/processing speed, and executive functioning. Participants were excluded if they did not complete the NSI attribution questions or they failed performance validity testing.
Results:
Results showed that the symptoms most frequently attributed to TBI included forgetfulness (82%), poor concentration (80%), and slowed thinking (77%). There was a significant positive association between symptom attribution and the NSI total score (r = 0.62, p < .001), meaning that greater attribution of symptoms to TBI was significantly associated with greater symptom endorsement overall.
Symptom attribution was also significantly associated with all four NSI symptom domains (rs = 0.47-0.66; all ps < .001), with the strongest relationship emerging between symptom attribution and vestibular symptoms. Finally, linear regressions demonstrated that symptom attribution, but not symptom endorsement, was significantly associated with objective cognitive functioning. Specifically, greater attribution of symptoms to TBI was associated with worse memory (β = -0.33, p = .035) and attention/processing speed (β = -0.40, p = .013) performance.
Conclusions:
Results showed significant associations between symptom attribution and (1) symptom endorsement and (2) objective cognitive performance in Veterans with a remote history of TBI. Taken together, findings suggest that Veterans who attribute neurobehavioral symptoms to their TBI are at greater risk of experiencing poor long-term outcomes. Although more research is needed to understand how illness perception influences outcomes in this population, results highlight the importance of early psychoeducation regarding the anticipated course of recovery following TBI.
Epilepsy is a chronic neurological disease, and surgery is a common treatment option for persons who do not respond to medication. Neuropsychology plays an important role in the epilepsy presurgical workup, characterizing the cognitive functioning of patients with epilepsy as well as assisting in determining the hemisphere of the brain in which seizures originate through testing of different cognitive functions. NeuroQuant is relatively new software that analyzes clinical neuroimaging to quantify brain volume. The objective of this study was to determine whether changes in left versus right total hippocampal volume predicted changes in verbal versus nonverbal memory performance.
Participants and Methods:
Cognitive performance and NeuroQuant bilateral hippocampal volume were examined in a cross-sectional sample of 37 patients with epilepsy. All patients had undergone a comprehensive presurgical neuropsychological evaluation as well as magnetic resonance imaging (MRI) and these results were analyzed using a series of linear regression analyses.
Results:
Total left hippocampal volume was a significant predictor of delayed verbal free recall (RAVLT: F(1, 31) = 4.79, p = .036, R² = 0.13, β = .37). Even when controlling for the effects of biological sex, education, and depression, left hippocampal volume remained a significant predictor (β = .42, p = .025). Total left hippocampal volume did not predict other verbal memory scores. Total right hippocampal volume was a significant predictor of delayed nonverbal figure recall (RCFT: F(1, 31) = 6.46, p = .016, R² = .17, β = .42). When controlling for the effects of biological sex, education, and depression, right hippocampal volume remained a significant predictor (β = .404, p = .026). Total right hippocampal volume did not predict other nonverbal memory scores.
Conclusions:
These findings validate prior research demonstrating the importance of the left hippocampus in verbal memory and the right hippocampus in nonverbal memory. The findings also demonstrate the clinical utility of neuropsychological evaluation in determining laterality in the epilepsy presurgical workup process, as well as support NeuroQuant's inclusion as an additional consideration in that process.
This study examined whether suboptimal performance, as determined by formal validity testing, would predict neurocognitive scores in a sample of 83 pre-surgical, non-litigating epilepsy patients.
Participants and Methods:
Participants were 83 patients who underwent comprehensive outpatient neuropsychological testing as part of their evaluation as epilepsy surgery candidates. The sample consisted of 41 females and 42 males, with 72 patients identifying as White, 5 as Black, 2 as Hispanic, 1 as Asian, and 2 as other. Mean age was 36 years (SD=12.4), mean FSIQ was 87 (SD=12.7), and mean years of education was 12.9 (SD=2.1). Each patient's assessment included a stand-alone performance validity test (PVT), either the Word Memory Test (WMT), the Test of Memory Malingering (TOMM), or the Medical Symptom Validity Test (MSVT), as well as two embedded measures of validity: the California Verbal Learning Test Forced Choice (CVLT FC) and WAIS-IV Reliable Digit Span (RDS). Pass/fail rates were analyzed, with valid performance defined as a pass score on at least two of the completed PVTs (Pass Effort group: N=73, 86.9%; Failed Effort group: N=10, 11.9%). Point-biserial Pearson correlations were conducted to determine the relationship between validity pass/fail status and WAIS-IV FSIQ, VCI, and PRI scores, CVLT-II Trials 1-5 Total T scores, CVLT-II Long Delay Free Recall z scores, WMS-III Logical Memory II T scores, BVMT Total Recall T scores, BVMT Delayed Recall T scores, and Trail Making Test (TMT) B T scores.
Results:
Significant relationships were found between effort group status and all neurocognitive scores except BVMT Total Recall. On average, the Failed Effort group obtained significantly lower FSIQ (M=76.57, SD=10.94), VCI (M=80.89, SD=16.03), PRI (M=81.00, SD=14.91), CVLT-II Trials 1-5 Total (M=34, SD=6.89), CVLT-II Long Delay Free Recall (M=-2.44, SD=1.43), WMS-IV Logical Memory II (M=4.83, SD=2.79), BVMT Delayed Recall (M=26.38, SD=6.41), and TMT B (M=29.70, SD=11.46) scores compared to the Pass Effort group (FSIQ M=88.09, SD=12.52; VCI M=92.13, SD=13.61; PRI M=91.14, SD=12.06; CVLT-II Trials 1-5 Total M=47.86, SD=12.02; CVLT-II Long Delay Free Recall M=-.44, SD=1.11; WMS-III Logical Memory II M=8.41, SD=3.17; BVMT Delayed Recall M=39.19, SD=12.66; TMT B M=39.34, SD=13.18). Correlation coefficients were r=-.266* (FSIQ), r=-.255* (VCI), r=-.271* (PRI), r=.361** (CVLT-II Total), r=-.474** (CVLT-II LDFR), r=-.298** (WMS-IV LM II), r=-.308** (BVMT DR), and r=-.240* (TMT B). All coefficients were significant at the .05 (*) or .01 (**) level.
Conclusions:
Results suggest that pass/fail status on formal validity testing predicts depressed performance on a variety of neurocognitive measures. Therefore, prediction of surgical outcome of resection/ablation (e.g., compensation by the contralateral hemisphere) should not be based upon neuropsychological memory performance alone when there are failures on tests of engagement, as memory scores have strong correlations with pass/fail status on formal validity testing. Overall, this emphasizes the importance of routinely integrating PVTs as part of pre-epilepsy surgery neuropsychological evaluations.
Telecommunication-assisted neuropsychological assessment (teleNP) has become more widespread, particularly in response to the COVID-19 pandemic. However, comparatively few studies have evaluated in-home teleNP testing and none, to our knowledge, have evaluated the National Alzheimer’s Coordinating Center’s (NACC) Uniform Data Set version 3 tele-adapted test battery (UDS v3.0 t-cog). The current study compares in-home teleNP administration of the UDS v3.0, acquired while in-person activities were suspended due to COVID-19, with a prior in-person UDS v3.0 evaluation.
Participants and Methods:
210 participants from the Michigan Alzheimer’s Disease Research Center’s longitudinal study of memory and aging completed both an in-person UDS v3.0 and a subsequent teleNP UDS v3.0 evaluation. The teleNP UDS v3.0 was administered either via video conference (n = 131), telephone (n = 75), or hybrid format (n = 4) with approximately 16 months between evaluations (mean = 484.7 days; SD = 122.4 days; range = 320-986 days). The following clinical phenotypes were represented at the initial assessment period (i.e., the most recent in-person UDS v3.0 evaluation prior to the teleNP UDS v3.0): cognitively healthy (n = 138), mild cognitive impairment (MCI; n = 60), dementia (n = 11), and impaired not MCI (n = 1). Tests included both the in-person and teleNP UDS v3.0 measures, as well as the Hopkins Verbal Learning Test-Revised (HVLT-R) and Letter “C” Fluency.
Results:
We calculated intraclass correlation coefficients (ICC) with raw scores from each time point for the entire sample. Sub-analyses were conducted for each phenotype among participants with an unchanged consensus research diagnosis: cognitively healthy (n = 122), MCI (n = 47), or cognitively impaired (i.e., MCI, dementia, and impaired not MCI) (n = 66). Test-retest reliability across modalities and clinical phenotypes was, in general, moderate. The poorest agreement was associated with the Trail Making Test (TMT)-A (ICC = 0.00; r = 0.027), TMT-B (ICC = 0.26; r = 0.44), and Number Span Backward (ICC = 0.49). The HVLT-R demonstrated moderate reliability overall (ICC = 0.51-0.68) but had notably weak reliability for cognitively healthy participants (ICC = 0.12-0.36). The most favorable reliability was observed for Craft Story 21 Recall – Delayed (ICC = 0.77), Letter Fluency (C, F, and L) (ICC = 0.74), the Multilingual Naming Test (MINT) (ICC = 0.75), and Benson Complex Figure – Delayed (ICC = 0.79).
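A minimal R sketch of the reliability indices used above; the data frame, file, and column names (uds, mint_inperson, mint_remote) are hypothetical.

```r
library(psych)

# Hypothetical paired raw-score columns for one test across the two administrations
uds <- read.csv("uds_telenp.csv")   # hypothetical file name

ICC(uds[, c("mint_inperson", "mint_remote")])   # intraclass correlation coefficients
cor.test(uds$mint_inperson, uds$mint_remote)    # Pearson r as a complementary index
```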
Conclusions:
Even after accounting for the inherent limitations of this study (e.g., significant lapse of time between testing intervals), our findings suggest that the UDS v3.0 teleNP battery shows only modest relationships with its in-person counterpart. Particular caution should be used when interpreting measures showing questionable reliability, though we encourage further investigation of remote vs. in-person testing under more controlled conditions.