Objective:
To establish quick-reference criteria regarding the frequency of statistically rare changes in seven neuropsychological measures administered to older adults.
Method:
Data from 935 older adults examined over a two-year interval were obtained from the Alzheimer’s Disease Neuroimaging Initiative. The sample included 401 cognitively normal older adults whose scores were used to determine the natural distribution of change scores for seven cognitive measures and to set change score thresholds corresponding to the 5th percentile. The number of test scores exceeding these thresholds was counted for the cognitively normal group, as well as for 381 individuals with mild cognitive impairment (MCI) and 153 individuals with dementia. Regression analyses examined whether the number of change scores predicted diagnostic group membership beyond demographic covariates.
Results:
Only 4.2% of cognitively normal participants obtained two or more change scores that fell below the 5th percentile of change scores, compared to 10.6% of the stable MCI participants and 38.6% of those who converted to dementia. After adjusting for age, gender, race/ethnicity, and premorbid estimates, the number of change scores below the 5th percentile significantly predicted diagnostic group membership.
Conclusions:
It was uncommon for older adults to have two or more change scores fall below the 5th percentile thresholds in a seven-test battery. Higher change counts may identify those showing atypical cognitive decline.
Placebo and nocebo effects are widely reported across psychiatric conditions, yet have seldom been examined in the context of gambling disorder. Through meta-analysis, we examined placebo effects, their moderating factors, and nocebo effects in available randomised, controlled pharmacological clinical trials in gambling disorder.
Methods:
We searched a broad range of databases, up to 19 February 2024, for double-blind randomised controlled trials (RCTs) of medications for gambling disorder. Outcomes were gambling symptom severity and quality of life (for efficacy), and dropouts due to medication side effects in the placebo arms.
Results:
We included 16 RCTs (n = 833) in the meta-analysis. The overall effect size for gambling severity reduction in the placebo arms was 1.18 (95% CI 0.91–1.46), and for quality of life improvement it was 0.63 (95% CI 0.42–0.83). Medication class, study sponsorship, trial duration, baseline severity of gambling and publication year significantly moderated effect sizes for at least some of these outcome measures. Author conflict of interest, placebo run-in, gender split, severity scale choice, age of participants and unbalanced randomisation did not moderate effect sizes. Nocebo effects leading to dropout from the trial were observed in 6% of participants in trials involving antipsychotics, and at lower rates for other medication types.
Conclusion:
Placebo effects in trials of pharmacological treatment of gambling disorder are large, and there are several moderators of this effect. Nocebo effects were measurable and may be influenced by the medication class being studied. Practical implications of these new findings for the field are discussed, along with recommendations for future clinical trials.
No drugs are currently approved for the treatment of borderline personality disorder (BPD). These studies (a randomised study and its open-label extension) aimed to evaluate the efficacy, safety and tolerability of brexpiprazole for the treatment of BPD.
Methods:
The Phase 2, multicentre, randomised, double-blind, placebo-controlled, parallel-group study enrolled adult outpatients with BPD. After a 1-week placebo run-in, patients were randomised 1:1 to brexpiprazole 2–3 mg/day (flexible dose) or placebo for 11 weeks. The primary endpoint was change in Zanarini Rating Scale for BPD total score from randomisation (Week 1) to Week 10 (timing of randomisation and endpoint blinded to investigators and patients). The Phase 2/3, multicentre, open-label extension study enrolled patients who completed the randomised study; all patients received brexpiprazole 2–3 mg/day (flexible dose) for 12 weeks. Safety assessments included treatment-emergent adverse events (TEAEs).
Results:
Brexpiprazole was not statistically significantly different from placebo on the primary endpoint of the randomised study (N = 324 randomised; N = 110 analysed per treatment group; least squares mean difference −1.02; 95% confidence limits −2.75, 0.70; p = 0.24). Numerical efficacy advantages for brexpiprazole were observed at other time points. The most common TEAE in the randomised study was akathisia (brexpiprazole, 14.0%; placebo, 1.2%); data from the open-label study (N = 199 analysed) suggested that TEAEs were transient.
Conclusion:
The primary endpoint of the randomised study was not met. Further research on brexpiprazole in BPD is warranted based on possible efficacy signals at other time points and its safety profile.
Identifying persons with HIV (PWH) at increased risk for Alzheimer’s disease (AD) is complicated because memory deficits are common in HIV-associated neurocognitive disorders (HAND) and a defining feature of amnestic mild cognitive impairment (aMCI; a precursor to AD). Recognition memory deficits may be useful in differentiating these etiologies. Therefore, neuroimaging correlates of different memory deficits (i.e., recall, recognition) and their longitudinal trajectories in PWH were examined.
Design:
We examined 92 PWH from the CHARTER Program, ages 45–68, without severe comorbid conditions, who received baseline structural MRI and baseline and longitudinal neuropsychological testing. Linear and logistic regression examined neuroanatomical correlates (i.e., cortical thickness and volumes of regions associated with HAND and/or AD) of memory performance at baseline and multilevel modeling examined neuroanatomical correlates of memory decline (average follow-up = 6.5 years).
Results:
At baseline, thinner pars opercularis cortex was associated with impaired recognition (p = 0.012; p = 0.060 after correcting for multiple comparisons). Worse delayed recall was associated with thinner pars opercularis (p = 0.001) and thinner rostral middle frontal cortex (p = 0.006) cross-sectionally, even after correcting for multiple comparisons. Delayed recall and recognition were not associated with medial temporal lobe (MTL), basal ganglia, or other prefrontal structures. Recognition impairment was variable over time, and there was little decline in delayed recall. Baseline MTL and prefrontal structures were not associated with decline in delayed recall.
Conclusions:
Episodic memory was associated with prefrontal structures, and MTL and prefrontal structures did not predict memory decline. There was relative stability in memory over time. Findings suggest that episodic memory is more related to frontal structures than to encroaching AD pathology in middle-aged PWH. Additional research should clarify whether recognition is clinically useful for differentiating aMCI and HAND.
Levofloxacin prophylaxis reduces bloodstream infections in neutropenic patients with acute myeloid leukemia or relapsed acute lymphoblastic leukemia. This retrospective, longitudinal cohort study compared the incidence of bacteremia, multidrug-resistant organisms (MDRO), and Clostridioides difficile infection (CDI) between time periods before and after implementation of levofloxacin prophylaxis. Benefits were sustained without an increase in MDRO or CDI.
Trichotillomania and skin picking disorder have been characterized as body-focused repetitive behavior (BFRB) disorders (i.e., repetitive self-grooming behaviors that involve biting, pulling, picking, or scraping one’s own hair, skin, lips, cheeks, or nails). Trichotillomania and skin picking disorder have also historically been classified, by some, as types of compulsive self-injury as they involve repetitive hair pulling and skin picking, respectively. The question of the relationship of these disorders to more conventional forms of self-injury such as cutting or self-burning remains incompletely investigated. The objective of this study was to examine the relationship of these two disorders with non-suicidal self-injury (NSSI).
Methods
Adults with trichotillomania (n = 93), skin picking disorder (n = 105), or both (n = 82) were recruited from the general population using advertisements and online support groups and completed an online survey. Participants completed self-report instruments to characterize clinical profiles and associated characteristics. In addition, each participant completed a mental health history questionnaire.
Results
Of the 280 adults with BFRB disorders, 141 (50.1%) reported a history of self-injury independent of hair pulling and skin picking. Participants with a history of self-injury reported significantly worse pulling and picking symptoms (p < .001) and were significantly more likely to have co-occurring alcohol problems (p < .001), borderline personality disorder (p < .001), buying disorder (p < .001), gambling disorder (p < .001), compulsive sexual behavior (p < .001), and binge eating disorder (p = .041).
Conclusions
NSSI appears common in trichotillomania and skin picking disorder and may be part of a larger constellation of behaviors associated with impulse control or reward-related dysfunction.
Gambling disorder affects 0.5–2.4% of the population and shows strong associations with lifetime alcohol use disorder. Very little is known regarding whether lifetime alcohol use disorder can impact the clinical presentation or outcome trajectory of gambling disorder.
Methods
Data were pooled from previous clinical trials conducted on people with gambling disorder, none of whom had current alcohol use disorder. Demographic and clinical variables were compared between those who did versus did not have lifetime alcohol use disorder.
Results
Of the 621 participants in the clinical trials, 103 (16.6%) had a lifetime history of alcohol use disorder. History of alcohol use disorder was significantly associated with male gender (relative risk [RR] = 1.42), greater body weight (Cohen’s d = 0.27), family history of alcohol use disorder in first-degree relative(s) (RR = 1.46), previous hospitalization due to psychiatric illness (RR = 2.68), and more gambling-related legal problems (RR = 1.50). History of alcohol use disorder was not significantly associated with the other variables examined, such as severity of gambling disorder or extent of functional disability. Lifetime alcohol use disorder was not significantly associated with the extent of clinical improvement in gambling disorder symptoms during the subsequent clinical trials.
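For readers less familiar with the relative risks reported above, an RR is simply the ratio of outcome proportions between two groups. A minimal sketch (the counts below are illustrative only, not the study's data):

```python
def relative_risk(exposed_pos, exposed_n, unexposed_pos, unexposed_n):
    """Risk ratio: P(outcome | exposed) / P(outcome | unexposed)."""
    return (exposed_pos / exposed_n) / (unexposed_pos / unexposed_n)

# Hypothetical counts: outcome present in 40 of 103 "exposed"
# participants versus 134 of 518 "unexposed" participants
rr = relative_risk(40, 103, 134, 518)  # ≈ 1.50
```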
Conclusions
These data highlight that lifetime alcohol use disorder is an important clinical variable to be considered when assessing gambling disorder because it is associated with several untoward features (especially gambling-related legal problems and prior psychiatric hospitalization). The study design enabled these associations to be disambiguated from current or recent alcohol use disorder.
Both impulsivity and compulsivity have been identified as risk factors for problematic use of the internet (PUI). Yet little is known about the relationship between impulsivity, compulsivity and individual PUI symptoms, limiting a more precise understanding of mechanisms underlying PUI.
Aims
The current study is the first to use network analysis to (a) examine the unique associations among impulsivity, compulsivity and PUI symptoms, and (b) identify the most influential drivers in relation to the PUI symptom community.
Method
We estimated a Gaussian graphical model consisting of five facets of impulsivity, compulsivity and individual PUI symptoms among 370 Australian adults (51.1% female, mean age = 29.8, s.d. = 11.1). Network structure and bridge expected influence were examined to elucidate differential associations among impulsivity, compulsivity and PUI symptoms, as well as identify influential nodes bridging impulsivity, compulsivity and PUI symptoms.
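Network analyses of this kind are typically run in dedicated R packages; purely as an illustration of the underlying model, a Gaussian graphical model's edge weights are partial correlations, which can be recovered from the inverse covariance (precision) matrix. The data and node count below are synthetic, not the study's variables:

```python
import numpy as np

def partial_correlation_network(data):
    """Non-regularised Gaussian graphical model: edge weights are
    partial correlations derived from the precision matrix
    (the inverse of the sample covariance matrix)."""
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    # Partial correlation of i and j, controlling for all other nodes
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Toy data: 370 observations of 8 hypothetical nodes
rng = np.random.default_rng(0)
X = rng.normal(size=(370, 8))
net = partial_correlation_network(X)
```

In practice, published PUI networks are usually estimated with regularisation (e.g. graphical lasso) to shrink spurious edges; the sketch above shows only the unpenalised core.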
Results
Results revealed that four facets of impulsivity (i.e. negative urgency, positive urgency, lack of premeditation and lack of perseverance) and compulsivity were related to different PUI symptoms. Further, compulsivity and negative urgency were the most influential nodes in relation to the PUI symptom community, showing the highest bridge expected influence.
Conclusions
The current findings delineate distinct relationships across impulsivity, compulsivity and PUI, which offer insights into potential mechanistic pathways and targets for future interventions in this space. To realise this potential, future studies are needed to replicate the identified network structure in different populations and determine the directionality of the relationships among impulsivity, compulsivity and PUI symptoms.
Difficulties with emotion regulation have been associated with multiple psychiatric conditions. In this study, we aimed to investigate emotion regulation difficulties in young adults who gamble at least occasionally (i.e., an enriched sample) and who were diagnosed with a range of psychiatric disorders, using the validated Difficulties in Emotion Regulation Scale (DERS).
Methods
A total of 543 non-treatment-seeking individuals aged 18–29, who had engaged in gambling activities on at least 5 occasions within the previous year, were recruited from general community settings. Diagnostic assessments included the Mini International Neuropsychiatric Interview, Minnesota Impulsive Disorders Interview, attention-deficit/hyperactivity disorder (ADHD) World Health Organization Screening Tool Part A, and the Structured Clinical Interview for Gambling Disorder. Emotional dysregulation was evaluated using the DERS. The profile of emotional dysregulation across disorders was characterized using Z-scores (those with the index disorder vs. those without the index disorder).
Results
Individuals with probable ADHD displayed the highest level of difficulties in emotional regulation, followed by intermittent explosive disorder, social phobia, and generalized anxiety disorder. In contrast, participants diagnosed with obsessive-compulsive disorder showed relatively lower levels of difficulties with emotional regulation.
Conclusions
This study highlights the importance of recognizing emotional dysregulation as a trans-diagnostic phenomenon across psychiatric disorders. The results also reveal differing levels of emotional dysregulation across diagnoses, with potential implications for tailored treatment approaches. Despite limitations such as small sample sizes for certain disorders and limited age range, this study contributes to a broader understanding of emotional regulation’s role in psychiatric conditions.
The catechol-O-methyltransferase (COMT) inhibitor tolcapone constitutes a potentially useful probe of frontal cortical dopaminergic function. The aim of this systematic review was to examine what is known about the effects of tolcapone on human cognition from randomized controlled studies.
Methods
The study protocol was preregistered on the Open Science Framework. A systematic review was conducted using PubMed to identify relevant randomized controlled trials examining the effects of tolcapone on human cognition. Identified articles were then screened against inclusion and exclusion criteria.
Results
Of the 22 full-text papers identified, 13 randomized controlled trials were found to fit the pre-specified criteria. The most consistent finding was that tolcapone modulated working memory; however, the direction of effect appeared to be contingent on the COMT polymorphism (more consistent evidence of improvement in Val–Val participants). The studies were insufficient in number and too heterogeneous for meta-analysis.
Conclusion
The cognitive improvements identified upon tolcapone administration in some studies are likely due to the level of dopamine in the prefrontal cortex being shifted closer to its optimum, per an inverted-U model of prefrontal function. However, the results should be interpreted cautiously due to the small number of studies. Given the centrality of cortical dopamine to understanding human cognition, studies using tolcapone in larger samples and across a broader set of cognitive domains would be valuable. It would also be useful to explore the effects of different dosing regimens (different doses; single versus repeated administration).
Trichotillomania (TTM) is a mental health disorder characterized by repetitive urges to pull out one’s hair. Cognitive deficits have been reported in people with TTM compared to controls; however, the current literature is sparse and inconclusive about affected domains. We aimed to synthesize research on cognitive functioning in TTM and investigate which cognitive domains are impaired.
Methods
After preregistration on the International Prospective Register of Systematic Reviews (PROSPERO), we conducted a comprehensive literature search for papers examining cognition in people with TTM versus controls using validated tests. A total of 793 papers were screened using preestablished inclusion/exclusion criteria, yielding 15 eligible studies. Random-effects meta-analysis was conducted for 12 cognitive domains.
Results
Meta-analysis demonstrated significant deficits in motor inhibition and extradimensional (ED) shifting in people with TTM versus controls, as measured by the stop-signal task (SST) (Hedges’ g = 0.45 [CI: 0.14, 0.75], p = .004) and ED set-shift task (g = 0.38 [CI: 0.13, 0.62], p = .003), respectively. There were no significant between-group differences in the other cognitive domains tested: verbal learning, intradimensional (ID) shifting, road map spatial ability, pattern recognition, nonverbal memory, executive planning, spatial span length, Stroop inhibition, Wisconsin card sorting, and visuospatial functioning. Findings were not significantly moderated by study quality scores.
Conclusions
Motor inhibition and ED set-shifting appear impaired in TTM. However, a cautious interpretation of results is necessary as samples were relatively small and frequently included comorbidities. Treatment interventions seeking to improve inhibitory control and cognitive flexibility merit exploration for TTM.
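The machinery behind a random-effects meta-analysis of standardised mean differences, such as the one above, can be sketched as follows. This is a minimal illustration with made-up study summaries (Hedges' g per study, then DerSimonian–Laird pooling); the authors' actual software and estimator may differ:

```python
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardised mean difference with small-sample (Hedges) correction."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

def dersimonian_laird(g, v):
    """Random-effects pooled estimate with a 95% CI."""
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1 / v
    fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)     # between-study variance
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Made-up per-study effect sizes and variances, for illustration only
g_studies = [0.4, 0.6, 0.5]
v_studies = [0.02, 0.03, 0.025]
pooled, ci = dersimonian_laird(g_studies, v_studies)
```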
Analyzing data from the Puerto Rican English in Philadelphia (PREP) corpus, we investigate participation in TH-stopping, a socially stigmatized yet stable variable documented in Philadelphia. While previous studies have been impressionistic and have considered voiced and voiceless tokens to pattern together, this work validates novel, acoustically based stopping indices: mean harmonics-to-noise ratio for voiced tokens and skewness for voiceless tokens. We apply these indices to the corpus data, analyze stopping under a Bayesian framework, and compare the results with those from a model built on impressionistic coding of a subset of the same data. We find convergent evidence that TH-stopping is a stable variable in the Puerto Rican English data as well. Findings are compared with those of existing studies, noting future directions for research on the variable and underscoring the importance of establishing demographically representative baselines for linguistic research in diverse urban centers.
Accurately interpreting cognitive change is an essential aspect of clinical care for older adults. Several approaches to identifying ‘true’ cognitive change in a single cognitive measure are available (e.g., reliable change methods, regression-based norms); however, neuropsychologists in clinical settings often rely on simple score differences rather than advanced statistics, especially since multiple scores compose a typical battery. This study sought to establish quick-reference normative criteria to help neuropsychologists identify how frequently significant change occurs across multiple measures in cognitively normal older adults.
Participants and Methods:
Data were obtained from the National Alzheimer’s Coordinating Center (NACC). Participants were 845 older adults who were classified as cognitively normal at baseline and at 24-month follow-up. In NACC, these clinical classifications are made separately from the assessment of cognitive performance, including cognitive change. The sample was 34.9% female, 83.5% White, 13.1% Black, 2.3% Asian, and 1.1% other race, with a mean age of 70.7 years (SD=10.2). Of the sample, 95.5% identified as non-Hispanic. Mean education was 16.1 years (SD=2.8). The cognitive battery included: Craft Story Immediate and Delayed Recall, Benson Copy and Delayed Recall, Number Span (Forward & Backward), Category Fluency (Animals & Vegetables), Trails A & B, Multilingual Naming Test, and Verbal Fluency (F & L). Change scores between baseline performance and follow-up were calculated for each measure. The natural distribution of change scores was examined for each measure, and cut points representing the 5th and 10th percentiles were applied to each distribution to classify participants who exhibited substantial declines in performance on each measure. We then examined the multivariate frequency of statistically rare change scores for each individual.
Results:
As expected in a normal sample, overall cognitive performance was generally stable between baseline and 24-month follow-up. Across cognitive measures, 81.9% of participants had at least one change score fall below the 10th percentile in the distribution of change scores, and 55.7% had at least one score below the 5th percentile. Further, 49.3% of participants had two or more change scores below the 10th percentile, compared with 21.1% with two or more below the 5th percentile, and 26.7% had three or more change scores below the 10th percentile, versus 6.4% with three or more below the 5th percentile.
Conclusions:
Among cognitively normal older adults assessed twice at a 24-month interval with a battery of 13 measures, it was not uncommon for an individual to have at least one score fall below the 10th percentile (82% of the sample) or even the 5th percentile (56%) in the natural distribution of change scores. However, 27% of participants had three or more declines in test performance below the 10th percentile; in comparison, only 6% of the sample had three or more change scores below the 5th percentile. This suggests that individuals who exhibit more multivariate changes in performance than these standards are likely experiencing an abnormal rate of cognitive decline. Our findings provide a preliminary quick-reference approach to identifying clinically significant cognitive change. Future studies will explore additional batteries and examine multivariate frequencies of change in clinical populations.
Accurately interpreting change in cognitive functioning is an essential aspect of clinical care for older adults. Several approaches to identifying ‘true’ cognitive change in a single cognitive measure are available (e.g., reliable change methods, regression-based norms); however, neuropsychologists in clinical settings often rely on simple score differences rather than advanced analytical procedures, especially since they examine multiple test performances. This study sought to establish quick-reference normative criteria to help neuropsychologists identify how frequently significant change occurs across multiple cognitive measures in cognitively normal older adults.
Participants and Methods:
Data were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Participants were 401 older adults who were classified as cognitively normal at baseline and at 24-month follow-up. In ADNI, these clinical classifications are made separately from the assessment of cognitive performance, including cognitive change. The sample was 50.1% female, 93.5% non-Hispanic White, 4.0% non-Hispanic Black, 1.5% Asian American, and 1.0% other race/ethnicity, with a mean age of 76.0 years (SD = 4.9). Mean education was 16.4 years (SD = 2.7). The cognitive battery included: Boston Naming Test, Category Fluency Test, Trails A & B, Clock Drawing Test, and Auditory Verbal Learning Test, Trials 1-5 Total and Delayed Recall. Change scores between baseline performance and 24-month follow-up were calculated for each measure. The natural distribution of change scores was examined for each measure, and cut points representing the 5th and 10th percentiles were applied to each distribution to classify participants who exhibited substantial declines in performance on a given measure. We then examined the multivariate frequency of statistically rare change scores for each individual.
Results:
As expected in a normal sample, overall cognitive performance was generally stable between baseline and 24-month follow-up. Across cognitive measures, 43.6% of participants had at least one change score fall below the 10th percentile in the distribution of change scores, and 21.9% had at least one score below the 5th percentile. Further, 13.0% of participants had two or more change scores below the 10th percentile, compared with 4.5% with two or more below the 5th percentile, and 3.2% had three or more change scores below the 10th percentile, versus 0.5% with three or more below the 5th percentile.
Conclusions:
Among cognitively normal older adults assessed twice at a 24-month interval with a battery of seven measures, it was not uncommon for an individual to have at least one score fall below the 10th percentile (43% of the sample) or even the 5th percentile (21%) in the natural distribution of change scores. However, only 3.2% of the cognitively normal sample had three or more declines in test performance below the 10th percentile, and fewer than 1% had three or more change scores below the 5th percentile. This suggests that individuals who exhibit more multivariate changes in performance than these standards are likely experiencing an abnormal rate of cognitive decline. Our findings provide a preliminary quick-reference approach to identifying clinically significant cognitive change. Future studies will explore additional batteries and examine multivariate frequencies of change in clinical populations.
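The multivariate change-count procedure described above can be sketched as follows. The data here are synthetic and the function and variable names are illustrative, not drawn from ADNI or NACC:

```python
import numpy as np

def count_rare_declines(norm_changes, changes, pct=5):
    """Count, per person, how many change scores fall below the
    percentile cut points derived from a normative change-score
    distribution.

    norm_changes : (n_norm, n_tests) change scores in the normative group
                   (follow-up minus baseline; negative = decline)
    changes      : (n_people, n_tests) change scores to evaluate
    """
    cuts = np.percentile(norm_changes, pct, axis=0)  # one cut per test
    return np.sum(changes < cuts, axis=1)

# Synthetic example: 401 normative participants, 7 tests,
# then 100 new individuals scored against the normative cuts
rng = np.random.default_rng(1)
norm = rng.normal(0, 1, size=(401, 7))
sample = rng.normal(0, 1, size=(100, 7))
counts = count_rare_declines(norm, sample, pct=5)
```

Each entry of `counts` is the number of tests (0 to 7) on which that person declined more than the normative 5th-percentile threshold, which is the quantity tabulated in the base rates above.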
Some active-duty military service members and veterans experience combinations of persistent traumatic stress, depression, suicidal ideation, anger, aggressive behavior, substance misuse, sleep disturbance, complicated grief, moral injury, headaches and migraines, chronic bodily pain, and cognitive weakness or deficits. The purpose of this study is to describe the clinical outcomes of active-duty service members and veterans who have completed the traumatic brain injury (TBI) and brain health track of a two-week intensive clinical treatment and rehabilitation program.
Participants and Methods:
The sample included 141 participants with a history of TBI in the Intensive Clinical Program (ICP). The ICP is a multidisciplinary, two-week treatment and rehabilitation program for active-duty service members and veterans with complex psychological, cognitive, and physical health concerns. The program comprises daily individual therapy, group psychotherapy, psychoeducation, skills-building groups, and complementary and alternative medicine treatments. Participants in the ICP completed the following measures prior to initiating treatment and immediately following completion of treatment: Neurobehavioral Symptom Inventory (NSI), Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5), Patient Health Questionnaire-9 (PHQ-9), Self-Efficacy for Symptom Management Scale (SE-SMS), and Patient-Reported Outcomes Measurement Information System (PROMIS)-Satisfaction with Participation in Social Roles and Activities-Short Form 8a, version 1.0 (PROMIS-S). Wilcoxon signed-rank tests were used to examine differences in scores on self-report measures from pretreatment to posttreatment for the full sample and within three subgroups stratified by age (in years: 20-34, 35-45, and 46-66). For the NSI, changes in the proportion of participants endorsing moderate or worse levels of individual symptoms from pretreatment to posttreatment were assessed using McNemar’s tests. Alpha levels were set at p<0.05 for all analyses.
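The paired pre/post comparison described above can be illustrated with a Wilcoxon signed-rank test on simulated scores, together with a paired Cohen's d. This is a sketch on made-up data, not the study's actual code or measurements:

```python
import numpy as np
from scipy.stats import wilcoxon

# Simulated paired scores for 141 participants: a pretreatment symptom
# score and a posttreatment score showing improvement (values invented)
rng = np.random.default_rng(2)
pre = rng.normal(60, 10, size=141)
post = pre - rng.normal(8, 6, size=141)

# Paired, non-parametric test of pre vs post
stat, p = wilcoxon(pre, post)

# Cohen's d for paired data: mean difference / SD of the differences
diff = pre - post
d = diff.mean() / diff.std(ddof=1)
```

The non-parametric test makes no normality assumption about the scores, which suits bounded symptom inventories; the paired d gives the effect-size metric reported in the Results.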
Results:
Participants reported statistically significant improvements across all of the administered measures (NSI, PCL-5, PHQ-9, PROMIS-S, and SE-SMS) upon conclusion of treatment. Effect sizes ranged from medium to large (d=0.34-1.04) for the full sample. Effect sizes were largely consistent across age subgroups (20-34: d=0.32-1.05; 35-45: d=0.55-0.96; 46-66: d=0.28-1.05). The magnitude of change on the SE-SMS appeared smaller with increasing age (20-34: d=1.05; 35-45: d=0.69; 46-66: d=0.28). Individual item analyses for the NSI revealed statistically significant reductions in the proportion of participants endorsing moderate or greater severity from pretreatment to posttreatment for 18 of 22 symptoms.
Conclusions:
Active-duty service members and veterans participating in the two-week TBI and brain health intensive clinical program reported considerable symptom reduction at the conclusion of the program. Further research is indicated to assess the durability of symptom reduction.
Due to decades of structural and institutional racism, minoritized individuals in the US are more likely to live in low socioeconomic status neighborhoods, which may underlie their observed greater risk for neurocognitive impairment as they age. However, these relationships have not been examined among people aging with HIV. This study aimed to investigate neurocognitive disparities among middle-aged and older Latino and non-Latino White people living with HIV (PWH), and whether neighborhood socioeconomic deprivation may partially mediate these relationships.
Participants and Methods:
Participants were 372 adults ages 40-85 living in southern California, including 186 Latinos (94 PWH, 92 without HIV) and 186 non-Latino (NL) Whites (94 PWH, 92 without HIV) age-matched to the Latino group (for the overall cohort: Age M=57.0, SD=9.1; Education M=12.7, SD=3.9; 38% female; for the group of PWH: 66% AIDS, 88% on antiretroviral therapy [ART], 98% undetectable plasma RNA [among those on ART]). Participants completed psychiatric and neuromedical evaluations and neuropsychological tests of verbal fluency, learning and memory, in person or remotely. Neuropsychological results were converted to demographically unadjusted global scaled scores for our primary outcome. A neighborhood socioeconomic deprivation variable (SESDep) was generated for census tracts in San Diego County using American Community Survey 2013-2017 data. Principal components analysis was used to create one measure from nine variables comprising educational (% with high school diploma), occupational (% unemployed), economic (rent-to-income ratio, % in poverty, % female-headed households with dependent children, % with no car, % on public assistance), and housing (% rented housing, % crowded rooms) factors. Census tract SESDep values were averaged for a 1 km radius buffer around participants’ home addresses.
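A composite deprivation index of the kind described can be sketched as the first principal component of standardized neighborhood indicators. The indicator data below are synthetic, and the actual SESDep derivation may differ in standardization and sign conventions:

```python
import numpy as np

def deprivation_index(X):
    """First principal component of z-scored indicators, oriented so
    higher values mean more deprivation (assuming each input column is
    already coded so that higher = more deprived)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # SVD of the standardised matrix yields the principal components
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    pc1 = Z @ Vt[0]
    # Flip sign if needed so the index tracks the (higher-is-worse)
    # mean of the inputs rather than its negation
    if np.corrcoef(pc1, Z.mean(axis=1))[0, 1] < 0:
        pc1 = -pc1
    return pc1

# Synthetic example: 500 census tracts, 9 hypothetical indicators
rng = np.random.default_rng(3)
tracts = rng.normal(size=(500, 9))
idx = deprivation_index(tracts)
```

The sign-flip step matters because principal components are only defined up to sign; without it, an identical analysis could yield an index where higher values meant less deprivation.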
Results:
Univariable analyses (independent samples t-tests and Chi-square tests) indicated Latinos were more likely to be female and had fewer years of formal education than NL-Whites (ps<.05). Latino PWH had higher nadir CD4 than White PWH (p=.02). Separate multivariable regression models in the overall sample, controlling for demographics and HIV status, showed Latinos had significantly lower global scaled scores than Whites (b=-0.59; 95% CI=-1.13, -0.06; p=.03) and lived in more deprived neighborhoods (b=0.62; 95% CI=0.36, 0.88; p<.001). More SES deprivation was significantly associated with worse global neurocognition in an unadjusted linear regression (b=-0.55; 95% CI=-0.82, -0.28; p<.001), but similar analyses controlling for demographics and HIV status showed SESDep was not significantly related to global scaled scores (b=-0.11; 95% CI=-0.36, 0.14; p=.40). Exploratory analyses examined primary language (i.e., English vs Spanish) as a marker of Hispanic heterogeneity and its association with neurocognition and SESDep. Controlling for demographics and HIV status, both English-speaking (b=0.33; 95% CI=0.01, 0.64; p=.04) and Spanish-speaking Latinos (b=0.88; 95% CI=0.58, 1.18; p<.001) lived in significantly more deprived neighborhoods than Whites, with SESDep greater for Spanish-speakers than English-speakers (p<.001). However, only English-speaking Latinos had significantly lower neurocognition than Whites (b=-0.91; 95% CI=-1.57, -0.26; p<.01; Spanish-speakers: b=-0.27; 95% CI=-0.93, 0.38; p=.41).
Conclusions:
Among our sample of diverse older adults living with and without HIV, English-speaking Latinos showed worse neurocognition than Whites. Though SES neighborhood deprivation was worse among Latinos (particularly Spanish-speakers), it was not associated with neurocognitive scores after adjusting for demographics. Further studies investigating other neighborhood characteristics and more nuanced markers of Hispanic heterogeneity (e.g., acculturation) are warranted to understand the factors underlying aging- and HIV-related neurocognitive disparities among diverse older adults.