Standardized assessment measures can provide data to inform a diagnosis of Autism Spectrum Disorder (ASD). Most measures assessing ASD characteristics rely on some degree of behavioral response to sound (e.g., responding to name, demonstrating listening response), and are often not appropriate for use with children who are Deaf and Hard of Hearing (DHH), especially with individuals who use signed languages. Few studies have reported on the Behavioral Assessment System for Children, Third Edition (BASC-3) for DHH children, and we aim to describe BASC-3 profiles in children with ASD who are DHH.
Participants and Methods:
Participants included eight DHH patients diagnosed with ASD through interdisciplinary team evaluations by developmental-behavioral pediatricians, speech-language pathologists, and neuropsychologists with expertise in DHH child development. Mean age was 6.17 years, and 62.5% were male. Self-reported racial distribution was 75% White, 12.5% Black, and 12.5% declined to answer. The average Area Deprivation Index (a marker of socioeconomic status) was at the 32.13 percentile. As part of the evaluation, parents rated their children using the BASC-3. Languages included spoken English (75%) and American Sign Language (25%). Relevant co-occurring
neurodevelopmental/psychological diagnoses included Global Developmental Delay (n=1), Moderate Intellectual Disability (n=1), and Depression (n=1). Types of hearing loss included sensorineural (75%), conductive (12.5%), and mixed (12.5%). Three participants had different degrees of bilateral hearing loss in each ear: mild sloping-severe, moderate rising-mild (n=1); profound, moderate rising-normal level (n=1); and profound, moderate (n=1). Four participants had the same level of hearing loss in both ears: moderate-moderately severe (n=1), moderately severe-severe (n=1), severe-profound (n=1), and profound (n=1). One child had a unilateral moderate hearing loss. Technology utilized included a unilateral hearing aid (n=2), bilateral hearing aids (n=2), a unilateral cochlear implant (n=1), bilateral cochlear implants (n=2), and bimodal technology (n=1). BASC-3 scales of interest in this study were the Developmental Social Disorders scale (DSD), the Autism Probability Index (AUI), clinical scales, and adaptive scales. BASC-3 scores were standardized using General Combined norms and means were plotted.
Results:
BASC-3 mean scores on clinical scales were elevated (T > 60) on Atypicality (M=71), Hyperactivity (M=63), Withdrawal (M=63), and Attention Problems (M=65) in children with ASD who are DHH in this sample. BASC-3 mean scores on adaptive scales were below threshold (T < 40) on Social Skills (M=37), Functional Communication (M=39), and overall Adaptive Skills (M=39). DSD scores were in the at-risk range (60 < T < 70) for 2 of 8 cases and clinically significant (T > 70) for 5 of 8 cases.
The AUI was clinically significant for 2 out of the 3 cases within the age range for reporting AUI data.
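As a rough illustration of the cutoff conventions used above (clinical scale means flagged when T exceeds 60; adaptive scale means flagged when T falls below 40), the following Python sketch applies those thresholds to per-participant T-scores. The scale scores here are illustrative values, not the study data.

```python
# Hypothetical sketch: summarizing parent-rated BASC-3 T-scores for a small
# sample and flagging clinically notable scale means. Values are illustrative.
from statistics import mean

# T-scores for each of eight participants on selected scales (illustrative)
clinical_scores = {
    "Atypicality": [71, 68, 74, 70, 72, 69, 73, 71],
    "Hyperactivity": [63, 60, 65, 62, 64, 61, 66, 63],
}
adaptive_scores = {
    "Social Skills": [37, 35, 39, 36, 38, 37, 36, 38],
}

def flag_clinical(scores, cutoff=60):
    """Clinical scales are elevated when the mean T-score exceeds the cutoff."""
    return {scale: mean(vals) for scale, vals in scores.items() if mean(vals) > cutoff}

def flag_adaptive(scores, cutoff=40):
    """Adaptive scales are below threshold when the mean T-score is under the cutoff."""
    return {scale: mean(vals) for scale, vals in scores.items() if mean(vals) < cutoff}

print(flag_clinical(clinical_scores))  # {'Atypicality': 71, 'Hyperactivity': 63}
print(flag_adaptive(adaptive_scores))  # {'Social Skills': 37}
```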
Conclusions:
In this preliminary sample of DHH children with a confirmed diagnosis of ASD by comprehensive specialized interdisciplinary clinical evaluations, parent ratings on the BASC-3 were consistent with what is known about BASC-3 profiles in hearing children diagnosed with ASD. Our findings suggest it may be helpful to review the DSD, AUI, clinical scales, and adaptive skills scales profiles when assessing DHH children at risk for ASD. Further research, including a larger sample size and assessment of language differences among participants, is necessary.
Females outperform males on verbal memory tests across the lifespan. Females also exhibit greater Alzheimer’s disease (AD) pathology at preclinical stages and faster atrophy and memory decline during disease progression. Synaptic factors influence the accumulation of AD proteins and may underpin cognitive resilience against AD, though their role in sex-related cognitive and brain aging is unknown. We tested interactive effects of sex and genetic variation in SNAP-25, which encodes a presynaptic protein that is dysregulated in AD, on cognition and AD-related biomarkers in cognitively unimpaired older adults.
Participants and Methods:
Participants included a discovery cohort of 311 cognitively unimpaired older adults (age mean [range]=70 [44-100]; 56% female; education mean=17.3 years; 24% APOE-e4+), and an independent, demographically-comparable replication cohort of 82 cognitively unimpaired older adults. All participants completed neurological examination, informant interview (CDR=0), neuropsychological testing, and blood draw. Participants were genotyped for the SNAP-25 rs105132 (T→C) single-nucleotide polymorphism via Sequenom (discovery cohort) or Omni 2.5M (replication cohort). In vitro models show the C-allele is associated with increased SNAP-25 expression compared to T/T genotype. A subset of the discovery cohort completed structural MRI (n=237) and florbetapir Aβ-PET (n=97). Regression analyses across cohorts examined the interaction of sex and SNAP-25 genotype (T/T homozygotes [53% prevalence] vs. C-carriers [47% prevalence]) on cognitive z-scores (verbal memory, visual memory, executive function, language), adjusting for age, education, APOE-e4, and APOE-e4 x sex. Discovery cohort models also examined sex-dependent effects of SNAP-25 on temporal lobe volumes and Aβ-PET positivity.
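The sex-by-genotype interaction model described above can be sketched with an ordinary least squares fit; the simulated data, effect size, and variable names below are illustrative assumptions rather than the study dataset, and a full analysis would also include the education, APOE-e4, and APOE-e4 x sex covariates.

```python
# Illustrative sketch of a sex x SNAP-25 genotype interaction model fit by OLS.
# Simulated data, not the study cohort; only an age covariate is included here.
import numpy as np

rng = np.random.default_rng(0)
n = 300
sex = rng.integers(0, 2, n)          # 1 = female (assumed coding)
c_carrier = rng.integers(0, 2, n)    # 1 = rs105132 C-carrier (assumed coding)
age = rng.normal(70, 8, n)

# Simulate a female-specific benefit of C-carriage on verbal memory z-scores
memory = 0.5 * sex * c_carrier + 0.1 * sex - 0.02 * (age - 70) + rng.normal(0, 1, n)

# Design matrix: intercept, sex, genotype, sex x genotype interaction, centered age
X = np.column_stack([np.ones(n), sex, c_carrier, sex * c_carrier, age - 70])
beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
print(f"interaction coefficient estimate: {beta[3]:.2f}")  # near the simulated 0.5
```

A significant positive interaction coefficient here corresponds to the reported pattern of C-carriage benefiting verbal memory in females but not males.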
Results:
SNAP-25 T/T vs. C-carriers did not differ on demographics or APOE-e4 status across cohorts or within sexes. Sex interacted with SNAP-25 to predict verbal memory (p=.024) and language (p=.008) in the discovery cohort, with similar verbal memory differences observed in the replication cohort. In sex-stratified analyses, C-carriers exhibited better verbal memory than T/T carriers among females (d range: 0.41 to 0.64, p range: .008 to .046), but not males (d range: 0.03 to 0.12, p range: .499 to .924). In SNAP-25-stratified analyses, female verbal memory advantages were larger among C-carriers (d range: 0.74 to 0.89, p range: <.001 to .034) than T/T (d range: 0.13 to 0.36, p range: .022 to .682). Sex also interacted with SNAP-25 to predict Aβ-PET positivity (p=.046) such that female C-carriers exhibited the lowest prevalence of Aβ-PET positivity (13%) compared to other groups (23% to 35%). C-carriers exhibited larger temporal lobe volumes across sex, yet this effect only reached statistical significance among females (females: d=0.41, p=.018; males: d=0.26, p=.179). In post-hoc analyses, larger temporal lobe volumes were selectively associated with better verbal memory in female C-carriers (β=0.36, p=.026; other groups: |βs|<0.10, ps>.538).
Conclusions:
Among clinically normal older adults, we demonstrate female-specific advantages of carrying the SNAP-25 rs105132 C-allele across cognitive, neural, and molecular markers of AD. The rs105132 C-allele putatively reflects higher endogenous levels of SNAP-25. Our findings suggest a female-specific pathway of cognitive and neural resistance, whereby higher genetically-driven expression of SNAP-25 may reduce likelihood of amyloid plaque formation and support verbal memory, possibly through fortification of temporal lobe structure.
Cognitive reserve (CR) refers to how flexibly and efficiently the individual makes use of available brain resources. Early-life education, midlife social and occupational activities, and later-life cognitive and social interactions are associated with greater CR. Years of education, premorbid intellectual (IQ) functioning, linguistic ability, and occupational complexity are often used as proxies of CR. CR theory seeks to explain discrepancies between the extent of disease pathology and clinical presentation amongst individuals with dementia. In the presence of Alzheimer’s Disease (AD) pathology, higher CR is associated with slower declines in executive functioning (EF). The current study examined the correlation between CR and EF performance across various stages of dementia severity as measured by the total score on the Clinical Dementia Rating Scale (CDRS).
Participants and Methods:
The study cohort consisted of 269 individuals who had completed measures of EF and the CDRS from phase 1 of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Individuals who scored less than 2 on the CDRS were included in the MCI group (n=197), while individuals who scored 2 or higher on the CDRS were included in the dementia group (n=73). A simple linear regression was utilized to compare the MCI group to the dementia group across CR and EF performance.
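The CDRS grouping rule described above can be sketched as follows; the participant records are illustrative, not ADNI data.

```python
# Minimal sketch of the grouping rule: CDRS total score below 2 -> MCI group,
# score of 2 or higher -> dementia group. Records below are illustrative.
participants = [
    {"id": 1, "cdrs_total": 0.5},
    {"id": 2, "cdrs_total": 1.0},
    {"id": 3, "cdrs_total": 2.0},
    {"id": 4, "cdrs_total": 3.0},
]

mci = [p for p in participants if p["cdrs_total"] < 2]
dementia = [p for p in participants if p["cdrs_total"] >= 2]
print(len(mci), len(dementia))  # → 2 2
```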
Results:
There was a significant correlation between CR and EF performance in the MCI group as quantified by total CDRS score (F(200) = .353, p < .05). CR was not observed to be predictive of EF in the dementia group (F(200) = .031, p = .666).
Conclusions:
Findings are consistent with prior research suggesting CR is protective during early stages of dementia, but not in later disease stages. As prior research has shown that the expression of dementia reflects a complex interaction between genetic and lifestyle factors unique to each person, future research exploring the potentially protective role of CR amongst pre-symptomatic adults with a genetic predisposition for dementia may expand our understanding of the role of CR in dementia prevention and progression.
A fundamental challenge for people with severe mental illness (SMI) is how to deal with cognitive impairments, which are common in this population and limit daily functioning. Cognitive remediation (CR) is a psychological intervention that targets these cognitive impairments to improve everyday functioning. However, reduced neural plasticity in people with SMI might prevent newly learned cognitive skills from being sustained. Transcranial Direct Current Stimulation (tDCS) can promote this neural plasticity, which could enhance learning and result in longer-lasting improvements in cognitive and daily functioning. This study aimed to investigate the acceptability of the combination of CR and tDCS for people with severe mental illness who live in residential psychiatric facilities.
Participants and Methods:
We interviewed participants of the ongoing HEADDSET pilot trial. In this pragmatic, randomized, controlled pilot trial, participants (individuals with SMI, 18 years or older, living in psychiatric facilities) received CR in combination with concurrent active tDCS (n = 13) or sham tDCS (n = 13) twice weekly for 16 weeks (32 sessions in total). We invited participants who finished the trial’s training period (n = 16) to participate in the interviews. According to the Theoretical Framework of Acceptability (Sekhon et al., 2017), we assessed seven components of acceptability: Affective attitude, burden, intervention coherence, ethicality, opportunity costs, perceived effectiveness, and self-efficacy.
Results:
Twelve of the 16 participants participated in the interviews: seven completers (attended at least 20 of the 32 sessions; M = 22.7, range = 20-25) and five non-completers (M = 11.6, range = 9-15). The reasons for not completing the protocol were mainly unrelated to the training (i.e., prolonged illness, substance abuse, personal circumstances). Only one participant did not complete the training because of its intensity. Independent of whether participants completed the intervention, they were positive about the training. They reported that they liked the CR program CIRCuiTS, that participating in the training was not a burden and that, in their opinion, the training could help others. Moreover, all participants observed improvement in their cognitive functioning, and six individuals (three completers and three non-completers) observed improvements in their everyday life (e.g., fewer problems with doing groceries, being more organized, and being able to concentrate and read a book). Overall, the participants would recommend the training to others. Non-completers of the intervention would recommend the CR with tDCS, while completers neither recommended nor advised against the addition of tDCS. Participants who understood and could explain how the training works reported more improvements in daily life, were better at formulating their treatment goals, and stated that the treatment goals were more relevant to them compared to the participants who were unable to do so.
Conclusions:
The combined intervention of CR and tDCS was acceptable to individuals with severe mental illness, participation in the training was not a burden for either completers or non-completers, and participants reported personal benefits for their cognitive functioning and everyday life. Future studies should investigate the effectiveness of the intervention in larger randomized controlled trials.
To evaluate the feasibility, usability, and preliminary validity of a digital phenotyping protocol to capture everyday cognition and activities in vivo among older adults.
Participants and Methods:
Eight participants (M age = 69.1 ± 2.6; M education = 18.0 ± 1.4; 50% female; 88% non-Hispanic White) with normal cognition or mild cognitive impairment used an open-source smartphone application (mindLAMP) to passively and continuously capture sensor data, including global positioning system (GPS) trajectories, for a 4-week study period. Baseline neuropsychological tests and measures of depression, self-reported cognitive decline, and mobility patterns were collected as external validators for digital data. Participants downloaded mindLAMP onto their smartphones and resumed their daily routines for 4 weeks before removing mindLAMP and completing a debriefing questionnaire. A cognitive composite was derived by averaging T-scores across domains of attention, executive functioning, processing speed, memory, and language. GPS raw data were processed to generate monthly average and standard deviation mobility metrics for each participant, including time spent at home, distance travelled, radius of gyration, flight length, and circadian routine. Feasibility and usability findings are presented along with correlation coefficients > .4 between GPS metrics and external validators.
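Two of the mobility metrics named above can be sketched in a few lines; the formulas (radius of gyration as the RMS distance of points from their centroid, flight length as the distance between consecutive points) and the sample track are assumptions for illustration. Coordinates are treated as planar here; a real pipeline would use geodesic distances.

```python
# Sketch of two GPS mobility metrics: radius of gyration and mean flight
# length, computed on a planar toy track (illustrative, not study data).
import math

points = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0), (0.0, 4.0)]  # toy GPS track

def radius_of_gyration(pts):
    """RMS distance of track points from their centroid."""
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in pts) / len(pts))

def mean_flight_length(pts):
    """Mean distance between consecutive track points."""
    legs = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
    return sum(legs) / len(legs)

print(radius_of_gyration(points))   # 2.5 for this square track
print(mean_flight_length(points))   # (3 + 4 + 3) / 3 ≈ 3.33
```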
Results:
100% of enrolled participants completed the 4-week study without requesting to withdraw. Usability ratings ranged from poor to excellent. 75% of participants agreed that mindLAMP was easy to use, whereas only 1 participant enjoyed using mindLAMP. 100% of participants were satisfied with the study team’s explanation of procedures, privacy safeguards, data encryption methods, and risks/benefits, reflected in an average score of 98.8% on the comprehension of consent quiz. No participants reported feeling uncomfortable, suspicious, or paranoid due to the study application running on their smartphone. No participants endorsed new problems using their smartphone, though 75% reported charging it more frequently during the study period. On average each day, participants spent 1121 ± 227 minutes at home, travelled 38727 ± 36210 geodesic units, and had 201 ± 149 minutes of missing GPS data. Overall, greater amounts of activity (monthly average) and higher variability (monthly standard deviation) in GPS metrics were associated with better outcomes. Specifically, less time spent at home, greater distance travelled, larger radius of gyration, greater flight length, and greater variability in home time, distance travelled, radius of gyration, and flight length were associated with less depression, less self-reported cognitive decline, better cognition, and greater self-reported mobility (.40 < |r| < .69). On the other hand, greater circadian routine was associated with more self-reported cognitive decline (r = .66) and less self-reported mobility (r = -.43).
Conclusions:
Smartphone digital phenotyping is a feasible and acceptable method to capture everyday activities in older adults. Continuous collection of data from personal devices warrants caution; however, participants denied privacy concerns and expressed an overall positive experience. High frequency GPS data collection impacts battery life and should be considered among relative risks and confounds to naturalistic assessment. Patterns of behavior from passive smartphone data show promise as an unobtrusive method to identify cognitive risk and resilience in older adults. Subsequent analyses will evaluate additional sensor metrics across a larger and more heterogeneous cohort.
Let $\Omega \subset \mathbb {R}^N$ ($N\geq 3$) be a $C^2$ bounded domain and $\Sigma \subset \partial \Omega$ be a $C^2$ compact submanifold without boundary, of dimension $k$, $0\leq k \leq N-1$. We assume that $\Sigma = \{0\}$ if $k = 0$ and $\Sigma =\partial \Omega$ if $k=N-1$. Let $d_{\Sigma }(x)=\mathrm {dist}\,(x,\Sigma )$ and $L_\mu = \Delta + \mu \,d_{\Sigma }^{-2}$, where $\mu \in {\mathbb {R}}$. We study boundary value problems ($P_\pm$) $-{L_\mu} u \pm |u|^{p-1}u = 0$ in $\Omega$ and $\mathrm {tr}_{\mu,\Sigma}(u)=\nu$ on $\partial \Omega$, where $p>1$, $\nu$ is a given measure on $\partial \Omega$ and $\mathrm {tr}_{\mu,\Sigma}(u)$ denotes the boundary trace of $u$ associated to $L_\mu$. Different critical exponents for the existence of a solution to ($P_\pm$) appear according to the concentration of $\nu$. The solvability for problem ($P_+$) was proved in [3, 29] in subcritical ranges for $p$, namely for $p$ smaller than one of the critical exponents. In this paper, assuming the positivity of the first eigenvalue of $-L_\mu$, we provide conditions on $\nu$ expressed in terms of capacities for the existence of a (unique) solution to ($P_+$) in supercritical ranges for $p$, i.e. for $p$ equal to or greater than one of the critical exponents. We also establish various equivalent criteria for the existence of a solution to ($P_-$) under a smallness assumption on $\nu$.
Older adults represent 50% of surgical patients and are disproportionately at risk of poor cognitive outcomes after surgery including delirium, accelerated cognitive decline, and dementia. Delirium alone is estimated to occur in up to 50% of older adults postoperatively, while research indicates it is preventable in 30-40% of cases. Individuals with pre-existing cognitive impairments or neurodegenerative diseases are at the highest risk of such outcomes, but (1) cognitive diagnoses are grossly underrepresented in patients' medical records, and (2) routine preoperative cognitive clearance remains rare. The purpose of this presentation is to demonstrate the extent and nature of cognitive vulnerability in older adults preparing for elective surgery within a tertiary care hospital. A case series is also reviewed to illustrate varying surgical outcomes with and without consideration of preoperative cognitive risk.
Participants and Methods:
This presentation used IRB-approved, honest-broker data management to assess diagnoses and cognitive profiles of adults age 65 and older electing surgery with anesthesia between January 2018 and December 2019. Data were assessed across two phases of the Perioperative Cognitive Anesthesia Network (PeCAN) program within the University of Florida and UF Health. First, data from the preoperative anesthesia clinic were reviewed for the percentage of patients with cognitive difficulties within the patient problem list. Second, based on neuropsychological domains, the cognitive profiles of patients assessed by neuropsychologists within the preoperative anesthesia clinic were divided into primary attention, primary memory, or combined memory-attention profiles. From these patients, the presenter highlights cases to demonstrate how individuals with cognitive difficulties can be provided care by a multidisciplinary team to mitigate postoperative complications.
Results:
Of 14,794 older adults entering the tertiary care medical center for surgical procedures, 4% (n=591) of the sample had ICD cognitive or neurodegenerative codes in the record. When comprehensive neurobehavioral assessments were conducted on 1,363 presurgical patients, 70% had confirmed cognitive deficits on neuropsychological testing. These deficits included primary attention and executive deficits (12%), primary memory impairment (27%), or both attention and memory impairment (31%). Cases from these patients are reviewed and highlight how preoperative cognitive risk status can inform conservative perioperative practices including opioid-sparing analgesia, depth of anesthesia monitoring, and postoperative inpatient geriatric medicine consultation.
Conclusions:
Medical records listed cognitive diagnoses in 4% of hospital preoperative medical records, yet neuropsychological assessment of a subset of cases revealed a markedly higher rate of impairment. Patients with preoperative cognitive assessment showed cognitive symptoms consistent with known neurological disorders of aging, including Alzheimer's disease and cerebrovascular disease. Appreciation of pre-existing neurocognitive disorders can alter perioperative practices to prevent or reduce the risk of delirium and other postoperative neurocognitive changes. These data and cases highlight how neuropsychology can be involved in perioperative care and champion interventions for perioperative "rescues".
Brain and cognition vary and change markedly across the lifespan. I use magnetic resonance imaging and cognitive data to show that while general age trends can be identified in brain and cognition, there are great individual differences through life, and these are influenced by several factors, including at early life stages. Hence, in some respects, aging starts in the womb. Recognizing and understanding the impact of early relative to later stage factors on neurocognitive lifespan differences, changes and aging is a major challenge. Adequately meeting this challenge is crucial both to understand the mechanisms at work early in life, and to identify what and how residual variance may be affected by later life factors. Thus, knowledge of the timing of influences on brain and cognition along the lifespan is needed to develop realistic plans for prevention and intervention to optimize brain and cognition at different ages. I discuss how example factors such as prenatal drug exposure, birth weight, genetics, education, income, and “baseline” general cognitive ability, as well as cognitive training interventions relate to differences and/or changes in the human brain along the lifespan.
Example findings are drawn from the studies of the Center for Lifespan Changes in Brain and Cognition (LCBC), where we follow individuals ranging in age from 0 to 100 years. Our studies are in part linked to Norwegian registry data, including the Mother, Father and Child Cohort study (MoBa), the Norwegian Twin Registry, and the Medical Birth Registry. Linkage to registry data on normal variation of pre- and perinatal characteristics, as well as studies of groups with known early biomedical risk, such as prenatal drug exposure, enables investigation of the possible impact of neurodevelopmental factors on brain and cognitive function through the entire life course. I also discuss how genetically informed studies of brain and cognition sampling broader age spans may contribute to our understanding of the timing of influences. Selectivity of samples constitutes a challenge to generalizability in all human research. I discuss how research across international databases can, beyond boosting power and detecting consistency of effects, help us appreciate that there are diverse associations of possible factors of influence on different groups. This is crucial, as we need to understand to what extent various factors’ associations with brain and cognition are universal or cohort-specific, prior to mechanistic understanding. Thus, in this presentation, I will discuss how transdisciplinary, longitudinal, multi-method, and multi-cohort research can illuminate factors that may influence brain and cognition, and their potential timing, in a lifespan perspective.
Upon conclusion of this course, learners will be able to:
1. Recognize that differences in brain and cognition even at advanced age may reflect early life factors, rather than, or in addition to, differences in brain and cognitive change with age
2. Describe consistency as well as diversity of factors’ (such as SES) associations with brain and cognition across cohorts of different age and origin.
3. Evaluate differences in factors present early in life, including at birth (“different offset”) before attributing variance in brain and cognitive function to changes with age (“different slope”)
Heart rate variability (HRV) can be an indicator of the flexibility of the central and autonomic nervous systems. Heart rate variability biofeedback (HRV-BF) has been shown to validate the neuro-peripheral relationship and enhance the interaction between top-down and bottom-up processes. Few previous studies have focused on the treatment outcomes of HRV-BF in traumatic brain injury, and such studies have been mostly limited to pilot studies or case reports. The purpose of this study is to investigate the efficacy of HRV-BF for neuropsychological functioning in patients with mild traumatic brain injury (mTBI).
Participants and Methods:
Forty-one patients with mTBI were referred from the neurosurgery outpatient program and randomly assigned to a psychoeducation group or a HRV-BF intervention group. The psychoeducation group received standard medical care and one 60-minute psychoeducation session after brain injury. The HRV-BF group received standard medical care and one 60-minute session of the HRV-BF intervention weekly for 10 weeks. All participants received performance-based and self-reported neuropsychological measures of memory, executive function, mood, and information processing at week 1 of injury (pretest) and week 12 (posttest).
Results:
Participants in the HRV-BF group improved significantly after the intervention compared with the psychoeducation group on the Verbal Learning Test, Frontal Assessment Battery, Verbal Fluency Test, Paced Auditory Serial Addition Test, Trail Making Test, Dysexecutive Questionnaire, Depression Inventory, and Checklist of Post-concussion Symptoms.
Conclusions:
HRV-BF was found to be an efficacious and efficient intervention for improving neuropsychological functioning in patients with mTBI and a potential candidate for mTBI rehabilitation.
Children with attention-deficit/hyperactivity disorder (ADHD) exhibit motivational and cognitive impairments that affect daily life functioning. These impairments may reflect a deficit in action-control: the process by which voluntary actions are selected and executed based on prior reinforcement learning. It consists of two parallel opposing processes: goal-direction and habit formation. Using the outcome-devaluation paradigm, we previously showed that children with ADHD rely on reflexive habitual behavior, at the expense of goal-directed behavior, to deploy their actions. The current study investigates action-control using a contingency degradation paradigm, which involves outcome overvaluation as opposed to outcome devaluation. We hypothesize that children with ADHD will display habitual behavior, while healthy controls (HC) will use goal-directed behavior to control their actions.
Participants and Methods:
We tested 19 ADHD and 14 HC participants (ages 6-10 years) for this study. Children with ADHD were recruited from Children’s Specialized Hospital and underwent a structured clinical diagnosis. All participants were screened for ADHD and other neurologic or psychiatric conditions that could contribute to attention impairment using the SNAP-IV rating scale. Participants completed a set of the Woodcock-Johnson® IV assessments. They were tested using an outcome-overvaluation computer-based task. During learning, participants acquired stimulus-reward associations in the acquisition phase as well as the overvaluation phase. In the latter, one of the rewards was delivered with a contingency similar to the acquisition phase (valued), while the other reward was randomly accompanied by an extra reward in 10% of the trials (overvalued). After the overvaluation phase, participants were presented with two stimuli (associated with a valued and an overvalued outcome) and were asked to choose one stimulus in extinction. Choosing the overvalued stimulus at a higher rate was classified as goal-directed behavior, while choosing both stimuli at the same rate was classified as habitual behavior.
Results:
Independent-samples t-tests showed that children with ADHD scored significantly higher than HC on the following measures: ADHD_inattention, ADHD_hyperactive/impulsive, ADHD_combined, inattention/overactivity, Conner’s index, inattention domain, hyperactive/impulsive domain, and general anxiety disorder screening (all ps < .001). Results from the computer-based task showed that both groups acquired action-outcome associations during the first two phases of the task. During the extinction phase, HC, as compared to ADHD, responded at a higher rate to the stimuli that were associated with the overvalued outcome (t(31) = 2.1, p = 0.043), indicating a higher tendency to show goal-directed behavior. Further, paired-samples t-tests showed no significant difference between response rates on the valued vs. overvalued stimuli in the ADHD group (t(18) = 1.027, p = 0.318), while there was a difference trending toward significance in the HC group (t(13) = -2.00, p = 0.067). These results show that children with ADHD responded habitually, while HC responses were goal-directed.
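The classification logic described above (responding more to the stimulus paired with the overvalued outcome read as goal-directed; comparable rates on both stimuli read as habitual) can be sketched as follows; the response rates and equality tolerance are illustrative assumptions, not the task's scoring rule.

```python
# Hedged sketch of labeling extinction-phase responding as goal-directed vs.
# habitual from per-stimulus response rates. Tolerance and rates are assumed.
def classify_action_control(rate_valued, rate_overvalued, tolerance=0.05):
    """Goal-directed if responding to the overvalued stimulus clearly exceeds
    responding to the valued stimulus; otherwise habitual."""
    if rate_overvalued - rate_valued > tolerance:
        return "goal-directed"
    return "habitual"

print(classify_action_control(0.40, 0.60))  # goal-directed
print(classify_action_control(0.48, 0.50))  # habitual
```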
Conclusions:
Our results indicate that children with ADHD are less likely than HC to engage in goal-directed behavior as opposed to habitual responding. This is consistent with our previous research highlighting a deficit in action-control in ADHD.
Hispanics account for approximately 19% of the US population and are the second largest ethnic group in the United States, yet they remain underrepresented in neuropsychology research. Common recruitment barriers include language, fear/mistrust, and unfamiliarity with neuropsychology. These recruitment challenges then interfere with the development of measures normed on Spanish-speaking Hispanics. The research team for a Spanish-based neuropsychological study at a pediatric medical setting in North Texas utilized several methods to maximize recruitment of Hispanics and identify the most successful strategies. It was hypothesized that internal recruitment efforts would have the best outcome.
Participants and Methods:
Recruitment of healthy Spanish-speaking children between 6.0 and 17.11 years old began in October 2021 and continues to date. Participants have been recruited within the Dallas Fort-Worth (DFW) metroplex using internal efforts within the pediatric medical center and external efforts in the community-at-large. Internal recruitment efforts have included: 1) setting up flyers at 19 different ambulatory clinics, 2) emailing the study flyer to several internal groups, and 3) sharing information during a Hispanic workgroup meeting. Community-based efforts have included collaborating with: 1) a Spanish-immersion private elementary school (i.e., shared information with parents via email and sent flyers home with students), 2) three mental health colleagues (i.e., displayed study flyers within their clinic space and promoted the study through word-of-mouth), 3) a local city council (i.e., featured flyer in electronic newsletter), and 4) a non-profit community organization (i.e., shared information and flyer through mass-text messages, social media posts, and mass email to subscribers).
Results:
To date, 74 parent-child pairs have made one-time contact with the research team to inquire about the study, and 55 have completed a second contact consisting of an initial screener by phone (19 were lost to follow-up). Of the screened families, 58% heard of the study through the non-profit organization, 31% through the Spanish-immersion private school, and 11% through internal recruitment efforts.
Conclusions:
Although we hypothesized that internal recruitment within the medical institution would be most fruitful, our findings did not support this hypothesis. A possible explanation could be that children recruited from medical clinics may not meet criteria for participation in our study (i.e., healthy children). Another possible reason may be that flyer-based recruitment in a medical clinic is too passive or impersonal. Recruitment through community organizations known and trusted by participants was found to be the most successful method of reaching potential participants. Considering these findings, our approach to recruitment will move away from passive and indirect methods (i.e., flyers in clinics) and emphasize alliance with community-based organizations to promote trust building and collaborative relationships between researchers, community organizations, and Hispanic research participants.
The Cambridge Cognitive Examination for Down's Syndrome (CAMCOG-DS) was developed to assess cognitive functioning and dementia-related cognitive decline in people with Down's Syndrome (DS). It has been translated into different languages and is often used in international studies. Although adapted for people with intellectual disabilities (ID), many tasks involve verbal responses, and instructions are presented orally. Therefore, administration to people with severe language deficits can be challenging. The aim of this retrospective data analysis is to examine the influence of language ability and reasoning on CAMCOG-DS performance. Study 1 examined the relationship between CAMCOG-DS, picture naming, single-word comprehension, and reasoning in adults with DS. Study 2 replicates and broadens the findings in a sample of children and adults with DS.
Participants and Methods:
Study 1 included 40 adults with DS between 18 and 51 years (M = 28.6, SD = 8.4). 25 had a mild and 15 a moderate ID. CAMCOG-DS, the short form of the Boston Naming test (BNT), a test for single word comprehension from the Werdenfelser Testbatterie (WTB) and the Colored Progressive Matrices (CPM) were administered. Study 2 included 38 participants between 8 and 59 years (23 children, M = 11.4; 15 adults; M = 31.3). 3 had a borderline, 23 a mild, and 12 a moderate ID. The same tasks as in Study 1 were applied, but the CPM was replaced by its successor, the Raven's 2.
Results:
In Study 1, participants with mild ID performed better in all tasks than those with moderate ID (p < .05). Moderate relationships were found between CAMCOG-DS total score and the language tasks (r = .56 and r = .46), which remained significant when level of ID was controlled for. There was also a moderate relationship between CAMCOG-DS and reasoning (r = .46). Regression analysis showed that BNT performance predicted CAMCOG-DS performance (R2 = .31). In Study 2, those with mild ID, compared to those with moderate ID, performed better in all tasks (p < .05), however, regarding the CAMCOG-DS and language tasks, this effect was larger in adults than in children. Adults performed better than children in the CAMCOG-DS and BNT (p < .05), but not in single word comprehension or reasoning. As in Study 1, moderate to large correlations were revealed between CAMCOG-DS and language tasks and between CAMCOG-DS and reasoning (r > .52), remaining significant when age and ID level were controlled for. Regression analysis showed that both naming and reasoning but not single word comprehension or age predicted CAMCOG-DS performance (R2 = .69), however, performance was best predicted by naming (R2 = .65).
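The "remained significant when level of ID was controlled for" analyses correspond to first-order partial correlations. As an illustrative sketch of that arithmetic (the correlations involving ID level below are assumed values for demonstration, not the study's data; only r = .56 between CAMCOG-DS and naming is reported above):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# r_xy = .56 (CAMCOG-DS with naming, as reported); r_xz and r_yz with
# ID level are hypothetical values chosen only to illustrate the formula.
r_partial = partial_corr(r_xy=0.56, r_xz=0.40, r_yz=0.35)
```

If the partialed coefficient stays well above zero, as here, the association is not attributable to shared variance with the controlled variable.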
Conclusions:
Our results suggest that language ability and reasoning relate to CAMCOG-DS performance, which is best predicted by BNT picture naming. This should be considered in CAMCOG-DS interpretation, as the capabilities of patients with lesser language ability might be underestimated. Future developments of dementia assessments for people with ID should include more nonverbal tasks.
Performance validity testing (PVT) is important in neuropsychological evaluations to ensure accurate interpretation of performance. While research shows children pass PVTs with adult cut-offs, PVTs are more commonly used with adults (Lippa, 2018). The Test of Memory Malingering (TOMM), a standalone PVT, is commonly used with adults and children (DeRight & Carone, 2015). The Reliable Digit Span (RDS), an embedded PVT derived from the Digit Span subtest of the Wechsler Intelligence Scales (Wechsler Intelligence Scale for Children-4th Edition, WISC-IV; Wechsler, 2003), is less commonly used with children (DeRight & Carone, 2015). RDS cut-off scores are associated with an increased rate of false positives in children, indicating mixed results regarding the clinical utility in pediatric populations (Welsh et al., 2012). Research shows that youth with a history of concussion (HOC) may demonstrate suboptimal effort for many reasons (e.g., external incentives, boredom, pressure), thus highlighting the need to investigate the utility of PVTs in this population (Araujo et al., 2014; DeRight & Carone, 2015). The present study aimed to examine the clinical utility of RDS in detecting poor effort on the TOMM in youth athletes with a HOC.
Participants and Methods:
Participants included 174 youth athletes aged 8 to 18 (20.1% female; 42.5% people of color (POC)) who completed a baseline neuropsychological battery that included the TOMM and WISC-IV Digit Span. Of the sample, 29 youth athletes reported a HOC (13.8% female; 37.9% POC). RDS was calculated for each Digit Span administration, and sensitivity (SN) and specificity (SP) were calculated for RDS when invalid performance was operationalized by a more stringent cut-off score of <49 on TOMM Trial 1 to increase SN (Stenclik et al., 2013). A receiver operating characteristic (ROC) curve analysis determined whether RDS performance accurately predicted participants' performance on the TOMM.
Results:
The ROC curve analysis resulted in an area under the curve (AUC) of just 0.427 for RDS. A cut-off score of <7 for RDS (as suggested by Kirkwood et al. (2011)) results in 100% SN, 8.3% SP, 5% positive predictive value (PPV), and 95% negative predictive value (NPV). However, a cut-off score of <9 for RDS results in 75% SN, 15% SP, 25% PPV, and 75% NPV.
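The SN/SP/PPV/NPV figures above are simple functions of the underlying confusion-matrix counts. A minimal sketch of that arithmetic (the counts below are hypothetical, chosen only to illustrate the calculation, not reconstructed from the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # proportion of invalid performances flagged
    specificity = tn / (tn + fp)  # proportion of valid performances passed
    ppv = tp / (tp + fp)          # P(truly invalid | flagged by RDS)
    npv = tn / (tn + fn)          # P(truly valid | passed by RDS)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 4 invalid and 25 valid TOMM performances,
# with the RDS cut-off flagging all 4 invalid cases plus 23 valid ones.
sn, sp, ppv, npv = classification_metrics(tp=4, fp=23, tn=2, fn=0)
```

The pattern this illustrates mirrors the reported result: a liberal cut-off can achieve perfect sensitivity while specificity and PPV collapse.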
Conclusions:
Little research shows the utility of different PVTs predicting children’s performance on other PVTs, despite evidence that children with a HOC are vulnerable to variable or insufficient effort (Araujo et al., 2014; DeRight & Carone, 2015). In a sample of 29 youth athletes with a HOC, RDS predicted TOMM performance at rates worse than chance. While RDS has advantages as an embedded PVT, its limited ability to predict performance on a standalone PVT suggests interpreting with great caution. These findings highlight the importance of implementing multiple PVTs throughout testing to ensure accurate findings and interpretations, particularly in youth with a HOC. The small sample size is a limitation that possibly impacted the ability of RDS to predict TOMM performance. Further research is needed to understand the utility of RDS as a predictor of PVT performance in different populations. Replication of these findings with a larger sample size is needed to provide confirmatory evidence of poor predictive performance of the RDS.
There is a well-established relationship in the literature between cognitive impairment and functional disability, such that increased cognitive impairment is associated with diminished capacity to perform daily activities independently. However, there has been limited research on the relationship between cognitive impairment and daily functioning in older adults from an Indian population, or on differences between Indian and U.S. samples. The relationship may differ across these two populations due to their unique cultures. For example, India and the United States have significantly different social systems and family structures, with different emphases placed on the community as compared to the individual. Therefore, the role that older adults play or the support they receive within the family and society differs between the two countries and could significantly impact the relationship between cognitive ability and functional disability. The primary objective of this study is to further explore the similarities and differences in this relationship across these two cultural populations. We hypothesized that individuals across both samples with lower cognitive functioning would have increased disability. Furthermore, we proposed that the relationship between cognitive functioning and functional disability would be stronger in the U.S. sample as compared to the Indian sample.
Participants and Methods:
Community-dwelling older adults were sampled through local senior centers in the United States (N = 40) and by convenience sampling in India (N = 36). All participants were administered the Montreal Cognitive Assessment (MoCA) to evaluate cognitive ability. Functional status was assessed using the Activities of Daily Living section of the OARS multidimensional functional questionnaire and the World Health Organization Disability Assessment Schedule (WHODAS).
Results:
A significant association between cognitive functioning and functional disability was demonstrated in the combined sample, i.e., the MoCA was correlated with OARS (r[70] = .42, p < .001) and the WHODAS (r[59] = -.32, p = .009). However, when comparing samples, significant differences in associations between the MoCA and functional measures were noted in the Indian and U.S. samples: In the Indian sample, the MoCA was not significantly correlated with either the WHODAS (r[38] = -.28, p = .09) or the OARS (r[39] = .17, p = .31). Comparatively, in the United States, the MoCA was correlated with the OARS (r[32] = .51, p = .002) and the WHODAS (r[26] = -.40, p = .04).
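One common way to formally compare two independent correlations, such as the U.S. and Indian OARS coefficients above, is Fisher's r-to-z test. The sketch below is illustrative only: the abstract does not state which test the authors used, and the sample sizes are inferred from the reported degrees of freedom (df + 2):

```python
import math

def fisher_z_diff(r1, n1, r2, n2):
    """Two-tailed test for the difference between two independent Pearson correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher r-to-z transforms
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2))         # two-tailed p from the normal CDF
    return z, p

# U.S. OARS r = .51 (df = 32, so n ~ 34) vs. Indian OARS r = .17 (df = 39, so n ~ 41)
z, p = fisher_z_diff(0.51, 34, 0.17, 41)
```

With samples this small, a sizable numerical difference between two correlations can still fall short of conventional significance, which is one reason the small-sample limitation noted in the conclusions matters.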
Conclusions:
These results, in keeping with most previous studies done in the U.S., point to a robust relationship between cognition and functional disability in the U.S. sample. However, this association is substantially diminished in the Indian sample. One possible reason may be that the greater support available to older Indians mitigates the negative effect of cognitive impairment on adaptive function. A major limitation of this study is the small sample size. Additionally, due to the vast cultural differences that exist across India, the sample, collected from an urban, well-educated population, will likely not generalize to the larger country. Future research from larger and more diverse samples across the country will likely provide more valuable insight.
Early childhood is recognized as a critical window of rapid cognitive development. Unfortunately, many risk factors for atypical cognitive development may occur during this period, including genetic syndromes, congenital neuroanatomical malformations, pre- or perinatal injury, and neurological and medical disorders. The impact of these risk factors on cognitive functioning may not always map onto patterns typically observed in adults. Limited literature exists on the presentation of cognitive profiles within clinical populations in the preschool developmental period. The present study aimed to evaluate whether discrete a priori cognitive profiles consistent with common neurobehavioral syndromes emerge and are distinguishable on testing in early childhood in a mixed clinical sample. We also aimed to determine if there was a consistent association between known medical risk factors and resultant cognitive profiles.
Participants and Methods:
Participants included 163 children aged 1-5 years (M=48.5 months, SD=12.8 months) referred for neuropsychological evaluation. The sample was predominantly male (67.5%) and White (72.9%), followed by other/mixed race (11.6%), Black (9.7%), and Latino/Hispanic (5.8%). Cognitive abilities assessed included broad intellectual abilities, verbal abilities, nonverbal abilities, attention, and executive functioning. Continuous test scores were transformed into categorical ranges of performance, with scores classified as “above average,” “average,” “below average,” or “extremely low” to allow for profile classification. Theoretical clinical profiles consistent with common neurobehavioral syndromes were determined a priori by consensus among three authors (JK, AH, LM). Chi square tests of independence were conducted to compare membership across neurobehavioral diagnostic groups, clinical profile groups, and medical groups.
Results:
Based on cognitive data, 55.2% of the sample (n=90) was classified as Global Developmental Delay/Intellectual Disability (GDD/ID), 19.6% (n=32) as Language Disorder, and 18.4% (n=30) as Typical Cognitive Development. 4.3% (n=7) of the sample was classified as Attention-Deficit/Hyperactivity Disorder (ADHD), and 2.5% (n=4) as Nondominant Hemisphere Dysfunction. As hypothesized, cognitive profile group membership was consistent with diagnostic impressions, as actual clinical diagnoses of Language Disorder, ADHD, GDD/ID, or a classification of typical cognitive development were significantly associated with theorized cognitive profile based on test performance alone (χ2(1,20) = 147.29, p < .001). Cognitive profile group membership was also significantly associated with referral source (χ2(1,28) = 62.88, p < .001) and the presence of a neurological disorder (χ2(1,4) = 14.64, p = .006).
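The chi-square tests of independence reported above can be computed directly from a contingency table of observed counts. A minimal standard-library sketch (the 2x2 table below is hypothetical, for illustration only, since the study's cross-tabulations are not reproduced here):

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    df = (len(row_totals) - 1) * (len(col_totals) - 1)
    return chi2, df

# Hypothetical 2x2 cross-tabulation: profile membership (rows) by diagnosis (columns)
chi2, df = chi_square([[10, 20], [30, 40]])
```

Each cell contributes (observed − expected)² / expected, so large statistics like those reported above reflect cells far from what independence would predict.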
Conclusions:
Findings support the presence of specific theorized cognitive profiles in preschoolers in a mixed clinical sample. Specifically, GDD/ID, Language Disorder, and typical cognitive development are discrete and consistently distinguishable cognitive profiles in this age range. Early life neurological risk factors are also significantly related to cognitive profile membership, suggesting that these factors may be useful in predicting cognitive development even in very young children. Future work is needed to examine the consistency of these profiles over time and their predictive value in estimating subsequent development, and the possibility of discriminating unique cognitive profiles for specific medical conditions in preschoolers.
Dementia worry (DW) is anxious rumination about personal risk for dementia. Personal experience with dementia may affect DW, such that individuals with personal experience with dementia may have higher worry about developing dementia themselves. Further, dementia knowledge (DK), including what may increase one’s dementia risk as well as treatment options for dementias, may be influenced by one’s dementia experience. Prior studies have suggested that personal experience alters the relationship of age to DW; no prior studies have examined this for DK. In the present study, we examined whether DW and/or DK were differentially related to age in older adults.
Participants and Methods:
Adults (≥ 50 years old; N=252) in Ohio and Louisiana completed an online survey. 94 participants reported no personal dementia experiences, and 158 participants endorsed having a biological relative with dementia. The sample ranged in age from 23 to 92 (M=65, SD=9.3), with 96% identifying as White and 76% holding advanced degrees. DW was measured with the Dementia Worry Scale. Dementia knowledge was measured with true or false questions about causes and treatments for dementia.
Results:
Groups did not differ in age (p=.73), education (p=.50), or perceived SES (p=.28), but did differ in gender (p=.06). The experience group had higher dementia knowledge (p=.02). In those with biological dementia experience, lower age was related to higher dementia worry (r=-.24, p=.003) and greater dementia knowledge (r=-.18, p=.03). However, in those with no experience, age was not related to either dementia worry (r=.04) or to dementia knowledge (r=.16). Dementia worry did not relate to dementia knowledge in either group (no experience r=.03, experience r=.13).
Conclusions:
Findings suggest that younger individuals who have personal experience with dementia are highly worried about personal risk for dementia, despite having higher knowledge of dementia. Further, these results demonstrate that dementia knowledge is not related to dementia worry in older individuals with or without biological dementia experience. Findings may be important for informing dementia prevention education efforts.
The presence of cognitive impairment corresponds with declines in adaptive functioning (Cahn-Weiner, Ready, & Malloy, 2003). Although memory loss is often highlighted as a key deficit in neurodegenerative diseases (Arvanitakis et al., 2018), research indicates that processing speed may be equally important when predicting functional outcomes in atypical cognitive decline (Roye et al., 2022). Additionally, the development of performance-based measures of adaptive functioning offers a quantifiable depiction of functional deficits within a clinical setting. This study investigated the degree to which processing speed explains the relationship between immediate/delayed memory and adaptive functioning in patients diagnosed with mild and major neurocognitive disorders using an objective measure of adaptive functioning.
Participants and Methods:
Participants (N = 115) were selected from a clinical database of neuropsychological evaluations. Included participants were ages 65+ (M = 74.7, SD = 5.15), completed all relevant study measures, and were diagnosed with Mild Neurocognitive Disorder (NCD; N = 69) or Major NCD (N = 46). The sample was majority White (87.8%) and majority women (53.0%). The Texas Functional Living Scale was used as a performance-based measure of adaptive functioning. The Coding subtest from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS-CD) was used to measure information processing speed. Composite memory measures for Immediate Recall and Delayed Recall were created from subtests of the RBANS (List Learning, Story Memory, and Figure Recall) and the Wechsler Memory Scale-IV (Logical Memory and Visual Reproduction). Multiple regressions were conducted to evaluate the importance of memory and information processing speed in understanding adaptive functioning. Age and years of education were added as covariates in regression analyses.
Results:
Significant correlations (p < .001) were found between adaptive functioning and processing speed (PS; r = .52), immediate memory (IM; r = .43), and delayed memory (DM; r = .32). In a regression model with IM and DM predicting daily functioning, only IM significantly explained daily functioning (rsp = .24, p = .009). A multiple regression revealed daily functioning was significantly and uniquely associated with IM (rsp = .28, p < .001) and PS (rsp = .41, p < .001). This was qualified by a significant interaction effect (rsp = -.29, p = .001), revealing that IM was only associated with adaptive functioning at PS scores lower than the RBANS normative 20th percentile.
Conclusions:
Results suggest that processing speed may be a more sensitive predictor of functional decline than memory among older adults with cognitive disorders. These findings support further investigation into the clinical utility of processing speed tests for predicting functional decline in older adults.
Suicide risk among individuals with psychosis is elevated compared to the general population (e.g., higher rates of suicide attempts [SA] and completions, more severe lethality of means). Importantly, suicidal ideation (SI) seems to be more predictive of near-term and lifetime SAs in people with psychosis than in the general population. Yet, many randomized controlled trials in psychosis have excluded individuals with suicidality. Additionally, research suggests better cognitive and functional abilities are associated with greater suicide risk in psychotic disorders, which is dissimilar to the general population, but studies examining the link between cognition and suicidality are scarce. Because neuropsychological abilities can affect how individuals are able to attend to their environment, solve problems, and inhibit behaviors, further work is needed to consider how they may contribute to suicide risk in people with psychotic disorders. We sought to examine associations between neuropsychological performance and current SI and SA history in a large sample of individuals with psychosis.
Participants and Methods:
176 participants with diagnoses of schizophrenia, schizoaffective disorder, and bipolar disorder with psychotic features completed clinical interviews, a neuropsychological assessment (MATRICS Consensus Cognitive Battery subtests), and psychiatric symptom measures (Positive and Negative Syndrome Scale [PANSS]; Montgomery-Asberg Depression Rating Scale [MADRS]). First, participants were divided into groups based on their endorsement of SI in the past month on the Columbia Suicide Severity Rating Scale (C-SSRS): those with current SI (SI+; n=86) and without current SI (SI-; n=90). We also examined lifetime history of SA (n=114) vs. absence of lifetime SA (n=62). Separate t-tests, chi-square tests, and logistic regressions were used to examine associations between neuropsychological performance and the two dichotomous outcome variables (current SI; history of SA).
Results:
The SI groups did not differ on diagnosis, demographics (e.g., age, gender, race, ethnicity, years of education, premorbid functioning), or on positive and negative symptoms. The SI+ group reported more severe depressive symptoms (t(169)= -5.90, p<.001) and had significantly worse performance on working memory tests than the SI- group (t(173)=2.28, p=.024). Logistic regression revealed that working memory performance uniquely predicted current SI+ group membership above and beyond depressive symptoms (B= -.040; OR= .96; 95% CI [.93, .99]; p= .034). The SA groups did not significantly differ on demographic variables or on positive/negative symptoms, but those with a history of SA had more severe depressive symptoms (t(169)= -2.80, p=.006) and worse performance on tests of working memory (t(173)=2.16, p=.033) and processing speed (t(166)=2.28, p=.024) than did those without a history of SA. Logistic regression demonstrated that after controlling for depressive symptom severity, working memory and processing speed did not predict unique variance in SA history (p=.25).
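The reported logistic-regression coefficient and odds ratio for working memory are linked by OR = exp(B); a quick check of the reported values:

```python
import math

B = -0.040        # reported logistic-regression coefficient for working memory
OR = math.exp(B)  # odds ratio, which rounds to the reported OR = .96
# Interpretation: each additional point of working memory performance is
# associated with roughly a 4% reduction in the odds of current SI,
# holding depressive symptom severity constant.
```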
Conclusions:
Worse working memory performance was associated with SI in the past month in individuals with psychotic disorders. Although our finding is consistent with literature in other psychiatric populations, it conflicts with existing psychosis literature. Thus, a more nuanced examination of how cognition relates to SI/SA in psychosis is warranted to identify and/or develop optimal interventions.
Early identification of individuals at risk for dementia provides an opportunity for risk reduction strategies. Many older adults (30-60%) report specific subjective cognitive complaints, which have also been shown to increase risk for dementia. The purpose of this study is to identify whether there are particular types of complaints that are associated with future: 1) progression from a clinical diagnosis of normal to impairment (either Mild Cognitive Impairment or dementia) and 2) longitudinal cognitive decline.
Participants and Methods:
415 cognitively normal older adults were monitored annually for an average of 5 years. Subjective cognitive complaints were measured using the Everyday Cognition Scales (ECog) across multiple cognitive domains (memory, language, visuospatial abilities, planning, organization and divided attention). Cox proportional hazards models were used to assess associations between self-reported ECog items at baseline and progression to impairment. A total of 114 individuals progressed to impairment over an average of 4.9 years (SD=3.4 years, range=0.8-13.8). A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. Mixed effects models with random intercepts and slopes were used to assess associations between baseline ECog items and change in episodic memory or executive function on the Spanish and English Neuropsychological Assessment Scales. Time in years since baseline, the ECog items, and the interaction were key terms of interest in the models. Separate models for both the progression analyses and mixed effects models were fit for each ECog item that included age at the baseline visit, gender, and years of education as covariates.
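The mixed-effects specification described above (random intercepts and slopes, with the time × ECog interaction as the key term and age, gender, and education as covariates) can be written generically as:

```latex
y_{ij} = \beta_0 + \beta_1\,\text{time}_{ij} + \beta_2\,\text{ECog}_j
       + \beta_3\,(\text{time}_{ij} \times \text{ECog}_j)
       + \beta_4\,\text{age}_j + \beta_5\,\text{gender}_j + \beta_6\,\text{educ}_j
       + u_{0j} + u_{1j}\,\text{time}_{ij} + \varepsilon_{ij}
```

where u_0j and u_1j are the random intercept and slope for person j. The interaction coefficient β_3 is what tests whether baseline complaints predict the rate of cognitive change, which is why it is described as the key term of interest.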
Results:
More complaints on five of the eight memory items, three of the nine language items, one of the seven visuospatial items, two of the five planning items, and one of the six organization items were associated with progression to impairment (HR = 1.25 to 1.59, ps = 0.003 to 0.03). No items from the divided attention domain were significantly associated with progression to impairment. In individuals reporting no difficulty on ECog items at the baseline visit, there was no significant change over time in episodic memory (p > 0.4). More complaints on seven of the eight memory items, two of the nine language items, and three of the seven visuospatial items were associated with more decline in episodic memory (ps = 0.003 to 0.04). No items from the planning, organization, or divided attention domains were significantly associated with episodic memory decline. Among those reporting no difficulty on ECog items at the baseline visit, there was slight decline in executive function (ps < 0.001 to 0.06). More complaints on three of the eight memory items and three of the nine language items were associated with decline in executive function (ps = 0.002 to 0.047). No items from the visuospatial, planning, organization, or divided attention domains were significantly associated with decline in executive function.
Conclusions:
These findings suggest that, among cognitively normal older adults at baseline, specific complaints across several cognitive domains are associated with progression to impairment. Complaints in the domains of memory and language are associated with decline in both episodic memory and executive function.
Previous research has found that subjective cognitive decline corresponds with assessed memory impairment and could even be predictive of neurocognitive impairment. The purpose of this study was to investigate whether a single self-report item of subjective cognitive decline corresponds with the results of a performance-based measure of episodic memory.
Participants and Methods:
Older adults (n = 100; ages 60-90) were given the single-item measure of subjective cognitive decline developed by Verfaille et al. (2018), along with the California Verbal Learning Test-Second Edition (CVLT-II).
Results:
Those who endorsed subjective cognitive decline (n = 68) had lower scores on CVLT-II long delay free recall than those who did not endorse such a decline (n = 32). Additionally, a higher proportion of older adults with a neurocognitive diagnosis believed their memory was becoming worse compared to those without a diagnosis.
Conclusions:
While a single item of subjective cognitive decline should not be substituted for a comprehensive evaluation of memory, the results suggest that it may have utility as a screening item.