2-Back Performance Does Not Differ Between Cognitive Training Groups in Older Adults Without Dementia
- Nicole D Evangelista, Jessica N Kraft, Hanna K Hausman, Andrew O’Shea, Alejandro Albizu, Emanuel M Boutzoukas, Cheshire Hardcastle, Emily J Van Etten, Pradyumna K Bharadwaj, Hyun Song, Samantha G Smith, Steven DeKosky, Georg A Hishaw, Samuel Wu, Michael Marsiske, Ronald Cohen, Gene E Alexander, Eric Porges, Adam J Woods
- Journal: Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press: 21 December 2023, pp. 360-361
Objective:
Cognitive training is a non-pharmacological intervention aimed at improving cognitive function in one or more domains. Although the underlying mechanisms of cognitive training and transfer effects are not well characterized, cognitive training is thought to facilitate neural plasticity and thereby enhance cognitive performance. Indeed, the Scaffolding Theory of Aging and Cognition (STAC) proposes that cognitive training may enhance the ability to engage in compensatory scaffolding to meet task demands and maintain cognitive performance. We therefore evaluated the effects of cognitive training on working memory performance in older adults without dementia. This study will help begin to elucidate non-pharmacological intervention effects on compensatory scaffolding in older adults.
Participants and Methods: 48 participants were recruited for a Phase III randomized clinical trial (Augmenting Cognitive Training in Older Adults [ACT]; NIH R01AG054077) conducted at the University of Florida and University of Arizona. Participants across sites were randomly assigned to complete cognitive training (n=25) or an education training control condition (n=23). Cognitive training and the education training control condition were each completed during 60 sessions over 12 weeks, for 40 hours total. The education training control condition involved viewing educational videos produced by the National Geographic Channel. Cognitive training was completed using the Posit Science Brain HQ training program, which included 8 cognitive training paradigms targeting attention/processing speed and working memory. All participants also completed demographic questionnaires, cognitive testing, and an fMRI 2-back task at baseline and at 12 weeks following cognitive training.
Results: Repeated measures analysis of covariance (ANCOVA), adjusted for training adherence, transcranial direct current stimulation (tDCS) condition, age, sex, years of education, and Wechsler Test of Adult Reading (WTAR) raw score, revealed a significant 2-back by training group interaction (F[1,40]=6.201, p=.017, η²=.134). Examination of simple main effects revealed baseline differences in 2-back performance (F[1,40]=.568, p=.455, η²=.014). After controlling for baseline performance, training group differences in 2-back performance were no longer statistically significant (F[1,40]=1.382, p=.247, η²=.034).
Conclusions: After adjusting for baseline performance differences, there were no significant training group differences in 2-back performance, suggesting that randomization was not sufficient to ensure an adequate distribution of participants across groups. Results may indicate that cognitive training alone is not sufficient for significant improvement in working memory performance on a near transfer task. Additional improvement may occur in the next phase of this clinical trial, in which tDCS may augment the effects of cognitive training and enhance compensatory scaffolding even within this high-performing cohort. Limitations of the study include a highly educated sample with high literacy levels and a small sample size that was not powered for transfer effects analysis. Future analyses will evaluate the combined effects of cognitive training and tDCS on n-back performance in a larger sample of older adults without dementia.
Higher White Matter Hyperintensity Load Adversely Affects Pre-Post Proximal Cognitive Training Performance in Healthy Older Adults
- Emanuel M Boutzoukas, Andrew O’Shea, Jessica N Kraft, Cheshire Hardcastle, Nicole D Evangelista, Hanna K Hausman, Alejandro Albizu, Emily J Van Etten, Pradyumna K Bharadwaj, Samantha G Smith, Hyun Song, Eric C Porges, Alex Hishaw, Steven T DeKosky, Samuel S Wu, Michael Marsiske, Gene E Alexander, Ronald Cohen, Adam J Woods
- Journal: Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press: 21 December 2023, pp. 671-672
Objective:
Cognitive training has shown promise for improving cognition in older adults. Aging involves a variety of neuroanatomical changes that may affect response to cognitive training. White matter hyperintensities (WMH) are one common age-related brain change, evident on T2-weighted and Fluid Attenuated Inversion Recovery (FLAIR) MRI. WMH are associated with older age, are suggestive of cerebral small vessel disease, and reflect decreased white matter integrity. Higher WMH load is associated with a reduced threshold for clinical expression of cognitive impairment and dementia. The effects of WMH on response to cognitive training interventions are relatively unknown. The current study assessed (a) proximal cognitive training performance following a 3-month randomized control trial and (b) the contribution of baseline whole-brain WMH load, defined as total lesion volume (TLV), to pre-post proximal training change.
Participants and Methods: Sixty-two healthy older adults ages 65-84 completed either adaptive cognitive training (CT; n=31) or educational training control (ET; n=31) interventions. Participants assigned to CT completed 20 hours of attention/processing speed training and 20 hours of working memory training delivered through the commercially available Posit Science BrainHQ. ET participants completed 40 hours of educational videos. All participants also underwent sham or active transcranial direct current stimulation (tDCS) as an adjunctive intervention, although this was not a variable of interest in the current study. Multimodal MRI scans were acquired during the baseline visit. T1-weighted and T2-weighted FLAIR images were processed using the Lesion Segmentation Tool (LST) for SPM12. The Lesion Prediction Algorithm of LST automatically segmented brain tissue and calculated lesion maps. A lesion threshold of 0.30 was applied to calculate TLV. A log transformation was applied to TLV to normalize the distribution of WMH. Repeated-measures analysis of covariance (RM-ANCOVA) assessed pre/post change in proximal composite (Total Training Composite) and sub-composite (Processing Speed Training Composite, Working Memory Training Composite) measures in the CT group compared to their ET counterparts, controlling for age, sex, years of education, and tDCS group. Linear regression assessed the effect of TLV on post-intervention proximal composite and sub-composite measures, controlling for baseline performance, intervention assignment, age, sex, years of education, multisite scanner differences, estimated total intracranial volume, and binarized cardiovascular disease risk.
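The TLV computation described above (threshold the lesion probability map at 0.30, sum the surviving voxels, log-transform the volume) can be sketched as follows. This is an illustrative standard-library sketch, not the LST pipeline itself; the function names are hypothetical and the natural-log base is an assumption, since the abstract does not specify it.

```python
import math

def total_lesion_volume(lesion_probabilities, voxel_volume_ml, threshold=0.30):
    """Total lesion volume (TLV): count voxels whose lesion probability
    exceeds the threshold, then scale by the per-voxel volume."""
    n_lesion_voxels = sum(1 for p in lesion_probabilities if p > threshold)
    return n_lesion_voxels * voxel_volume_ml

def log_tlv(tlv_ml):
    """Log-transform TLV to normalize the skewed WMH distribution
    (natural log assumed here)."""
    return math.log(tlv_ml)

# Three of five voxels exceed the 0.30 threshold in this toy map.
tlv = total_lesion_volume([0.05, 0.40, 0.95, 0.10, 0.75], voxel_volume_ml=0.001)
```

In practice LST operates on full 3D probability maps; the flat list here only illustrates the thresholding and scaling arithmetic.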
Results: RM-ANCOVA revealed two-way group × time interactions such that those assigned to cognitive training demonstrated greater improvement on proximal composite (Total Training Composite) and sub-composite (Processing Speed Training Composite, Working Memory Training Composite) measures compared to their ET counterparts. Multiple linear regression showed that higher baseline TLV was associated with lower pre-post change on the Processing Speed Training sub-composite (β = -0.19, p = 0.04) but not on the other composite measures.
Conclusions: These findings demonstrate the utility of cognitive training for improving post-intervention proximal performance in older adults. Additionally, pre-post change in processing speed training appears to be more sensitive to white matter hyperintensity load than change in working memory training. These data suggest that TLV may be an important factor to consider when planning processing speed-based cognitive training interventions for remediation of cognitive decline in older adults.
Task-Based Functional Connectivity and Network Segregation of the Useful Field of View (UFOV) fMRI Task
- Jessica N Kraft, Hanna K Hausman, Cheshire Hardcastle, Alejandro Albizu, Andrew O’Shea, Nicole D Evangelista, Emanuel M Boutzoukas, Emily J Van Etten, Pradyumna K Bharadwaj, Hyun Song, Samantha G Smith, Steven T DeKosky, Georg A Hishaw, Samuel Wu, Michael Marsiske, Ronald Cohen, Eric Porges, Adam J Woods
- Journal: Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press: 21 December 2023, pp. 606-607
Objective:
Interventions using a cognitive training paradigm called the Useful Field of View (UFOV) task have been shown to be efficacious in slowing cognitive decline. However, no studies have examined the engagement of functional networks during UFOV task completion. The current study aimed to (a) assess whether regions activated during the UFOV fMRI task were functionally connected and related to task performance (henceforth called the UFOV network), (b) compare connectivity of the UFOV network to 7 resting-state functional connectivity networks in predicting proximal (UFOV) and near-transfer (Double Decision) performance, and (c) explore the impact of network segregation between higher-order networks on UFOV performance.
Participants and Methods: 336 healthy older adults (mean age=71.6) completed the UFOV fMRI task in a Siemens 3T scanner. UFOV fMRI accuracy was calculated as the number of correct responses divided by 56 total trials. Double Decision performance was calculated as the average presentation time of correct responses in log ms, with lower scores equating to better processing speed. Structural and functional MRI images were processed using the default pre-processing pipeline within the CONN toolbox. The Artifact Rejection Toolbox was set at a motion threshold of 0.9mm, and participants were excluded if more than 50% of volumes were flagged as outliers. To assess connectivity of regions associated with the UFOV task, we created 10 spherical regions of interest (ROIs) a priori using the WFU PickAtlas in SPM12. These included the bilateral pars triangularis, supplementary motor area, and inferior temporal gyri, as well as the left pars opercularis, left middle occipital gyrus, right precentral gyrus, and right superior parietal lobule. We used a weighted ROI-to-ROI connectivity analysis to model task-based within-network functional connectivity of the UFOV network and its relationship to UFOV accuracy. We then used weighted ROI-to-ROI connectivity analysis to compare the efficacy of the UFOV network versus 7 resting-state networks in predicting UFOV fMRI task performance and Double Decision performance. Finally, we calculated network segregation among higher-order resting-state networks to assess its relationship with UFOV accuracy. All functional connectivity analyses were corrected using a false discovery rate (FDR) threshold of p<0.05.
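The two behavioral scores defined above can be sketched as simple helper functions. The function names are illustrative, not from the study's code, and log base 10 is an assumption: the abstract only says "log ms".

```python
import math

def ufov_accuracy(n_correct, n_trials=56):
    """UFOV fMRI accuracy: correct responses divided by the 56 total trials."""
    return n_correct / n_trials

def double_decision_score(correct_rts_ms):
    """Average presentation time of correct responses in log ms;
    lower scores indicate better (faster) processing speed."""
    return sum(math.log10(t) for t in correct_rts_ms) / len(correct_rts_ms)

print(ufov_accuracy(42))                       # 0.75
print(double_decision_score([100.0, 1000.0]))  # 2.5
```

Averaging in log space, as here, down-weights a few very slow trials relative to a plain mean of raw milliseconds.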
Results: ROI-to-ROI analysis showed significant within-network functional connectivity among the 10 a priori ROIs (the UFOV network) during task completion (all pFDR<.05). After controlling for covariates, greater within-network connectivity of the UFOV network was associated with better UFOV fMRI performance (pFDR=.008). Regarding the 7 resting-state networks, greater within-network connectivity of the CON (pFDR<.001) and FPCN (pFDR=.014) was associated with higher accuracy on the UFOV fMRI task. Furthermore, greater within-network connectivity of only the UFOV network was associated with performance on the Double Decision task (pFDR=.034). Finally, we assessed the relationship between higher-order network segregation and UFOV accuracy. After controlling for covariates, no significant relationships between network segregation and UFOV performance remained (all p-uncorrected>0.05).
Conclusions: To date, this is the first study to assess task-based functional connectivity during completion of the UFOV task. We observed that coherence within the 10 a priori ROIs significantly predicted UFOV performance. Additionally, enhanced within-network connectivity of the UFOV network predicted better performance on the Double Decision task, while conventional resting-state networks did not. These findings provide potential targets to optimize the efficacy of UFOV interventions.
BVMT-R Learning Ratio Moderates Cognitive Training Gains in Useful Field of View Task in Healthy Older Adults
- Cheshire Hardcastle, Jessica N. Kraft, Hanna K. Hausman, Andrew O’Shea, Alejandro Albizu, Nicole D. Evangelista, Emanuel Boutzoukas, Emily J. Van Etten, Pradyumna K. Bharadwaj, Hyun Song, Samantha G. Smith, Eric Porges, Steven DeKosky, Georg A. Hishaw, Samuel Wu, Michael Marsiske, Ronald Cohen, Gene E. Alexander, Adam J. Woods
- Journal: Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press: 21 December 2023, pp. 180-181
Objective:
Cognitive training using a visual speed-of-processing task, called the Useful Field of View (UFOV) task, reduced dementia risk and reduced decline in activities of daily living at a 10-year follow-up in older adults. However, there is variability in the level of cognitive gains after cognitive training across studies. One potential explanation for this variability could be moderating factors. Prior studies suggest that variables moderating cognitive training gains share features with the training task. Learning trials of the Hopkins Verbal Learning Test-Revised (HVLT-R) and Brief Visuospatial Memory Test-Revised (BVMT-R) recruit similar cognitive abilities and have overlapping neural correlates with the UFOV task and speed-of-processing/working memory tasks, and therefore could serve as potential moderators. Exploring moderating factors of cognitive training gains may boost the efficacy of interventions, improve rigor in the cognitive training literature, and eventually help provide tailored treatment recommendations. This study explored the association between HVLT-R and BVMT-R learning and the UFOV task, and assessed the moderation of HVLT-R and BVMT-R learning on UFOV improvement after a 3-month speed-of-processing/attention and working memory cognitive training intervention in cognitively healthy older adults.
Participants and Methods: 75 healthy older adults (M age = 71.11, SD = 4.61) were recruited as part of a larger clinical trial through the Universities of Florida and Arizona. Participants were randomized into a cognitive training (n=36) or education control (n=39) group and underwent a 40-hour, 12-week intervention. The cognitive training intervention consisted of practicing 4 attention/speed-of-processing tasks (including the UFOV task) and 4 working memory tasks. The education control intervention consisted of watching 40-minute educational videos. The HVLT-R and BVMT-R were administered at the pre-intervention timepoint as part of a larger neurocognitive battery. The learning ratio was calculated as: (trial 3 total - trial 1 total) / (12 - trial 1 total). UFOV performance was measured at pre- and post-intervention time points via the Posit Brain HQ Double Decision Assessment. Multiple linear regressions predicted baseline Double Decision performance from HVLT-R and BVMT-R learning ratios, controlling for study site, age, sex, and education. A repeated measures moderation analysis assessed the moderation of HVLT-R and BVMT-R learning ratio on Double Decision change from pre- to post-intervention for cognitive training and education control groups.
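The learning-ratio formula above scales the raw gain from trial 1 to trial 3 by the room left to improve after trial 1 (12 is the per-trial maximum on both tests). A minimal sketch, with a hypothetical function name:

```python
def learning_ratio(trial1_total, trial3_total, max_score=12):
    """Gain from trial 1 to trial 3, scaled by the maximum possible gain.
    Undefined (division by zero) when trial 1 is already at ceiling."""
    return (trial3_total - trial1_total) / (max_score - trial1_total)

print(learning_ratio(4, 10))  # 0.75: gained 6 of the 8 available points
```

Scaling by available headroom means a gain from 4 to 10 counts the same as a gain from 8 to 11 (both 0.75), which is the point of using a ratio rather than a raw difference.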
Results: Baseline Double Decision performance was significantly associated with BVMT-R learning ratio (β=-.303, p=.008), but not HVLT-R learning ratio (β=-.142, p=.238). BVMT-R learning ratio moderated gains in Double Decision performance (p<.01); for each unit increase in BVMT-R learning ratio, there was a .6173 unit decrease in training gains. The HVLT-R learning ratio did not moderate gains in Double Decision performance (p>.05). There were no significant moderations in the education control group.
Conclusions: Better visuospatial learning was associated with faster Double Decision performance at baseline. Those with poorer visuospatial learning improved most on the Double Decision task after training, suggesting that healthy older adults who perform below expectations may show the greatest training gains. Future cognitive training research studying visual speed-of-processing interventions should account for differing levels of visuospatial learning at baseline, as this could impact the magnitude of training outcomes.
Adjunctive Transcranial Direct Current Stimulation and Cognitive Training Alters Default Mode and Frontoparietal Control Network Connectivity in Older Adults
- Hanna K Hausman, Jessica N Kraft, Cheshire Hardcastle, Nicole D Evangelista, Emanuel M Boutzoukas, Andrew O’Shea, Alejandro Albizu, Emily J Van Etten, Pradyumna K Bharadwaj, Hyun Song, Samantha G Smith, Eric S Porges, Georg A Hishaw, Samuel Wu, Steven DeKosky, Gene E Alexander, Michael Marsiske, Ronald A Cohen, Adam J Woods
- Journal: Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press: 21 December 2023, pp. 675-676
Objective:
Aging is associated with disruptions in functional connectivity within the default mode (DMN), frontoparietal control (FPCN), and cingulo-opercular (CON) resting-state networks. Greater within-network connectivity predicts better cognitive performance in older adults. Therefore, strengthening network connectivity, through targeted intervention strategies, may help prevent age-related cognitive decline or progression to dementia. Small studies have demonstrated synergistic effects of combining transcranial direct current stimulation (tDCS) and cognitive training (CT) on strengthening network connectivity; however, this association has yet to be rigorously tested on a large scale. The current study leverages longitudinal data from the first-ever Phase III clinical trial for tDCS to examine the efficacy of an adjunctive tDCS and CT intervention on modulating network connectivity in older adults.
Participants and Methods: This sample included 209 older adults (mean age = 71.6) from the Augmenting Cognitive Training in Older Adults multisite trial. Participants completed 40 hours of CT over 12 weeks, which included 8 attention, processing speed, and working memory tasks. Participants were randomized into active or sham stimulation groups, and tDCS was administered during CT daily for two weeks, then weekly for 10 weeks. For both stimulation groups, two electrodes in saline-soaked 5×7 cm² sponges were placed at F3 (cathode) and F4 (anode) using the 10-20 measurement system. The active group received 2mA of current for 20 minutes. The sham group received 2mA for 30 seconds, then no current for the remaining 20 minutes.
Participants underwent resting-state fMRI at baseline and post-intervention. The CONN toolbox was used to preprocess imaging data and conduct region-of-interest (ROI-to-ROI) connectivity analyses. The Artifact Detection Toolbox, using intermediate settings, identified outlier volumes. Two participants were excluded for having greater than 50% of volumes flagged as outliers. ROI-to-ROI analyses modeled the interaction between tDCS group (active versus sham) and occasion (baseline connectivity versus post-intervention connectivity) for the DMN, FPCN, and CON, controlling for age, sex, education, site, and adherence.
Results: Compared to sham, the active group demonstrated ROI-to-ROI increases in functional connectivity within the DMN following intervention (left temporal to right temporal [T(202) = 2.78, pFDR < 0.05] and left temporal to right dorsal medial prefrontal cortex [T(202) = 2.74, pFDR < 0.05]). In contrast, compared to sham, the active group demonstrated ROI-to-ROI decreases in functional connectivity within the FPCN following intervention (left dorsal prefrontal cortex to left temporal [T(202) = -2.96, pFDR < 0.05] and left dorsal prefrontal cortex to left lateral prefrontal cortex [T(202) = -2.77, pFDR < 0.05]). There were no significant interactions detected for CON regions.
Conclusions: These findings (a) demonstrate the feasibility of modulating network connectivity using tDCS and CT and (b) provide important information regarding the pattern of connectivity changes occurring at these intervention parameters in older adults. Importantly, the active stimulation group showed increases in connectivity within the DMN (a network particularly vulnerable to aging and implicated in Alzheimer's disease) but decreases in connectivity between left frontal and temporal FPCN regions. Future analyses from this trial will evaluate the association between these changes in connectivity and cognitive performance post-intervention and at a one-year timepoint.
Connecting memory and functional brain networks in older adults: a resting state fMRI study
- Jori L Waner, Hanna K Hausman, Jessica N Kraft, Cheshire Hardcastle, Nicole D Evangelista, Andrew O’Shea, Alejandro Albizu, Emanuel M Boutzoukas, Emily J Van Etten, Pradyumna K Bharadwaj, Hyun Song, Samantha G Smith, Steven T DeKosky, Georg A Hishaw, Samuel S Wu, Michael Marsiske, Ronald Cohen, Gene E Alexander, Eric C Porges, Adam J Woods
- Journal: Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press: 21 December 2023, pp. 527-528
Objective:
Nonpathological aging has been linked to decline in both verbal and visuospatial memory abilities in older adults. Disruptions in resting-state functional connectivity within well-characterized, higher-order cognitive brain networks have also been coupled with poorer memory functioning in healthy older adults and in older adults with dementia. However, there is a paucity of research on the association between higher-order functional connectivity and verbal and visuospatial memory performance in the older adult population. The current study examines the association between resting-state functional connectivity within the cingulo-opercular network (CON), frontoparietal control network (FPCN), and default mode network (DMN) and verbal and visuospatial learning and memory in a large sample of healthy older adults. We hypothesized that greater within-network CON and FPCN functional connectivity would be associated with better immediate verbal and visuospatial memory recall. Additionally, we predicted that within-network DMN functional connectivity would be associated with better delayed verbal and visuospatial memory recall. This study helps to clarify whether within-network CON, FPCN, or DMN functional connectivity is associated with verbal and visuospatial memory abilities in later life.
Participants and Methods: 330 healthy older adults between 65 and 89 years old (mean age = 71.6 ± 5.2) were recruited at the University of Florida (n = 222) and the University of Arizona (n = 108). Participants underwent resting-state fMRI and completed verbal memory (Hopkins Verbal Learning Test - Revised [HVLT-R]) and visuospatial memory (Brief Visuospatial Memory Test - Revised [BVMT-R]) measures. Immediate (total) and delayed recall scores on the HVLT-R and BVMT-R were calculated using each test manual's scoring criteria. Learning ratios on the HVLT-R and BVMT-R were quantified by dividing the number of stimuli (verbal or visuospatial) learned between the first and third trials by the number of stimuli not recalled after the first learning trial. The CONN Toolbox was used to extract average within-network connectivity values for the CON, FPCN, and DMN. Hierarchical regressions were conducted, controlling for sex, race, ethnicity, years of education, number of invalid scans, and scanner site.
Results: Greater CON connectivity was significantly associated with better HVLT-R immediate (total) recall (β = 0.16, p = 0.01), HVLT-R learning ratio (β = 0.16, p = 0.01), BVMT-R immediate (total) recall (β = 0.14, p = 0.02), and BVMT-R delayed recall performance (β = 0.15, p = 0.01). Greater FPCN connectivity was associated with better BVMT-R learning ratio (β = 0.13, p = 0.04). HVLT-R delayed recall performance was not associated with connectivity in any network, and DMN connectivity was not significantly related to any measure.
Conclusions: Connectivity within the CON demonstrated a robust relationship with different components of memory function across both verbal and visuospatial domains. In contrast, the FPCN evidenced a relationship only with visuospatial learning, and the DMN was not significantly associated with any memory measure. These data suggest that the CON may be a valuable target in longitudinal studies of age-related memory changes, as well as a possible target in future non-invasive interventions to attenuate memory decline in older adults.
Complaints about police misconduct have adverse effects for Black civilians
- Patrick W. Kraft, Benjamin J. Newman
- Journal: Political Science Research and Methods, First View
- Published online by Cambridge University Press: 30 October 2023, pp. 1-24
Existing literature examines the effectiveness of civilian oversight in reducing police misconduct. However, little-to-no quantitative research explores possible adverse consequences of this accountability mechanism. Utilizing time series analysis of administrative data on aggregate monthly civilian complaints and police behavior in the largest American city, this article offers evidence of racial inequality in police response to civilian complaints. For White civilians, complaint against the police abates subsequent police stops. For Black civilians, complaint is associated with subsequent intensification of police stops. This intensification only follows complaints against White officers, is conditional upon officer knowledge of the complaint, is confined to stops involving greater officer discretion to perform the stop, and is only observed in police precincts with large Black populations.
Prediction of estimated risk for bipolar disorder using machine learning and structural MRI features
- Pavol Mikolas, Michael Marxen, Philipp Riedel, Kyra Bröckel, Julia Martini, Fabian Huth, Christina Berndt, Christoph Vogelbacher, Andreas Jansen, Tilo Kircher, Irina Falkenberg, Martin Lambert, Vivien Kraft, Gregor Leicht, Christoph Mulert, Andreas J. Fallgatter, Thomas Ethofer, Anne Rau, Karolina Leopold, Andreas Bechdolf, Andreas Reif, Silke Matura, Felix Bermpohl, Jana Fiebig, Thomas Stamm, Christoph U. Correll, Georg Juckel, Vera Flasbeck, Philipp Ritter, Michael Bauer, Andrea Pfennig
- Journal: Psychological Medicine / Volume 54 / Issue 2 / January 2024
- Published online by Cambridge University Press: 22 May 2023, pp. 278-288
Background
Individuals with bipolar disorder commonly receive a correct diagnosis only a decade after symptom onset. Machine learning techniques may aid in early recognition and reduce the disease burden. As both individuals at risk and those with a manifest disease display structural brain markers, structural magnetic resonance imaging may provide relevant classification features.
Methods
Following a pre-registered protocol, we trained a linear support vector machine (SVM) to classify individuals according to their estimated risk for bipolar disorder using regional cortical thickness of help-seeking individuals from seven study sites (N = 276). We estimated the risk using three state-of-the-art assessment instruments (BPSS-P, BARS, EPIbipolar).
Results
For BPSS-P, the SVM achieved a fair performance of Cohen's κ of 0.235 (95% CI 0.11–0.361) and a balanced accuracy of 63.1% (95% CI 55.9–70.3) in the 10-fold cross-validation. In the leave-one-site-out cross-validation, the model performed with a Cohen's κ of 0.128 (95% CI −0.069 to 0.325) and a balanced accuracy of 56.2% (95% CI 44.6–67.8). BARS and EPIbipolar could not be predicted. In post hoc analyses, regional surface area, subcortical volumes, and hyperparameter optimization did not improve the performance.
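Cohen's κ and balanced accuracy, the two performance metrics reported above, are both computable from predicted and true labels. A minimal standard-library sketch, not the study's pipeline:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance, unlike raw accuracy."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

def cohens_kappa(y_true, y_pred):
    """Observed agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Chance agreement: product of marginal class frequencies, summed over classes.
    p_e = sum((sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
              for c in set(y_true) | set(y_pred))
    return (p_o - p_e) / (1 - p_e)
```

A κ near 0.2, as reported for BPSS-P, is agreement only modestly above chance, which is why the authors describe the performance as "fair".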
Conclusions
Individuals at risk for bipolar disorder, as assessed by BPSS-P, display brain structural alterations that can be detected using machine learning. The achieved performance is comparable to previous studies which attempted to classify patients with manifest disease and healthy controls. Unlike previous studies of bipolar risk, our multicenter design permitted a leave-one-site-out cross-validation. Whole-brain cortical thickness seems to be superior to other structural brain features.
The effect of attention bias modification on depressive symptoms in a comorbid sample: a randomized controlled trial
- Ragnhild Bø, Brage Kraft, Mads Lund Pedersen, Jutta Joormann, Rune Jonassen, Kåre Osnes, Catherine J. Harmer, Nils Inge Landrø
- Journal: Psychological Medicine / Volume 53 / Issue 13 / October 2023
- Published online by Cambridge University Press: 09 January 2023, pp. 6389-6396
Background
Studies investigating the long-term effect of attention bias modification (ABM) in clinical samples are lacking. This study investigates the 6-month follow-up effect of ABM on depressive symptoms in participants with major depressive disorder with and without comorbid disorders.
Methods
We conducted a double-blind randomized sham-controlled trial in 101 participants between 19 November 2019 and 17 August 2021. Follow-up ended 3 April 2022. Participants were allocated to the ABM or sham condition twice daily for 14 consecutive days. Primary outcomes were the total score on the Beck Depression Inventory-II (BDI-II) at 6 months, the mean Brief State Rumination Inventory (BSRI) score post-treatment, and the reduction in BSRI post-treatment. The secondary outcome was change in attentional bias (AB). The trial was preregistered in ClinicalTrials.gov (NCT04137367).
Results
A total of 118 patients aged 18–65 years were assessed for eligibility, and 101 were randomized and subjected to intention-to-treat analyses. At 6 months, ABM had no effect on depression and anxiety compared to the sham condition. While rumination decreased during the intervention, there was no effect of condition on rumination and AB. Predictor analysis did not reveal differences between participants with an ongoing major depressive episode or comorbid anxiety.
Conclusion
Compared to sham training, there was no effect of ABM on depressive symptoms at 6-month follow-up. Since the intervention failed to modify AB, it is unclear whether changes in AB are related to long-term outcomes.
Mediators of quality of life change in people with severe psychotic disorders treated in integrated care: ACCESS II study
- Romy Schröter, Martin Lambert, Anja Rohenkohl, Vivien Kraft, Friederike Rühl, Daniel Luedecke, Jürgen Gallinat, Anne Karow, Stefanie J. Schmidt
- Journal: European Psychiatry / Volume 66 / Issue 1 / 2023
- Published online by Cambridge University Press: 04 November 2022, e1
Background
Patients with severe psychotic disorders exhibit severely reduced quality of life (QoL) at all stages of the disease. Integrated care has often led to improvements in QoL. However, the specific mediators of QoL change are not yet well understood.
Methods: The ACCESS II study is a prospective, long-term study investigating the effectiveness of an integrated care program for people with severe psychotic disorders (IC-TACT) that includes Therapeutic Assertive Community Treatment within a care network of in- and outpatient services at the University Medical Center Hamburg-Eppendorf, Germany. We examined longitudinal associations between QoL and the hypothesized mediators of change (i.e., negative symptoms, depression, and anxiety) using cross-lagged panel models.
Results: The sample included 418 severely ill patients treated in IC-TACT for at least 1 year. QoL increased and symptom severity decreased significantly from baseline to 6-month follow-up (p-values ≤ 0.001), and both remained stable until 12-month follow-up. QoL and symptom severity showed significant autoregressive effects, and significant cross-lagged effects ran from QoL at baseline to negative symptoms at 6 months (β = −0.20, p < 0.001) and on to QoL at 12 months (β = −0.19, p < 0.01), yielding a significant indirect, mediated effect. Additionally, negative symptoms at 6 months had a significant effect on the severity of depression at 12 months (β = 0.13, p < 0.05).
Conclusions: Negative symptoms appear to represent an important mechanism of change in IC-TACT, indicating that improvements in QoL could potentially be achieved through interventions optimized to target negative symptoms. This may, in turn, lead to a reduction in the severity of depression after 12 months.
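The cross-lagged panel logic described above can be sketched numerically. The following is a minimal illustration on simulated standardized scores with invented effect sizes (not the ACCESS II data); it estimates one autoregressive and one cross-lagged path by ordinary least squares, the core relationships a full structural model fits simultaneously.

```python
# Hedged sketch of a single cross-lagged path, estimated by OLS on
# simulated two-wave data. Variable names and effect sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Simulated standardized scores: QoL and negative symptoms (NS) at baseline
# (t0) and follow-up (t1), with an assumed cross-lagged effect of QoL on NS.
qol_t0 = rng.standard_normal(n)
ns_t0 = -0.3 * qol_t0 + rng.standard_normal(n)
ns_t1 = 0.5 * ns_t0 - 0.2 * qol_t0 + 0.5 * rng.standard_normal(n)

def ols(y, *xs):
    """Return OLS coefficients (intercept first) of y on the predictors."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# NS at t1 regressed on NS at t0 (autoregressive path) and QoL at t0
# (cross-lagged path); beta[2] estimates the cross-lagged effect.
beta = ols(ns_t1, ns_t0, qol_t0)
print(f"autoregressive NS effect: {beta[1]:.2f}")
print(f"cross-lagged QoL -> NS effect: {beta[2]:.2f}")
```

A full cross-lagged panel model would estimate all such paths (plus the reverse direction, NS → QoL) jointly in a structural equation framework.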
The relationship between the recognition of specific basic emotions and negative symptom domains in patients with schizophrenia spectrum disorders
- M. Zierhut, K. Boege, N. Bergmann, I. Hahne, A. Braun, J. Kraft, T.M.T. Ta, S. Ripke, M. Bajbouj, E. Hahn
-
- Journal:
- European Psychiatry / Volume 65 / Issue S1 / June 2022
- Published online by Cambridge University Press:
- 01 September 2022, pp. S107-S108
Introduction
Current research suggests that emotion recognition is significantly impaired in individuals with schizophrenia spectrum disorders (SSD), with negative symptoms theorized to play a crucial role. Emotion recognition deficits are assumed to predict the transition from clinical high risk to schizophrenia. So far, little attention has been given to the subdomains of negative symptoms and to the recognition of individual basic emotions.
Objectives: Our study aimed to explore the relationship between the recognition of the basic emotions and each negative symptom domain.
Methods: 66 patients with an SSD diagnosis were recruited at the Charité – Universitätsmedizin Berlin. Correlational and regression analyses controlling for covariates (age, education, sex) were conducted between recognition of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise), assessed with the Emotion Recognition Task of the Cambridge Neuropsychological Test Automated Battery (CANTAB), and the seven subdomains of negative symptoms of the Positive and Negative Syndrome Scale (PANSS).
Results: Analyses revealed significant negative correlations of blunted affect with the recognition of happiness, fear, and disgust. Difficulties in abstract thinking also correlated positively with the recognition of fear. Additionally, we found significant positive correlations of stereotyped thinking and difficulties in abstract thinking with response latency in emotion recognition.
Conclusions: Individuals with SSD showed specific impairments in recognizing basic emotions that varied with negative symptom domains. A longitudinal design would be needed in future research to support causal statements. Moreover, emotion recognition should be considered in early detection and individualized treatment.
Disclosure: No significant relationships.
Forest terrains influence walking kinematics among indigenous Tsimane of the Bolivian Amazon
- Nicholas B. Holowka, Thomas S. Kraft, Ian J. Wallace, Michael Gurven, Vivek V. Venkataraman
-
- Journal:
- Evolutionary Human Sciences / Volume 4 / 2022
- Published online by Cambridge University Press:
- 22 April 2022, e19
Laboratory-based studies indicate that a major evolutionary advantage of bipedalism is enabling humans to walk with relatively low energy expenditure. However, such studies typically record subjects walking on even surfaces or treadmills that do not represent the irregular terrains our species encounters in natural environments. To date, few studies have quantified walking kinematics on natural terrains. Here we used high-speed video to record marker-based kinematics of 21 individuals from a Tsimane forager–horticulturalist community in the Bolivian Amazon walking on three different terrains: a dirt field, a forest trail and an unbroken forest transect. Compared with the field, in the unbroken forest participants contacted the ground with more protracted legs and flatter foot postures, had more inclined trunks, more flexed hips and knees, and raised their feet higher during leg swing. In contrast, kinematics were generally similar between trail and field walking. These results provide preliminary support for the idea that irregular natural surfaces like those in forests cause humans to alter their walking kinematics, such that travel in these environments could be more energetically expensive than would be assumed from laboratory-based data. These findings have important implications for the evolutionary energetics of human foraging in environments with challenging terrains.
Cultural variation in running techniques among non-industrial societies
- Ian J. Wallace, Thomas S. Kraft, Vivek V. Venkataraman, Helen E. Davis, Nicholas B. Holowka, Alexandra R. Harris, Daniel E. Lieberman, Michael Gurven
-
- Journal:
- Evolutionary Human Sciences / Volume 4 / 2022
- Published online by Cambridge University Press:
- 11 April 2022, e14
Research among non-industrial societies suggests that body kinematics adopted during running vary between groups according to the cultural importance of running. Among groups in which running is common and an important part of cultural identity, runners tend to adopt what exercise scientists and coaches consider to be good technique for avoiding injury and maximising performance. In contrast, among groups in which running is not particularly culturally important, people tend to adopt suboptimal technique. This paper begins by describing key elements of good running technique, including landing with a forefoot or midfoot strike pattern and with the leg oriented roughly vertically. Next, we review evidence from non-industrial societies that cultural attitudes about running are associated with variation in running techniques. Then, we present new data from Tsimane forager–horticulturalists in Bolivia. Our findings suggest that running is neither a common activity among the Tsimane nor considered an important part of their cultural identity. We also demonstrate that when Tsimane do run, they tend to use suboptimal technique, specifically landing with a rearfoot strike pattern and with the leg protracted ahead of the knee (called overstriding). Finally, we discuss processes by which culture might influence variation in running techniques among non-industrial societies, including self-optimisation and social learning.
Examining access to care in clinical genomic research and medicine: Experiences from the CSER Consortium
- Amanda M. Gutierrez, Jill O. Robinson, Simon M. Outram, Hadley S. Smith, Stephanie A. Kraft, Katherine E. Donohue, Barbara B. Biesecker, Kyle B. Brothers, Flavia Chen, Benyam Hailu, Lucia A. Hindorff, Hannah Hoban, Rebecca L. Hsu, Sara J. Knight, Barbara A. Koenig, Katie L. Lewis, Kristen Hassmiller Lich, Julianne M. O’Daniel, Sonia Okuyama, Gail E. Tomlinson, Margaret Waltz, Benjamin S. Wilfond, Sara L. Ackerman, Mary A. Majumder
-
- Journal:
- Journal of Clinical and Translational Science / Volume 5 / Issue 1 / 2021
- Published online by Cambridge University Press:
- 14 September 2021, e193
Introduction:
Ensuring equitable access to health care is a widely agreed-upon goal in medicine, yet access to care is a multidimensional concept that is difficult to measure. Although frameworks exist to evaluate access to care generally, the concept of “access to genomic medicine” is largely unexplored, and a clear framework for studying and addressing its major dimensions is lacking.
Methods: Comprising seven clinical genomic research projects, the Clinical Sequencing Evidence-Generating Research (CSER) consortium presented opportunities to examine access to genomic medicine across diverse contexts. CSER emphasized engaging historically underrepresented and/or underserved populations. We used descriptive analysis of CSER participant survey data and qualitative case studies to explore anticipated and encountered access barriers and interventions to address them.
Results: CSER’s enrolled population was largely lower income and racially and ethnically diverse, and included many Spanish-preferring individuals. In surveys, fewer than a fifth (18.7%) of participants reported experiencing barriers to care. However, CSER project case studies revealed a more nuanced picture that highlighted the blurred boundary between access to genomic research and clinical care. Drawing on insights from CSER, we build on an existing framework to characterize the concept and dimensions of access to genomic medicine, along with associated measures and improvement strategies.
Conclusions: Our findings support adopting a broad conceptualization of access to care encompassing multiple dimensions, using mixed methods to study access issues, and investing in innovative improvement strategies. This conceptualization may inform the clinical translation of other cutting-edge technologies and contribute to the promotion of equitable, effective, and efficient access to genomic medicine.
Healthcare worker mental models of patient care tasks in the context of infection prevention and control
- Joel M. Mumma, Jessica R. Howard-Anderson, Jill S. Morgan, Kevin Schink, Marisa J. Wheatley, Colleen S. Kraft, Morgan A. Lane, Noah H. Kaufman, Oluwateniola Ayeni, Erik A. Brownsword, Jesse T. Jacob
-
- Journal:
- Infection Control & Hospital Epidemiology / Volume 43 / Issue 9 / September 2022
- Published online by Cambridge University Press:
- 10 September 2021, pp. 1123-1128
- Print publication:
- September 2022
Objective:
Understanding the cognitive determinants of healthcare worker (HCW) behavior is important for improving the use of infection prevention and control (IPC) practices. Given a patient requiring only standard precautions, we examined the dimensions along which different populations of HCWs cognitively organize patient care tasks (ie, their mental models).
Design: HCWs read a description of a patient and then rated the similarities of 25 patient care tasks from an infection prevention perspective. Using multidimensional scaling, we identified the dimensions (ie, characteristics of tasks) underlying these ratings and the salience of each dimension to HCWs.
Setting: Adult inpatient hospitals across an academic hospital network.
Participants: In total, 40 HCWs, comprising infection preventionists and nurses from intensive care units, emergency departments, and medical-surgical floors, rated the similarity of tasks. To identify the meaning of each dimension, another 6 nurses rated each task in terms of specific task characteristics.
Results: Each HCW population perceived patient care tasks to vary along 3 common dimensions. Most salient was the perceived magnitude of infection risk to the patient in a task, followed by the perceived dirtiness and risk of HCW exposure to body fluids, and lastly the relative importance of a task for preventing versus controlling an infection in a patient.
Conclusions: For a patient requiring only standard precautions, different populations of HCWs have similar mental models of how various patient care tasks relate to IPC. Techniques for eliciting mental models open new avenues for understanding and ultimately modifying the cognitive determinants of IPC behaviors.
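The multidimensional scaling step described in the Design can be illustrated with a small sketch: given pairwise dissimilarity ratings, MDS recovers a low-dimensional "map" whose axes can then be interpreted as task characteristics. The four tasks and the dissimilarity matrix below are invented for illustration; the study used 25 tasks and HCW ratings.

```python
# Hedged sketch: embedding pairwise task-dissimilarity ratings into 2-D
# with multidimensional scaling. The tasks and ratings below are invented.
import numpy as np
from sklearn.manifold import MDS

tasks = ["wound care", "IV insertion", "bathing", "vital signs"]
# Symmetric dissimilarity ratings (0 = identical, 1 = maximally different).
D = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # one 2-D point per task
for task, (x, y) in zip(tasks, coords):
    print(f"{task:12s} ({x:+.2f}, {y:+.2f})")
```

In the resulting configuration, tasks rated as similar end up close together; interpreting what each axis of the map means is the separate step the study addressed with the additional nurse ratings.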
Introducing a psychiatric genetic cohort of schizophrenia patients and controls from Vietnam
- A. Braun, T.V. Nguyen, S. Ripke, P.V. Nguyen, J. Kraft, H.T. Nguyen, T.C. Le, G. Panagiotaropoulou, I.M. Hahne, K. Böge, E. Hahn, T.M.T. Ta
-
- Journal:
- European Psychiatry / Volume 64 / Issue S1 / April 2021
- Published online by Cambridge University Press:
- 13 August 2021, pp. S802-S803
Introduction
Genome-wide association studies (GWAS) have successfully revealed genetic risk variants for schizophrenia (SCZ). However, the vast majority of GWAS comprise predominantly European samples. As a result, the derived polygenic risk scores (PRS) show decreased predictive power when applied to non-European populations.
Objectives: A long-term scientific cooperation between the Charité Universitätsmedizin Berlin and the Hanoi Medical University aims to address this limitation by recruiting a large genetic cohort of comprehensively phenotyped schizophrenia patients and controls in Vietnam.
Methods: A pilot study was conducted at the Department of Psychiatry of the Medical University Hanoi in 2017. Data collection encompassed (i) genome-wide SNP genotyping of 200 schizophrenia patients and 200 control subjects, (ii) structured interviews to assess symptom severity (PANSS), and (iii) clinical parameters (e.g., duration of illness, medication) and demography.
Results: SCZ-PRS for the pilot sample (N = 400) were generated using different training data sets: (i) European, (ii) East Asian, and (iii) mixed GWAS summary statistics from the Psychiatric Genomics Consortium’s latest discovery sample. The most variance was explained using the mixed discovery sample (R²liability = 0.053, p = 3.11×10⁻⁸, Pd < 0.5), followed by PRS based on the East Asian summary statistics (R²liability = 0.0503, p = 6.78×10⁻⁸, Pd < 1) and the European sample (R²liability = 0.0363, p = 4.26×10⁻⁶, Pd < 0.01).
Conclusions: With this pilot project, we established an efficient recruitment, genotyping, and data analysis pipeline. Our results corroborate previous findings that the transferability of PRS across populations depends on the ancestral composition of the discovery dataset. We therefore aim to expand data collection in the future to improve risk prediction across diverse populations.
Disclosure: No significant relationships.
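The PRS computation underlying results like these can be sketched in a few lines: a PRS is a weighted sum of risk-allele counts, with GWAS effect sizes as weights, restricted to SNPs passing a discovery p-value threshold. All genotypes, effect sizes, and p-values below are simulated; the thresholding mirrors the abstract's Pd cutoffs in spirit only.

```python
# Hedged sketch of the core polygenic risk score (PRS) computation.
# Genotypes and GWAS statistics are simulated, not the Vietnamese cohort data.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_snps = 400, 1000

# Allele counts (0, 1, or 2 copies of the risk allele per SNP), plus
# log-odds effect sizes and p-values from a hypothetical discovery GWAS.
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))
effect_sizes = rng.normal(0.0, 0.05, size=n_snps)
gwas_p = rng.uniform(size=n_snps)

def prs(genotypes, effect_sizes, gwas_p, p_threshold=0.5):
    """Score each subject over SNPs passing the discovery p-value threshold."""
    keep = gwas_p < p_threshold  # cf. the abstract's Pd < 0.5 etc.
    return genotypes[:, keep] @ effect_sizes[keep]

scores = prs(genotypes, effect_sizes, gwas_p)
print(scores.shape)  # one score per subject
```

In practice this is preceded by quality control, LD clumping, and ancestry matching between discovery and target samples, which is exactly where the cross-population transferability problem discussed above arises.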
Hypothesis testing with error correction models
- Patrick W. Kraft, Ellen M. Key, Matthew J. Lebo
-
- Journal:
- Political Science Research and Methods / Volume 10 / Issue 4 / October 2022
- Published online by Cambridge University Press:
- 21 July 2021, pp. 870-878
Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: in a bivariate GECM, the data must be integrated in order to rely on the error correction coefficient, $\alpha_1^\ast$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha_1^\ast$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x, but the correct alternative hypothesis is that y is cointegrated with at least one (though not necessarily more than one) of the x's. A significant $\alpha_1^\ast$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
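A minimal simulation of the bivariate case can make the setup concrete. This sketch assumes an invented data-generating process, a random-walk x and a y that error-corrects toward it at rate 0.3, and estimates the GECM by OLS; it is not the authors' simulation design.

```python
# Hedged sketch: estimating a bivariate general error correction model
# (GECM) by OLS on simulated cointegrated I(1) series.
import numpy as np

rng = np.random.default_rng(1)
T = 500

# x is a random walk (I(1)); y error-corrects toward x at rate 0.3.
x = np.cumsum(rng.standard_normal(T))
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t - 1] - 0.3 * (y[t - 1] - x[t - 1]) + rng.standard_normal()

# GECM: dy_t = a0 + a1* y_{t-1} + b0 dx_t + b1 x_{t-1} + e_t
dy = np.diff(y)
dx = np.diff(x)
X = np.column_stack([np.ones(T - 1), y[:-1], dx, x[:-1]])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
# beta[1] should recover roughly -0.3, the simulated error correction rate.
print(f"error correction coefficient a1*: {beta[1]:.2f}")
```

The paper's point concerns what happens when further I(1) regressors are added to the right-hand side: a significant estimate of $\alpha_1^\ast$ then no longer identifies which x, if any, is cointegrated with y.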
Clinical characteristics of hospitalized patients with false-negative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) test results
- Erica L. MacKenzie, Dariusz A. Hareza, Maggie W. Collison, Anna E. Czapar, Antigone K. Kraft, Bennett J. Waxse, Eleanor E. Friedman, Jessica P. Ridgway
-
- Journal:
- Infection Control & Hospital Epidemiology / Volume 43 / Issue 4 / April 2022
- Published online by Cambridge University Press:
- 19 April 2021, pp. 467-473
- Print publication:
- April 2022
Objective:
To determine clinical characteristics associated with false-negative severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) test results to help inform coronavirus disease 2019 (COVID-19) testing practices in the inpatient setting.
Design: A retrospective observational cohort study.
Setting: Tertiary-care facility.
Patients: All patients aged 2 years and older tested for SARS-CoV-2 between March 14, 2020, and April 30, 2020, who had at least 2 SARS-CoV-2 reverse-transcriptase polymerase chain reaction tests within 7 days.
Methods: The primary outcome measure was a false-negative testing episode, which we defined as an initial negative test followed by a positive test within the subsequent 7 days. Data collected included symptoms, demographics, comorbidities, vital signs, laboratory values, and imaging studies. Logistic regression was used to model associations between clinical variables and false-negative SARS-CoV-2 test results.
Results: Of the 1,009 SARS-CoV-2 test results included in the analysis, 4.0% were false-negative results. In multivariable regression analysis, compared with true-negative test results, false-negative test results were associated with anosmia or ageusia (adjusted odds ratio [aOR], 8.4; 95% confidence interval [CI], 1.4–50.5; P = .02), a known COVID-19–positive contact (aOR, 10.5; 95% CI, 4.3–25.4; P < .0001), and an elevated lactate dehydrogenase level (aOR, 3.3; 95% CI, 1.2–9.3; P = .03). Demographics, symptom duration, other laboratory values, and abnormal chest imaging were not significantly associated with false-negative test results in our multivariable analysis.
Conclusions: Clinical features can help predict which patients are more likely to have false-negative SARS-CoV-2 test results.
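The modeling step described in the Methods can be sketched as follows. The data and effect sizes below are simulated; the predictor names merely echo the reported associations, and the fitted odds ratios will not match the published aORs.

```python
# Hedged sketch: logistic regression of a false-negative outcome on binary
# clinical predictors, with coefficients reported as odds ratios.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Binary predictors: anosmia/ageusia, known COVID-positive contact,
# elevated lactate dehydrogenase (LDH).
X = rng.integers(0, 2, size=(n, 3))
# Simulated outcome: each predictor raises the log-odds of a false negative.
logit = -3.0 + X @ np.array([2.1, 2.3, 1.2])
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # exponentiated log-odds coefficients
names = ["anosmia/ageusia", "positive contact", "elevated LDH"]
for name, or_ in zip(names, odds_ratios):
    print(f"{name}: OR ~ {or_:.1f}")
```

The published analysis additionally adjusted for demographics, symptom duration, and other covariates, which is what makes the reported estimates adjusted odds ratios.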
Serosurvey on healthcare personnel caring for patients with Ebola virus disease and Lassa virus in the United States
- Colleen S. Kraft, Aneesh K. Mehta, Jay B. Varkey, G. Marshall Lyon III, Sharon Vanairsdale, Sonia Bell, Eileen M. Burd, Mary Elizabeth Sexton, Leslie Anne Cassidy, Patricia Olinger, Kalpana Rengarajan, Vanessa N. Raabe, Emily Davis, Scott Henderson, Paula DesRoches, Yongxian Xu, Mark J. Mulligan, Bruce S. Ribner
-
- Journal:
- Infection Control & Hospital Epidemiology / Volume 41 / Issue 4 / April 2020
- Published online by Cambridge University Press:
- 20 January 2020, pp. 385-390
- Print publication:
- April 2020
Objective:
Healthcare personnel (HCP) were recruited to provide serum samples, which were tested for antibodies against Ebola or Lassa virus to evaluate for asymptomatic seroconversion.
Setting: From 2014 to 2016, 4 patients with Ebola virus disease (EVD) and 1 patient with Lassa fever (LF) were treated in the Serious Communicable Diseases Unit (SCDU) at Emory University Hospital. Strict infection control and clinical biosafety practices were implemented to prevent nosocomial transmission of EVD or LF to HCP.
Participants: All personnel who entered the SCDU, who were required to measure their temperatures and complete a symptom questionnaire twice daily, were eligible.
Results: No employee developed symptomatic EVD or LF. EVD and LF antibody studies were performed on serum samples from 42 HCP. The 6 participants who had received investigational vaccination with a chimpanzee adenovirus type 3 vectored Ebola glycoprotein vaccine had high antibody titers to Ebola glycoprotein, but none had a response to Ebola nucleoprotein or VP40, or to LF antigens.
Conclusions: Patients infected with filoviruses and arenaviruses can be managed successfully without causing occupation-related symptomatic or asymptomatic infections. Meticulous attention to infection control and clinical biosafety practices by highly motivated, trained staff is critical to the safe care of patients with an infection from a special pathogen.
Personalized risk prediction of postoperative cognitive impairment – rationale for the EU-funded BioCog project
- G. Winterer, A. Fournier, O. Bender, D. Boraschi, F. Borchers, T.B. Dschietzig, I. Feinkohl, P. Fletcher, J. Gallinat, D. Hadzidiakos, J.D. Haynes, F. Heppner, S. Hetzer, J. Hendrikse, B. Ittermann, I.M.J. Kant, A. Kraft, A. Krannich, R. Krause, S. Kühn, G. Lachmann, S.J.T. van Montfort, A. Müller, P. Nürnberg, K. Ofosu, M. Pietsch, T. Pischon, J. Preller, E. Renzulli, K. Scheurer, R. Schneider, A.J.C. Slooter, C. Spies, E. Stamatakis, H.D. Volk, S. Weber, A. Wolf, F. Yürek, N. Zacharias, BioCog consortium
-
- Journal:
- European Psychiatry / Volume 50 / April 2018
- Published online by Cambridge University Press:
- 01 January 2020, pp. 34-39
Postoperative cognitive impairment is among the most common medical complications associated with surgical interventions, particularly in elderly patients. In our aging society, there is an urgent medical need for preoperative individual risk prediction to allow more accurate cost–benefit decisions prior to elective surgery. So far, risk prediction has been based mainly on clinical parameters, which give only a rough estimate of the individual risk. At present, no molecular or neuroimaging biomarkers are available to improve risk prediction, and little is known about the etiology and pathophysiology of this clinical condition. In this short review, we summarize the current state of knowledge and briefly present the recently started BioCog project (Biomarker Development for Postoperative Cognitive Impairment in the Elderly), which is funded by the European Union. The goal of this research and development (R&D) project, which involves academic and industry partners throughout Europe, is to deliver a multivariate algorithm based on clinical assessments as well as molecular and neuroimaging biomarkers to overcome the currently unsatisfactory situation.