Multicenter clinical trials are essential for evaluating interventions but often face significant challenges in study design, site coordination, participant recruitment, and regulatory compliance. To address these issues, the National Institutes of Health’s National Center for Advancing Translational Sciences established the Trial Innovation Network (TIN). The TIN offers a scientific consultation process that gives investigators access to clinical trial and disease experts who provide input and recommendations throughout the trial’s duration, at no cost. This approach aims to improve trial design, accelerate implementation, foster interdisciplinary teamwork, and spur innovations that enhance multicenter trial quality and efficiency. The TIN leverages resources of the Clinical and Translational Science Awards (CTSA) program, complementing local capabilities at the investigator’s institution. The Initial Consultation process focuses on the study’s scientific premise, design, site development, recruitment and retention strategies, funding feasibility, and other support areas. As of June 1, 2024, the TIN has provided 431 Initial Consultations to increase efficiency and accelerate trial implementation by delivering customized support and tailored recommendations. Across a range of clinical trials, the TIN has developed standardized, streamlined, and adaptable processes. We describe these processes, provide operational metrics, and include a set of lessons learned for consideration by other trial support and innovation networks.
To examine the association of posttraumatic headache (PTH) type with postconcussive symptoms (PCS), pain intensity, and fluid cognitive function across recovery after pediatric concussion.
Methods:
This prospective, longitudinal study recruited children (aged 8–16.99 years) within 24 hours of sustaining a concussion or mild orthopedic injury (OI) from two pediatric hospital emergency departments. Based on parent-proxy ratings of pre- and postinjury headache, children were classified as concussion with no PTH (n = 18), new PTH (n = 43), worse PTH (n = 58), or non-worsening chronic PTH (n = 19), and children with OI with no PTH (n = 58). Children and parents rated PCS and children rated pain intensity weekly up to 6 months. Children completed computerized testing of fluid cognition 10 days, 3 months, and 6 months postinjury. Mixed effects models compared groups across time on PCS, pain intensity, and cognition, controlling for preinjury scores and covariates.
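As an illustration only, a mixed-effects specification of this kind might be sketched in Python with statsmodels as below; the file name and column names are hypothetical placeholders, and the study's actual models may use different random-effects structures or covariates.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format extract: one row per child per weekly rating.
# Column names (child_id, pth_group, weeks_postinjury, pcs_total,
# preinjury_pcs, age, sex) are placeholders, not the study's variables.
df = pd.read_csv("pcs_weekly_ratings.csv")

# Random intercept per child; the group-by-time interaction captures how
# PTH-group differences in PCS change across recovery, adjusting for
# preinjury ratings and covariates.
model = smf.mixedlm(
    "pcs_total ~ C(pth_group) * weeks_postinjury + preinjury_pcs + age + C(sex)",
    data=df,
    groups=df["child_id"],
)
print(model.fit().summary())
```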
Results:
Group differences in PCS decreased over time. Cognitive and somatic PCS were higher in new, chronic, and worse PTH relative to no PTH (up to 8 weeks postinjury; d = 0.34 to 0.87 when significant) and OI (up to 5 weeks postinjury; d = 0.30 to 1.28 when significant). Pain intensity did not differ by group but declined with time postinjury. Fluid cognition was lower across time in chronic PTH versus no PTH (d = −0.76) and OI (d = −0.61) and in new PTH versus no PTH (d = −0.51).
Conclusions:
Onset of PTH was associated with worse PCS up to 8 weeks after pediatric concussion. Chronic PTH and new PTH were associated with moderately poorer fluid cognitive functioning up to 6 months postinjury. Pain declined over time regardless of PTH type.
Interviews with 22 home-based primary care (HBPC) clinicians revealed that infectious disease physicians and clinical pharmacists facilitate infection management and antibiotic selection, respectively, and that local initiatives within programs support antibiotic prescribing decisions. Interventions that facilitate specialist engagement and tailored approaches that address the unique challenges of HBPC are needed.
To characterise the association between risk of poor glycaemic control and self-reported and area-level food insecurity among adult patients with type 2 diabetes.
Design:
We performed a retrospective, observational analysis of cross-sectional data routinely collected within a health system. Logistic regressions estimated the association between glycaemic control and the dual effect of self-reported and area-level measures of food insecurity.
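As a rough sketch of the dual-exposure logistic model described above (all file, variable, and covariate names are placeholders rather than the health system's actual fields), one could write:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-sectional extract: poor_glycemic_control (0/1),
# self_reported_food_need (0/1), area_food_insecure (0/1), plus covariates.
df = pd.read_csv("t2dm_hrsn_cohort.csv")

# Dual specification: self-reported need crossed with area-level food
# insecurity, with no reported need in a food-secure area as reference.
fit = smf.logit(
    "poor_glycemic_control ~ C(self_reported_food_need) * C(area_food_insecure)"
    " + age + C(sex) + C(insurance_type)",
    data=df,
).fit()

print(fit.summary())
print(np.exp(fit.params))  # adjusted odds ratios
```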
Setting:
The health system included a network of ambulatory primary and speciality care sites and hospitals in Bronx County, NY.
Participants:
Patients diagnosed with type 2 diabetes who completed a health-related social need (HRSN) assessment between April 2018 and December 2019.
Results:
5500 patients with type 2 diabetes were assessed for HRSN with 7·1 % reporting an unmet food need. Patients with self-reported food needs demonstrated higher odds of having poor glycaemic control compared with those without food needs (adjusted OR (aOR): 1·59, 95 % CI: 1·26, 2·00). However, there was no conclusive evidence that area-level food insecurity alone was a significant predictor of glycaemic control (aOR: 1·15, 95 % CI: 0·96, 1·39). Patients with self-reported food needs residing in food-secure (aOR: 1·83, 95 % CI: 1·22, 2·74) and food-insecure (aOR: 1·72, 95 % CI: 1·25, 2·37) areas showed higher odds of poor glycaemic control than those without self-reported food needs residing in food-secure areas.
Conclusions:
These findings highlight the importance of utilising patient- and area-level social needs data to identify individuals at increased risk of adverse health outcomes for targeted interventions.
The figure of Anthony Comstock may seem like an odd historical relic: a repressed, puritanical, anti-sex reformer from a bygone past. And yet, because his namesake act has been revived as a potential strategy for limiting access to reproductive healthcare, Comstock is no joke. Today, some Americans see the Comstock Act, passed by Congress in 1873, as a pathway to banning abortion and other reproductive care, effectively jettisoning any need for new Supreme Court abortion rulings or congressional legislation. As scholars of the Gilded Age and Progressive Era, we are uniquely situated to intervene in this dialogue and ensure that contemporary conversations are grounded in historical context. We present this forum not as an exhaustive account of the Comstock Act and its architect, but as an opportunity to highlight the context in which this law, which holds so much potential relevance for our present, was created, enacted, enforced, and challenged. We hope this forum will stimulate further scholarly and public conversations around the nation’s long history of regulating reproductive rights and how that history became entangled with other social anxieties.
It is best to think of Anthony Comstock’s campaign against vice as a response to Reconstruction that afflicted the nation long after that period was over. Comstock’s rise in the 1870s was not organic; it was backed by wealthy patrons engaged in intense political fighting over issues such as racial equality, taxation, and democracy. And although Comstock began by arresting vendors of so-called obscene goods, he soon expanded his portfolio, pursuing folks of every race and gender engaged in erotic, profane, or blasphemous correspondence. Interfering in personal conversations proved controversial, resulting in attempts by courts and postmasters to restrain Comstock’s authority, but he, nevertheless, prosecuted countless letter writers. The law that bore his name resulted in federal involvement in private correspondence well into the twentieth century.
Challenges to communication between families and care providers of paediatric patients in intensive care units (ICU) include variability of communication preferences, mismatched goals of care, and difficulties carrying forward family preferences from provider to provider. Our objectives were to develop and test an assessment tool that queries parents of children requiring cardiac intensive care about their communication preferences and to determine if this tool facilitates patient-centred care and improves families’ ICU experience.
Design:
In this quality improvement initiative, a novel tool was developed, the Parental Communication Assessment (PCA), which asked parents with children hospitalised in the cardiac ICU about their communication preferences. Participants were prospectively randomised to the intervention group, which received the PCA, or to standard care. All participants completed a follow-up survey evaluating satisfaction with communication.
Main Results:
One hundred thirteen participants enrolled and 56 were randomised to the intervention group. Participants who received the PCA preferred detail-oriented communication over big-picture communication. Most parents understood the daily discussions on rounds (64%) and felt comfortable expressing concerns (68%). Eighty-six percent reported the PCA was worthwhile. Parents were generally satisfied with communication. However, a substantial proportion felt unprepared for difficult decisions or setbacks, felt inadequately included or supported in decision-making, and felt they lacked control over their child’s care. There were no significant differences in communication satisfaction between the intervention and control groups.
Conclusions:
Parents of children hospitalised in the paediatric cardiac ICU demonstrated diverse communication preferences. Most participants were satisfied with communication overall, but individualising communication with patients’ families according to their preferences may improve their experience.
Cognitive training is a non-pharmacological intervention aimed at improving cognitive function across a single or multiple domains. Although the underlying mechanisms of cognitive training and transfer effects are not well-characterized, cognitive training has been thought to facilitate neural plasticity to enhance cognitive performance. Indeed, the Scaffolding Theory of Aging and Cognition (STAC) proposes that cognitive training may enhance the ability to engage in compensatory scaffolding to meet task demands and maintain cognitive performance. We therefore evaluated the effects of cognitive training on working memory performance in older adults without dementia. This study will help begin to elucidate non-pharmacological intervention effects on compensatory scaffolding in older adults.
Participants and Methods:
48 participants were recruited for a Phase III randomized clinical trial (Augmenting Cognitive Training in Older Adults [ACT]; NIH R01AG054077) conducted at the University of Florida and University of Arizona. Participants across sites were randomly assigned to complete cognitive training (n=25) or an education training control condition (n=23). Cognitive training and the education training control condition were each completed during 60 sessions over 12 weeks for 40 hours total. The education training control condition involved viewing educational videos produced by the National Geographic Channel. Cognitive training was completed using the Posit Science Brain HQ training program, which included 8 cognitive training paradigms targeting attention/processing speed and working memory. All participants also completed demographic questionnaires, cognitive testing, and an fMRI 2-back task at baseline and at 12 weeks following cognitive training.
Results:
Repeated measures analysis of covariance (ANCOVA), adjusted for training adherence, transcranial direct current stimulation (tDCS) condition, age, sex, years of education, and Wechsler Test of Adult Reading (WTAR) raw score, revealed a significant 2-back by training group interaction (F[1,40]=6.201, p=.017, η²=.134). Examination of simple main effects revealed baseline differences in 2-back performance (F[1,40]=.568, p=.455, η²=.014). After controlling for baseline performance, training group differences in 2-back performance were no longer statistically significant (F[1,40]=1.382, p=.247, η²=.034).
Conclusions:
After adjusting for baseline performance differences, there were no significant training group differences in 2-back performance, suggesting that the randomization was not sufficient to ensure adequate distribution of participants across groups. Results may indicate that cognitive training alone is not sufficient for significant improvement in working memory performance on a near transfer task. Additional improvement may occur with the next phase of this clinical trial, such that tDCS augments the effects of cognitive training and results in enhanced compensatory scaffolding even within this high-performing cohort. Limitations of the study include a highly educated sample with higher literacy levels and a small sample size that was not powered for analysis of transfer effects. Future analyses will include evaluation of the combined intervention effects of cognitive training and tDCS on n-back performance in a larger sample of older adults without dementia.
Cognitive training has shown promise for improving cognition in older adults. Aging involves a variety of neuroanatomical changes that may affect response to cognitive training. White matter hyperintensities (WMH) are one common age-related brain change, visible on T2-weighted and Fluid Attenuated Inversion Recovery (FLAIR) MRI. WMH are associated with older age, are suggestive of cerebral small vessel disease, and reflect decreased white matter integrity. Higher WMH load is associated with a reduced threshold for clinical expression of cognitive impairment and dementia. The effects of WMH on response to cognitive training interventions are relatively unknown. The current study assessed (a) proximal cognitive training performance following a 3-month randomized control trial and (b) the contribution of baseline whole-brain WMH load, defined as total lesion volume (TLV), on pre-post proximal training change.
Participants and Methods:
Sixty-two healthy older adults ages 65-84 completed either adaptive cognitive training (CT; n=31) or educational training control (ET; n=31) interventions. Participants assigned to CT completed 20 hours of attention/processing speed training and 20 hours of working memory training delivered through commercially-available Posit Science BrainHQ. ET participants completed 40 hours of educational videos. All participants also underwent sham or active transcranial direct current stimulation (tDCS) as an adjunctive intervention, although not a variable of interest in the current study. Multimodal MRI scans were acquired during the baseline visit. T1- and T2-weighted FLAIR images were processed using the Lesion Segmentation Tool (LST) for SPM12. The Lesion Prediction Algorithm of LST automatically segmented brain tissue and calculated lesion maps. A lesion threshold of 0.30 was applied to calculate TLV. A log transformation was applied to TLV to normalize the distribution of WMH. Repeated-measures analysis of covariance (RM-ANCOVA) assessed pre/post change in proximal composite (Total Training Composite) and sub-composite (Processing Speed Training Composite, Working Memory Training Composite) measures in the CT group compared to their ET counterparts, controlling for age, sex, years of education and tDCS group. Linear regression assessed the effect of TLV on post-intervention proximal composite and sub-composite, controlling for baseline performance, intervention assignment, age, sex, years of education, multisite scanner differences, estimated total intracranial volume, and binarized cardiovascular disease risk.
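A minimal sketch of the TLV calculation described above, assuming the LST lesion-probability map has been exported as a NIfTI image (the file name is hypothetical, and this is not the authors' pipeline code):

```python
import numpy as np
import nibabel as nib

# Hypothetical file name; LST writes its LPA output as a NIfTI
# lesion-probability image that can be loaded the same way.
prob_img = nib.load("lesion_probability_subject01.nii.gz")
prob = prob_img.get_fdata()

# Binarize at the 0.30 lesion-probability threshold described above.
lesion_mask = prob >= 0.30

# Total lesion volume (TLV) in mL: voxel count x voxel volume (mm^3) / 1000.
voxel_vol_mm3 = np.prod(prob_img.header.get_zooms()[:3])
tlv_ml = lesion_mask.sum() * voxel_vol_mm3 / 1000.0

# Log-transform TLV to normalize the WMH distribution, as in the abstract.
log_tlv = np.log(tlv_ml)
print(f"TLV = {tlv_ml:.2f} mL, log TLV = {log_tlv:.2f}")
```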
Results:
RM-ANCOVA revealed two-way group*time interactions such that those assigned cognitive training demonstrated greater improvement on proximal composite (Total Training Composite) and sub-composite (Processing Speed Training Composite, Working Memory Training Composite) measures compared to their ET counterparts. Multiple linear regression showed that higher baseline TLV was associated with lower pre-post change on the Processing Speed Training sub-composite (β = -0.19, p = 0.04) but not on other composite measures.
Conclusions:
These findings demonstrate the utility of cognitive training for improving post-intervention proximal performance in older adults. Additionally, pre-post change in processing speed training appears to be more sensitive to white matter hyperintensity load than change in working memory training. These data suggest that TLV may be an important factor to consider when planning processing speed-based cognitive training interventions for remediation of cognitive decline in older adults.
Interventions using a cognitive training paradigm called the Useful Field of View (UFOV) task have been shown to be efficacious in slowing cognitive decline. However, no studies have looked at the engagement of functional networks during UFOV task completion. The current study aimed to (a) assess if regions activated during the UFOV fMRI task were functionally connected and related to task performance (henceforth called the UFOV network), (b) compare connectivity of the UFOV network to 7 resting-state functional connectivity networks in predicting proximal (UFOV) and near-transfer (Double Decision) performance, and (c) explore the relationship between segregation of higher-order networks and UFOV performance.
Participants and Methods:
336 healthy older adults (mean age=71.6) completed the UFOV fMRI task in a Siemens 3T scanner. UFOV fMRI accuracy was calculated as the number of correct responses divided by 56 total trials. Double Decision performance was calculated as the average presentation time of correct responses in log ms, with lower scores equating to better processing speed. Structural and functional MRI images were processed using the default pre-processing pipeline within the CONN toolbox. The Artifact Rejection Toolbox was set at a motion threshold of 0.9 mm and participants were excluded if more than 50% of volumes were flagged as outliers. To assess connectivity of regions associated with the UFOV task, we created 10 spherical regions of interest (ROIs) a priori using the WFU PickAtlas in SPM12. These included the bilateral pars triangularis, supplementary motor area, and inferior temporal gyri, as well as the left pars opercularis, left middle occipital gyrus, right precentral gyrus, and right superior parietal lobule. We used a weighted ROI-to-ROI connectivity analysis to model task-based within-network functional connectivity of the UFOV network and its relationship to UFOV accuracy. We then used weighted ROI-to-ROI connectivity analysis to compare the efficacy of the UFOV network versus 7 resting-state networks in predicting UFOV fMRI task performance and Double Decision performance. Finally, we calculated network segregation among higher-order resting-state networks to assess its relationship with UFOV accuracy. All functional connectivity analyses were corrected using a false discovery rate (FDR) threshold of p<0.05.
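Purely for illustration, the two behavioral scores defined above could be computed as follows; the trial-level data here are synthetic, and the use of log10 for "log ms" is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial-level data for one participant (synthetic).
n_trials = 56
correct = rng.random(n_trials) > 0.3               # True where response correct
presentation_ms = rng.uniform(17, 500, n_trials)   # presentation time in ms

# UFOV fMRI accuracy: correct responses divided by the 56 total trials.
ufov_accuracy = correct.sum() / n_trials

# Double Decision: mean presentation time of correct responses in log ms
# (lower values = better processing speed); log10 assumed here.
double_decision = np.log10(presentation_ms[correct]).mean()

print(ufov_accuracy, double_decision)
```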
Results:
ROI-to-ROI analysis showed significant within-network functional connectivity among the 10 a priori ROIs (UFOV network) during task completion (all pFDR<.05). After controlling for covariates, greater within-network connectivity of the UFOV network was associated with better UFOV fMRI performance (pFDR=.008). Regarding the 7 resting-state networks, greater within-network connectivity of the CON (pFDR<.001) and FPCN (pFDR=.014) were associated with higher accuracy on the UFOV fMRI task. Furthermore, greater within-network connectivity of only the UFOV network was associated with performance on the Double Decision task (pFDR=.034). Finally, we assessed the relationship between higher-order network segregation and UFOV accuracy. After controlling for covariates, no significant relationships between network segregation and UFOV performance remained (all p-uncorrected>0.05).
Conclusions:
To date, this is the first study to assess task-based functional connectivity during completion of the UFOV task. We observed that coherence within 10 a priori ROIs significantly predicted UFOV performance. Additionally, enhanced within-network connectivity of the UFOV network predicted better performance on the Double Decision task, while conventional resting-state networks did not. These findings provide potential targets to optimize efficacy of UFOV interventions.
Cognitive training using a visual speed-of-processing task, called the Useful Field of View (UFOV) task, reduced dementia risk and reduced decline in activities of daily living at a 10-year follow-up in older adults. However, there is variability in the level of cognitive gains after cognitive training across studies. One potential explanation for this variability could be moderating factors. Prior studies suggest variables moderating cognitive training gains share features of the training task. Learning trials of the Hopkins Verbal Learning Test-Revised (HVLT-R) and Brief Visuospatial Memory Test-Revised (BVMT-R) recruit similar cognitive abilities and have overlapping neural correlates with the UFOV task and speed-of-processing/working memory tasks and therefore could serve as potential moderators. Exploring moderating factors of cognitive training gains may boost the efficacy of interventions, improve rigor in the cognitive training literature, and eventually help provide tailored treatment recommendations. This study explored the association between HVLT-R and BVMT-R learning and the UFOV task, and assessed the moderation of HVLT-R and BVMT-R learning on UFOV improvement after a 3-month speed-of-processing/attention and working memory cognitive training intervention in cognitively healthy older adults.
Participants and Methods:
75 healthy older adults (M age = 71.11, SD = 4.61) were recruited as part of a larger clinical trial through the Universities of Florida and Arizona. Participants were randomized into a cognitive training (n=36) or education control (n=39) group and underwent a 40-hour, 12-week intervention. The cognitive training intervention consisted of practicing 4 attention/speed-of-processing tasks (including the UFOV task) and 4 working memory tasks. The education control intervention consisted of watching 40-minute educational videos. The HVLT-R and BVMT-R were administered at the pre-intervention timepoint as part of a larger neurocognitive battery. The learning ratio was calculated as (trial 3 total - trial 1 total) / (12 - trial 1 total). UFOV performance was measured at pre- and post-intervention time points via the POSIT Brain HQ Double Decision Assessment. Multiple linear regressions predicted baseline Double Decision performance from HVLT-R and BVMT-R learning ratios, controlling for study site, age, sex, and education. A repeated measures moderation analysis assessed the moderation of HVLT-R and BVMT-R learning ratio on Double Decision change from pre- to post-intervention for the cognitive training and education control groups.
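Written out as a fraction (the 12 corresponding to the maximum score on a single learning trial of each test), the learning ratio above is:

$\text{learning ratio} = \dfrac{\text{Trial 3 total} - \text{Trial 1 total}}{12 - \text{Trial 1 total}}$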
Results:
Baseline Double Decision performance was significantly associated with BVMT-R learning ratio (β=-.303, p=.008), but not with HVLT-R learning ratio (β=-.142, p=.238). BVMT-R learning ratio moderated gains in Double Decision performance (p<.01); for each unit increase in BVMT-R learning ratio, there was a .6173-unit decrease in training gains. The HVLT-R learning ratio did not moderate gains in Double Decision performance (p>.05). There were no significant moderation effects in the education control group.
Conclusions:
Better visuospatial learning was associated with faster Double Decision performance at baseline. Those with poorer visuospatial learning improved most on the Double Decision task after training, suggesting that healthy older adults who perform below expectations may show the greatest training gains. Future cognitive training research studying visual speed-of-processing interventions should account for differing levels of visuospatial learning at baseline, as this could impact the magnitude of training outcomes.
Aging is associated with disruptions in functional connectivity within the default mode (DMN), frontoparietal control (FPCN), and cingulo-opercular (CON) resting-state networks. Greater within-network connectivity predicts better cognitive performance in older adults. Therefore, strengthening network connectivity, through targeted intervention strategies, may help prevent age-related cognitive decline or progression to dementia. Small studies have demonstrated synergistic effects of combining transcranial direct current stimulation (tDCS) and cognitive training (CT) on strengthening network connectivity; however, this association has yet to be rigorously tested on a large scale. The current study leverages longitudinal data from the first-ever Phase III clinical trial for tDCS to examine the efficacy of an adjunctive tDCS and CT intervention on modulating network connectivity in older adults.
Participants and Methods:
This sample included 209 older adults (mean age = 71.6) from the Augmenting Cognitive Training in Older Adults multisite trial. Participants completed 40 hours of CT over 12 weeks, which included 8 attention, processing speed, and working memory tasks. Participants were randomized into active or sham stimulation groups, and tDCS was administered during CT daily for two weeks then weekly for 10 weeks. For both stimulation groups, two electrodes in saline-soaked 5 × 7 cm² sponges were placed at F3 (cathode) and F4 (anode) using the 10-20 measurement system. The active group received 2 mA of current for 20 minutes. The sham group received 2 mA for 30 seconds, then no current for the remaining 20 minutes.
Participants underwent resting-state fMRI at baseline and post-intervention. CONN toolbox was used to preprocess imaging data and conduct region of interest (ROI-ROI) connectivity analyses. The Artifact Detection Toolbox, using intermediate settings, identified outlier volumes. Two participants were excluded for having greater than 50% of volumes flagged as outliers. ROI-ROI analyses modeled the interaction between tDCS group (active versus sham) and occasion (baseline connectivity versus postintervention connectivity) for the DMN, FPCN, and CON controlling for age, sex, education, site, and adherence.
Results:
Compared to sham, the active group demonstrated ROI-ROI increases in functional connectivity within the DMN following intervention (left temporal to right temporal [T(202) = 2.78, pFDR < 0.05] and left temporal to right dorsal medial prefrontal cortex [T(202) = 2.74, pFDR < 0.05]). In contrast, compared to sham, the active group demonstrated ROI-ROI decreases in functional connectivity within the FPCN following intervention (left dorsal prefrontal cortex to left temporal [T(202) = -2.96, pFDR < 0.05] and left dorsal prefrontal cortex to left lateral prefrontal cortex [T(202) = -2.77, pFDR < 0.05]). There were no significant interactions detected for CON regions.
Conclusions:
These findings (a) demonstrate the feasibility of modulating network connectivity using tDCS and CT and (b) provide important information regarding the pattern of connectivity changes occurring at these intervention parameters in older adults. Importantly, the active stimulation group showed increases in connectivity within the DMN (a network particularly vulnerable to aging and implicated in Alzheimer’s disease) but decreases in connectivity between left frontal and temporal FPCN regions. Future analyses from this trial will evaluate the association between these changes in connectivity and cognitive performance post-intervention and at a one-year timepoint.
The National Institutes of Health-Toolbox cognition battery (NIH-TCB) is widely used in cognitive aging studies and includes measures in cognitive domains evaluated for dimensional structure and psychometric properties in prior research. The present study addresses a current literature gap by demonstrating how NIH-TCB integrates into a battery of traditional clinical neuropsychological measures. The dimensional structure of NIH-TCB measures along with conventional neuropsychological tests is assessed in healthy older adults.
Participants and Methods:
Baseline cognitive data were obtained from 327 older adults. The following measures were collected: NIH-Toolbox cognitive battery, Controlled Oral Word Association (COWA) letter and animals tests, Wechsler Test of Adult Reading (WTAR), Stroop Color-Word Interference Test, Paced Auditory Serial Addition Test (PASAT), Brief Visuospatial Memory Test (BVMT), Letter-Number Sequencing (LNS), Hopkins Verbal Learning Test (HVLT), Trail Making Test A&B, and Digit Span. The Hmisc, psych, and GPArotation packages for R were used to conduct exploratory factor analyses (EFA). A 5-factor solution was examined first, followed by a 6-factor solution. Promax rotation was used for both EFA models.
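The abstract names R packages (Hmisc, psych, GPArotation) for this analysis. Purely as an illustration of the same idea, and not the authors' code, a comparable six-factor EFA with promax rotation can be sketched in Python with the factor_analyzer package; the file and column layout are hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical wide table: one row per participant, one column per test score.
scores = pd.read_csv("baseline_neuropsych_scores.csv")

# Six-factor exploratory factor analysis with promax (oblique) rotation,
# mirroring the model described above.
efa = FactorAnalyzer(n_factors=6, rotation="promax")
efa.fit(scores.to_numpy())

loadings = pd.DataFrame(efa.loadings_, index=scores.columns)
print(loadings.round(2))
```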
Results:
The 6-factor EFA solution is reported here. Results indicated the following 6 factors: working memory (Digit Span forward, backward, and sequencing, PASAT trials 1 and 2, NIH-Toolbox List Sorting, LNS), speed/executive function (Stroop color naming, word reading, and color-word interference, NIH-Toolbox Flanker, Dimensional Change, and Pattern Comparison, Trail Making Test A&B), verbal fluency (COWA letters F-A-S), crystallized intelligence (WTAR, NIH-Toolbox Oral Recognition and Picture Vocabulary), visual memory (BVMT immediate and delayed), and verbal memory (HVLT immediate and delayed). COWA animals and NIH-Toolbox Picture Sequencing did not adequately load onto any EFA factor and were excluded from the subsequent CFA.
Conclusions:
Findings indicate that in a sample of healthy older adults, these collected measures and those obtained through the NIH-Toolbox battery represent 6 domains of cognitive function. Results suggest that in this sample, picture sequencing and COWA animals did not load adequately onto the factors created from the rest of the measures collected. These findings should assist in interpreting future research using combined NIH-TCB and neuropsychological batteries to assess cognition in healthy older adults.
Nonpathological aging has been linked to decline in both verbal and visuospatial memory abilities in older adults. Disruptions in resting-state functional connectivity within well-characterized, higher-order cognitive brain networks have also been coupled with poorer memory functioning in healthy older adults and in older adults with dementia. However, there is a paucity of research on the association between higher-order functional connectivity and verbal and visuospatial memory performance in the older adult population. The current study examines the association between resting-state functional connectivity within the cingulo-opercular network (CON), frontoparietal control network (FPCN), and default mode network (DMN) and verbal and visuospatial learning and memory in a large sample of healthy older adults. We hypothesized that greater within-network CON and FPCN functional connectivity would be associated with better immediate verbal and visuospatial memory recall. Additionally, we predicted that within-network DMN functional connectivity would be associated with better delayed verbal and visuospatial memory recall. This study provides insight into whether within-network CON, FPCN, or DMN functional connectivity is associated with verbal and visuospatial memory abilities in later life.
Participants and Methods:
330 healthy older adults between 65 and 89 years old (mean age = 71.6 ± 5.2) were recruited at the University of Florida (n = 222) and the University of Arizona (n = 108). Participants underwent resting-state fMRI and completed verbal memory (Hopkins Verbal Learning Test - Revised [HVLT-R]) and visuospatial memory (Brief Visuospatial Memory Test - Revised [BVMT-R]) measures. Immediate (total) and delayed recall scores on the HVLT-R and BVMT-R were calculated using each test manual’s scoring criteria. Learning ratios on the HVLT-R and BVMT-R were quantified by dividing the number of stimuli (verbal or visuospatial) learned between the first and third trials by the number of stimuli not recalled after the first learning trial. CONN Toolbox was used to extract average within-network connectivity values for CON, FPCN, and DMN. Hierarchical regressions were conducted, controlling for sex, race, ethnicity, years of education, number of invalid scans, and scanner site.
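As an illustration of the hierarchical (two-step) regression logic described above (not the authors' code; file and variable names are hypothetical), one could write:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical participant-level table with placeholder column names
# (hvlt_total, con_connectivity, and the covariates listed above).
df = pd.read_csv("memory_connectivity_baseline.csv")

covars = ("C(sex) + C(race) + C(ethnicity) + education_years"
          " + n_invalid_scans + C(site)")

# Step 1: covariates only.
base = smf.ols(f"hvlt_total ~ {covars}", data=df).fit()

# Step 2: add within-network CON connectivity and examine the increment.
full = smf.ols(f"hvlt_total ~ {covars} + con_connectivity", data=df).fit()

print(f"R2 step 1 = {base.rsquared:.3f}, R2 step 2 = {full.rsquared:.3f}")
print(full.params["con_connectivity"], full.pvalues["con_connectivity"])
```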
Results:
Greater CON connectivity was significantly associated with better HVLT-R immediate (total) recall (β = 0.16, p = 0.01), HVLT-R learning ratio (β = 0.16, p = 0.01), BVMT-R immediate (total) recall (β = 0.14, p = 0.02), and BVMT-R delayed recall performance (β = 0.15, p = 0.01). Greater FPCN connectivity was associated with better BVMT-R learning ratio (β = 0.13, p = 0.04). HVLT-R delayed recall performance was not associated with connectivity in any network, and DMN connectivity was not significantly related to any measure.
Conclusions:
Connectivity within CON demonstrated a robust relationship with multiple components of memory function across both verbal and visuospatial domains. In contrast, FPCN evidenced a relationship only with visuospatial learning, and DMN was not significantly associated with any memory measure. These data suggest that CON may be a valuable target in longitudinal studies of age-related memory changes, as well as a possible target for future non-invasive interventions to attenuate memory decline in older adults.
Annual prevalences of antimicrobial resistance among urine isolates (3,913 Escherichia coli isolates and 1,736 Klebsiella pneumoniae isolates) from home-based primary care patients with dementia were high between 2014 and 2018 (ciprofloxacin, 18%–23% and 5%–7%, respectively; multidrug resistance, 9%–11% and 5%–6%, respectively). Multidrug resistance varied by region. Additional studies of antimicrobial resistance in home-care settings are needed.
This article presents reflections from 12 experts on language learner strategy (LLS) research. They were asked to offer their reflections in one of their domains of expertise, linking research into LLS with successful language learning and use practices. In essence, they were called upon to provide a review of recent scholarship by identifying areas where results of research had already led to the enhancement of learner strategy use, as well as to describe ongoing and future research efforts intended to enhance the strategy domain. The LLS areas dealt with include theory building, the dynamics of delivering strategy instruction (SI), meta-analyses of SI, learner diversity, SI for young language learners, SI for fine-tuning the comprehension and production of academic-level language, grammar strategies at the macro and micro levels, lessons learned from many years of LLS research in Greece, the past and future roles of technology aimed at enhancing language learning, and applications of LLS in content instruction. This review is intended to provide the field with an updated statement as to where we have been, where we are now, and where we need to go. Ideally, it will provide ideas for future studies.
Previous research has shown that interruptions can lead to delays and errors on the interrupted task. Such research, however, seldom considers whether interruptions cause a change in how information is processed. The central question of this research is to determine whether an interruption causes a processing change. We investigate this question in a decision-making paradigm well-suited for examining the decision-making process. Participants are asked to select from a set of risky gambles, each with multiple possible stochastic outcomes. The information gathering process is measured using a mouse-click paradigm. Consistent with past work, interruptions did incur a cost: An interruption increased the time and the amount of information needed to make a decision. Furthermore, after an interruption, participants did seem to partially “restart” the task. Importantly, however, there was no evidence that the information gathering pattern was changed by an interruption. There was also no overall cost to the interruption in terms of choice outcome. These results are consistent with the idea that participants recall a subset of pre-interruption information, which was then incorporated into post-interruption processing.
South Africa has embarked on major health policy reform to deliver universal health coverage through the establishment of National Health Insurance (NHI). The aim is to improve access, remove financial barriers to care, and enhance care quality. Health technology assessment (HTA) is explicitly identified in the proposed NHI legislation and will have a prominent role in informing decisions about adoption and access to health interventions and technologies. The specific arrangements and approach to HTA in support of this legislation are yet to be determined. Although there is currently no formal national HTA institution in South Africa, there are several processes in both the public and private healthcare sectors that use elements of HTA to varying extents to inform access and resource allocation decisions. Institutions performing HTAs or related activities in South Africa include the National and Provincial Departments of Health, National Treasury, National Health Laboratory Service, Council for Medical Schemes, medical scheme administrators, managed care organizations, academic or research institutions, clinical societies and associations, pharmaceutical and devices companies, private consultancies, and private sector hospital groups. Existing fragmented HTA processes should coordinate and conform to a standardized, fit-for-purpose process and structure that can usefully inform priority setting under NHI and for other decision makers. This transformation will require comprehensive and inclusive planning with dedicated funding and regulation, and provision of strong oversight mechanisms and leadership.
We prove that for any prime power $q\notin \{3,4,5\}$, the cubic extension $\mathbb{F}_{q^{3}}$ of the finite field $\mathbb{F}_{q}$ contains a primitive element $\xi$ such that $\xi+\xi^{-1}$ is also primitive, and $\operatorname{Tr}_{\mathbb{F}_{q^{3}}/\mathbb{F}_{q}}(\xi)=a$ for any prescribed $a\in \mathbb{F}_{q}$. This completes the proof of a conjecture of Gupta et al. [‘Primitive element pairs with one prescribed trace over a finite field’, Finite Fields Appl. 54 (2018), 1–14] concerning the analogous problem over an extension of arbitrary degree $n\ge 3$.
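For readers who want to see the statement in action, the following brute-force check verifies the claim for the single case $q=7$. The choice of $q=7$ and of the modulus polynomial $x^{3}+x+1$ (irreducible over $\mathbb{F}_{7}$) are illustrative assumptions made here and are unrelated to the paper's proof.

```python
# Brute-force check of the stated result for q = 7, with F_{343} realized as
# GF(7)[x]/(x^3 + x + 1). The modulus is an illustrative choice, not from the paper.
q = 7
ORDER = q**3 - 1                      # 342 = 2 * 3^2 * 19
PRIME_FACTORS = (2, 3, 19)
ONE = [1, 0, 0]                       # coefficient lists, low degree first

def fmul(a, b):
    """Multiply two elements of GF(7)[x]/(x^3 + x + 1)."""
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] = (prod[i + j] + a[i] * b[j]) % q
    # Reduce with x^3 = -(x + 1): coefficient c at degree d folds into
    # degrees d-3 and d-2 with sign -c.
    for d in (4, 3):
        c, prod[d] = prod[d], 0
        prod[d - 3] = (prod[d - 3] - c) % q
        prod[d - 2] = (prod[d - 2] - c) % q
    return prod[:3]

def fpow(a, n):
    """Exponentiation by repeated squaring."""
    result, base = ONE[:], a[:]
    while n:
        if n & 1:
            result = fmul(result, base)
        base = fmul(base, base)
        n >>= 1
    return result

def is_primitive(a):
    """Nonzero a is primitive iff a^(ORDER/p) != 1 for every prime p | ORDER."""
    return all(fpow(a, ORDER // p) != ONE for p in PRIME_FACTORS)

def trace(a):
    """Tr(a) = a + a^q + a^(q^2), which lands in the base field GF(7)."""
    t = a[:]
    for e in (q, q * q):
        t = [(u + v) % q for u, v in zip(t, fpow(a, e))]
    assert t[1] == 0 and t[2] == 0
    return t[0]

# For every prescribed trace value a in GF(7), look for a primitive xi with
# xi + xi^{-1} also primitive and Tr(xi) = a.
witness = {}
for a0 in range(q):
    for a1 in range(q):
        for a2 in range(q):
            xi = [a0, a1, a2]
            if xi == [0, 0, 0] or not is_primitive(xi):
                continue
            inv = fpow(xi, ORDER - 1)                       # xi^{-1}
            s = [(u + v) % q for u, v in zip(xi, inv)]      # xi + xi^{-1}
            if s == [0, 0, 0] or not is_primitive(s):
                continue
            witness.setdefault(trace(xi), xi)

assert sorted(witness) == list(range(q))  # a witness exists for every trace value
print(witness)
```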