While global financial capital is abundant, it flows into corporate investments and real estate rather than climate change actions in cities. Political will and public pressure are crucial to redirecting funds. Studies of economic impacts underestimate the costs of climate disasters, especially in cities, thereby undermining political commitments while understating potential climate-related returns. The shift of corporate approaches towards incorporating environmental, social, and governance (ESG) impacts offers promise for private-sector climate investments but has recently been contested. Institutional barriers remain at all levels, particularly in African cities. Since the Global North controls the world's financial markets, new means of increasing funding for the Global South are needed, especially for adaptation. Innovative financial instruments and targeted use of environmental insurance tools can upgrade underdeveloped markets and align urban climate finance with ESG frameworks. These approaches, however, require climate impact data collection, programs to improve cities' and countries' creditworthiness, and training. This title is also available as open access on Cambridge Core.
Bronze Age–Early Iron Age tin ingots recovered from four Mediterranean shipwrecks off the coasts of Israel and southern France can now be provenanced to tin ores in south-west Britain. These exceptionally rich and accessible ores played a fundamental role in the transition from copper to full tin-bronze metallurgy across Europe and the Mediterranean during the second millennium BC. The authors’ application of a novel combination of three independent analyses (trace element, lead and tin isotopes) to tin ores and artefacts from Western and Central Europe also provides the foundation for future analyses of the pan-continental tin trade in later periods.
Objectives/Goals: Research supports the use of music to improve the care and well-being of adults living with dementia; however, the practice and implementation of music in elder care communities is not regulated. The goal of this qualitative study was to survey elder care communities in Northeast Kansas to determine the use of music with people living with dementia. Methods/Study Population: We interviewed staff (n = 10) at five elder care communities in the Kansas City Metro area and observed musical activities and artifacts in shared living spaces within each community. Interview questions included details of the frequency and purpose of using music, who determined which music to use, and any effects, positive or negative, the interviewee believed to be associated with the use of music. Musical events, such as visiting musicians or music therapists leading group sing-alongs, were observed at two communities, and music-related activities led by staff were observed at two others. Results/Anticipated Results: Music was used in some way at each of the five communities. Each location had recorded music available to residents in the shared living spaces, and most had a piano in the main lounge area. During the sing-alongs and music-related activities, residents were observed singing along to songs from memory, engaging with one another and the group leader, and smiling. Staff employed by each community varied in their level of musical training and experience, from none to a full-time music therapist in residence. Staff interviewed said they believed music was helpful to aid memory recall, reduce anxiety, and engage interest. Interestingly, a music therapist at one site also described how music during mealtimes created too much of a distraction for residents and interfered with dietary care.
Discussion/Significance of Impact: It is clear from both the staff interviews and direct observations of musical activities that music is important to consider for people living with dementia in care communities. Guidelines for implementation and minimum standards would be helpful to ensure all care community residents can experience benefits highlighted by staff in this study.
We present the Sydney Radio Star Catalogue, a new catalogue of stars detected at megahertz to gigahertz radio frequencies. It consists of 839 unique stars with 3,405 radio detections, more than doubling the previously known number of radio stars. We have included stars from large area searches for radio stars found using circular polarisation searches, cross-matching, variability searches, and proper motion searches as well as presenting hundreds of newly detected stars from our search of Australian SKA Pathfinder observations. The focus of this first version of the catalogue is on objects detected in surveys using SKA precursor and pathfinder instruments; however, we will expand this scope in future versions. The 839 objects in the Sydney Radio Star Catalogue are distributed across the whole sky and range from ultracool dwarfs to Wolf-Rayet stars. We demonstrate that the radio luminosities of cool dwarfs are lower than the radio luminosities of more evolved sub-giant and giant stars. We use X-ray detections of 530 radio stars by the eROSITA soft X-ray instrument onboard the Spectrum Roentgen Gamma spacecraft to show that almost all of the radio stars in the catalogue are over-luminous in the radio, indicating that the majority of stars at these radio frequencies are coherent radio emitters. The Sydney Radio Star Catalogue can be found in VizieR or at https://radiostars.org.
Fetal brain size is decreased in some children with complex CHDs, and the distribution of blood and accompanying oxygen and nutrients is regionally skewed from early fetal life, depending on the CHD. In transposition of the great arteries, deoxygenated blood preferentially runs to the brain, whereas the more oxygenated blood is directed towards the lungs and the abdomen. Knowledge of whether this impacts intrauterine organ development is limited. We investigated lung, liver, and total intracranial volume in fetuses with transposition of the great arteries using MRI.
Eight fetuses with dextro-transposition and without concomitant disease or chromosomal abnormalities and 42 fetuses without CHD or other known diseases were scanned once or twice at gestational age 30 through 39 weeks. The MRI scans were conducted on a 1.5T system, using a 2D balanced steady-state free precession sequence. Slices acquired covered the entire fetus, slice thickness was 10 mm, pixel size 1.5 × 1.5 mm, and scan duration was 30 sec.
The mean lung z score was significantly larger in fetuses with transposition than in those without a CHD (mean difference 1.24; 95% CI: 0.59 to 1.89; p < 0.001). Lung size, corrected for estimated fetal weight, was also larger than in fetuses without transposition (mean difference 8.1 cm3/kg; 95% CI: 2.5 to 13.7 cm3/kg; p = 0.004).
In summary, fetuses with dextro-transposition of the great arteries had both absolute and relatively larger lung volumes than those without CHD. No differences were seen in liver and total intracranial volume. Despite the small number of cases, the results are interesting and warrant further investigation.
Background: External comparisons of antimicrobial use (AU) may be more informative if adjusted for encounter characteristics. Optimal methods to define input variables for encounter-level risk-adjustment models of AU are not established. Methods: This retrospective analysis of electronic health record data included 50 US hospitals in 2020-2021. We used NHSN definitions for all antibacterial days of therapy (DOT), including adult and pediatric encounters with at least 1 day present in inpatient locations. We assessed 4 methods to define input variables: 1) diagnosis-related group (DRG) categories by Yu et al., 2) adjudicated Elixhauser comorbidity categories by Goodman et al., 3) all Clinical Classification Software Refined (CCSR) diagnosis and procedure categories, and 4) adjudicated CCSR categories in which codes not appropriate for AU risk-adjustment were excluded by expert consensus, requiring review of 867 codes over 4 months to attain consensus. Data were split randomly, stratified by bed size, as follows: 1) a training dataset including two-thirds of encounters among two-thirds of hospitals; 2) an internal testing set including the remaining one-third of encounters within training hospitals; and 3) an external testing set including the remaining one-third of hospitals. We used a gradient-boosted machine (GBM) tree-based model and a two-stage approach to first identify encounters with zero DOT, then estimate DOT among those with >0.5 probability of receiving antibiotics. Accuracy was assessed using mean absolute error (MAE) in the testing datasets. Correlation plots compared model estimates and observed DOT among testing datasets. The top 20 most influential variables were defined using modeled variable importance. Results: Our datasets included 629,445 training, 314,971 internal testing, and 419,109 external testing encounters. Demographic data included 41% male, 59% non-Hispanic White, 25% non-Hispanic Black, 9% Hispanic, and 5% pediatric encounters. DRG was missing in 29% of encounters.
MAE was lower in pediatric encounters than in adult encounters, and lowest for models incorporating CCSR inputs (Figure 1). Performance in internal and external testing was similar, though the Goodman/Elixhauser variable strategies were less accurate in external testing and underestimated long-DOT outliers (Figure 2). Agnostic and adjudicated CCSR model estimates were highly correlated, and their lists of influential variables were similar (Figure 3). Conclusion: Larger numbers of CCSR diagnosis and procedure inputs improved risk-adjustment model accuracy compared with prior strategies. Variable importance and accuracy were similar for agnostic and adjudicated approaches. However, maintaining expert adjudications would require significant time and could introduce personal bias. If these findings are confirmed, the need for expert adjudication of input variables should be reconsidered.
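The two-stage estimation strategy described in the methods (a classifier to flag encounters likely to receive any antibiotics, then a regressor for DOT among those above the 0.5 probability threshold) can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: the synthetic features, sample sizes, and default model settings are stand-ins for the real CCSR/DRG inputs.

```python
# Hedged sketch of a two-stage DOT risk-adjustment model (illustrative, not the authors' code).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))          # stand-in encounter features (e.g., CCSR indicator flags)
p_any = 1 / (1 + np.exp(-X[:, 0]))   # synthetic probability of receiving any antibiotics
any_abx = rng.random(n) < p_any
dot = np.where(any_abx, np.maximum(1, np.round(3 + 2 * X[:, 1] + rng.normal(size=n))), 0.0)

train, test = slice(0, 1500), slice(1500, n)

# Stage 1: classify encounters with zero vs. nonzero days of therapy (DOT)
clf = GradientBoostingClassifier().fit(X[train], dot[train] > 0)
# Stage 2: regress DOT only among encounters that actually received antibiotics
reg = GradientBoostingRegressor().fit(X[train][dot[train] > 0], dot[train][dot[train] > 0])

prob = clf.predict_proba(X[test])[:, 1]
# Estimate DOT only where predicted probability of antibiotics exceeds 0.5; else zero
est = np.where(prob > 0.5, reg.predict(X[test]), 0.0)
mae = mean_absolute_error(dot[test], est)
```

Accuracy here, as in the abstract, would be summarized by the MAE between estimated and observed DOT on held-out encounters.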
Disclosure: Elizabeth Dodds Ashley: Advisor- HealthTrackRx. David J Weber: Consultant on vaccines: Pfizer; DSMB chair: GSK; Consultant on disinfection: BD, GAMA, PDI, Germitec
Introduction: Second-generation antipsychotics are widely used in psychiatry but are associated with weight gain. Obesity is more prevalent in mental illness and may contribute to the mortality gap. Non-pharmacological management of antipsychotic-induced weight gain (AIWG) has limited success, whilst pharmacological treatment typically involves antidiabetic medications with which psychiatrists have less experience. Recent developments in the field have shown promise using centrally-acting opioid receptor antagonists (CORAs) to treat AIWG.
Objective: To review and synthesise the available RCT evidence on the efficacy of CORAs in treating AIWG.
Methods
Four databases (Medline, Embase, PsycINFO, Cochrane) were searched, from database inception to present, for RCTs using CORAs (naloxone, naltrexone, samidorphan) to reduce AIWG. The primary outcome was weight change in kilograms, with secondary outcomes of change in percentage of body weight, waist circumference, and 7% or 10% weight-change thresholds. We used random-effects meta-analysis due to study heterogeneity.
Results
A total of 450 articles were found (319 post-deduplication), of which seven met criteria (samidorphan = 4, naltrexone = 3, naloxone = 0), including n = 1,416 patients. On meta-analysis, change in body weight (kg) for CORAs as a class was statistically significant (RE = 1.37 kg; 95% CI: 0.51, 2.24). However, change in BMI was not statistically significant (RE = 0.61 kg/m2; 95% CI: −0.56, 1.78). The remaining analyses were available only for samidorphan, which showed statistically significant improvements in change in body weight (%) (RE = 1.81%; 95% CI: 1.07, 2.55), absolute risk of weight gain ≥7% (RE = 12.41%; 95% CI: 6.55, 18.27), absolute risk of weight gain ≥10% (RE = 10.83%; 95% CI: 5.46, 16.21), and change in waist circumference (RE = 1.50 cm; 95% CI: 0.32, 2.67).
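The random-effects pooling used for these estimates can be illustrated with the standard DerSimonian-Laird estimator, which down-weights studies by their within-study variance plus an estimated between-study variance. The study effects and variances below are hypothetical, not the trials in this review.

```python
# DerSimonian-Laird random-effects meta-analysis (illustrative; hypothetical study data).
import numpy as np

def dl_random_effects(effects, variances):
    """Pool per-study effect sizes (e.g., mean weight change in kg) under a random-effects model."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1 / v                                   # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)     # between-study variance (truncated at 0)
    w_re = 1 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical weight-change effects (kg) and variances from 4 trials
est, ci = dl_random_effects([1.1, 1.6, 0.9, 1.8], [0.2, 0.3, 0.25, 0.4])
```

When the confidence interval excludes zero, the pooled effect is statistically significant, as with the class-level body-weight result reported above.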
Conclusion
Evidence is strongest for samidorphan, though CORAs as a class remains poorly researched and the benefits are modest. Additionally, samidorphan is currently only available in the combination medication olanzapine-samidorphan and the literature reflects this. Further research is needed to examine its efficacy in AIWG from other antipsychotics.
Ward rounds are complex clinical interactions crucial in delivering high-quality, safe, and timely patient care. They serve as a platform for the multidisciplinary team to collaboratively assess a patient's condition and actively involve the patient and their caregivers in shared decision-making to formulate a care plan. Ward rounds involve an intersection of factors worthy of consideration separate from the wider literature on inpatient experience and multidisciplinary team meetings. With this review our primary aim is to systematically identify what methods and perspectives researchers are using to understand ward rounds.
Methods
The databases searched were Medline, CINAHL, British Nursing Index, PsycINFO, and ASSIA, supplemented by reference and citation checking. The search terms used were psychiatr* AND (ward round OR “multi disciplinary team meeting” OR “clinical team meeting”). Studies were included if they were peer-reviewed and included primary research on psychiatric inpatient ward rounds in which patients are participants, with no restriction on the type of ward or hospital, patient group, country, or methodology.
Results
224 records were retrieved and screened from the database search and 10 from other sources. 35 full texts were reviewed for eligibility and 26 were included in the review. 16 studies had no particular theoretical perspective, 2 were constructivist, 2 critical realist, 2 lean methodology, 1 systems research, 1 phenomenological, 1 trauma-informed, and 1 critical theory. 9 focussed on patient experience, 5 on ward round structure, 3 on power relationships, 3 on efficiency, 2 on shared decision-making, and 4 had a unique focus. Though often not explicit, critical theory-influenced discussion of power is common in papers focused on patient experience and ward round structure. Cross-sectional surveys, interviews, focus groups, and audit cycles were the most common methods. Key themes which emerge are anxiety provoked by ward rounds, preparation and communication, and the negotiation of power structures. Key tensions identified include being multidisciplinary versus overcrowding, efficiency versus personalisation, and reliability versus responsiveness.
Conclusion
For a central part of inpatient psychiatric practice, there is a limited range of research on psychiatric ward rounds. The influence of critical theories’ focus on power was widespread, with limited representation of other theoretical perspectives and concerns. There was no research using experimental methods, but there was some implementation research. Key tensions are highlighted which services may wish to consider when revisiting ways of working on inpatient psychiatric wards.
Many available filtering facepiece respirators contain ferromagnetic components, which may cause significant problems in the magnetic resonance imaging (MRI) environment. We conducted a randomized crossover trial to assess the effectiveness, usability, and comfort of 3 types of respirators judged to be either “conditionally MRI safe” with an aluminum nosepiece (Halyard 46727 duckbill-type respirators and Care Essentials MSK-002 bifold cup-type respirators) or “MRI safe” and completely metal free (Eagle AG2200 semirigid cup-type respirators).
Design and setting:
We recruited 120 participants to undergo a quantitative fit test (QNFT) on each of the 3 respirators in a randomized order. Participants then completed a usability and comfort assessment of each respirator.
Results:
There were significant differences in the QNFT pass rates (51% for Halyard 46727, 73% for Care Essentials MSK-002, and 86% for Eagle AG2200; P < .001). The first-time fit-test pass rate and overall fit factor were significantly higher for Eagle AG2200 than for the other 2 respirators. However, Eagle AG2200 received the lowest ratings for ease of use and overall comfort. There were no significant differences in other modalities, including seal rating, breathability, firmness, and overall assessment.
Conclusions:
Our study supports the utility of the Eagle AG2200 and Care Essentials MSK-002 respirators for healthcare professionals working in an MRI environment, based on their high QNFT pass rates and reasonably good overall usability and comfort scores. Eagle AG2200 is unique because of its metal-free construction. However, its comparatively lower usability and comfort ratings raise questions about practicality, which may be improved by greater user training.
OBJECTIVES/GOALS: Adoption of the Observational Medical Outcomes Partnership (OMOP) common data model promises to transform large-scale observational health research. However, there are diverse challenges for operationalizing OMOP in terms of interoperability and technical skills among coordinating centers throughout the US. METHODS/STUDY POPULATION: A team from the Critical Path Institute (C-Path) collaborated with the informatics team members at Johns Hopkins to provide technical support to participating sites as part of the Extract, Transform, and Load (ETL) process linking existing concepts to OMOP concepts. Health systems met regularly via teleconference to review challenges and progress in the ETL process. Sites were responsible for performing the local ETL process with assistance and securely provisioning de-identified data as part of the CURE ID program. RESULTS/ANTICIPATED RESULTS: More than twenty health systems participated in the CURE ID effort. Laboratory measures, basic demographics, disease diagnoses, and problem-list entries were more easily mapped to OMOP concepts by CURE ID partner institutions. Outcomes, social determinants of health, medical devices, and specific treatments were less easily characterized as part of the project. Concepts within the medical record presented very different technical challenges in terms of representation. There is a lack of standardization in OMOP implementation even among centers using the same electronic medical health record. Readiness to adopt OMOP varied across the institutions that participated. Health systems achieved variable levels of coverage using OMOP medical concepts as part of the initiative. DISCUSSION/SIGNIFICANCE: Adoption of OMOP involves local stakeholder knowledge and implementation. Variable complexity of health concepts contributed to variable coverage. Documentation and support require extensive time and effort. Open-source software can be technically challenging.
Interoperability of secure data systems presents unique problems.
Preemergence applications of mesotrione, an herbicide that inhibits 4-hydroxyphenylpyruvate dioxygenase (HPPD), have recently gained regulatory approval in soybean varieties with appropriate traits. Giant ragweed is an extremely competitive broadleaf weed, and biotypes resistant to acetolactate synthase inhibitors (ALS-R) can be particularly difficult to manage with soil-residual herbicides in soybean production. This study investigated control of giant ragweed from preemergence applications of cloransulam (32 g ai ha–1), metribuzin (315 g ai ha–1), and S-metolachlor (1,600 g ai ha–1) in a factorial design with and without mesotrione (177 g ai ha–1) at two different sites over 2 yr. Treatments with mesotrione were also compared with two commercial premix products: sulfentrazone (283 g ai ha–1) and cloransulam (37 g ai ha–1), and chlorimuron (19 g ai ha–1), flumioxazin (69 g ai ha–1), and pyroxasulfone (87 g ai ha–1). At 42 d after planting, control and biomass reduction of giant ragweed were greater in treatments with mesotrione than any treatment without mesotrione. Giant ragweed biomass was reduced by 84% in treatments with mesotrione, whereas treatments without mesotrione did not reduce biomass relative to the nontreated. Following these preemergence applications, sequential herbicide treatments utilizing postemergence applications of glufosinate (655 g ai ha–1) plus fomesafen (266 g ai ha–1) and S-metolachlor (1,217 g ai ha–1) resulted in at least 97% control of giant ragweed at 42 d after planting, which was greater than sequential applications of glufosinate alone in 3 of 4 site-years. Preemergence applications of mesotrione can be an impactful addition to soybean herbicide programs designed to manage giant ragweed, with the potential to improve weed control and delay the onset of herbicide resistance by providing an additional effective herbicide site of action.
This study investigates the impact of primary care utilisation of a symptom-based head and neck cancer risk calculator (Head and Neck Cancer Risk Calculator version 2) in the post-coronavirus disease 2019 period on the number of primary care referrals and cancer diagnoses.
Methods
The number of referrals from April 2019 to August 2019 and from April 2020 to July 2020 (pre-calculator) was compared with the number from the period January 2021 to August 2022 (post-calculator) using the chi-square test. The patients’ characteristics, referral urgency, triage outcome, Head and Neck Cancer Risk Calculator version 2 score and cancer diagnosis were recorded.
Results
In total, 1110 referrals from the pre-calculator period were compared with 1559 from the post-calculator period. Patient characteristics were comparable for both cohorts. More patients were referred on the cancer pathway in the post-calculator cohort (51.1 per cent pre-calculator vs 64.0 per cent post-calculator). The cancer diagnosis rate increased from 2.7 per cent in the pre-calculator cohort to 3.3 per cent in the post-calculator cohort. A lower rate of cancer diagnosis in the non-cancer pathway occurred in the cohort managed using the Head and Neck Cancer Risk Calculator version 2 (10 per cent vs 23 per cent, p = 0.10).
Conclusion
Head and Neck Cancer Risk Calculator version 2 demonstrated high sensitivity in cancer diagnosis. Further studies are required to improve the predictive strength of the calculator.
Traumatic brain injury is one of several recognized risk factors for cognitive decline and neurodegenerative disease. Currently, risk scores involving modifiable risk/protective factors for dementia have not incorporated head injury history as part of their overall weighted risk calculation. We investigated the association between the LIfestyle for BRAin Health (LIBRA) risk score with odds of mild cognitive impairment (MCI) diagnosis and cognitive function in older former National Football League (NFL) players, both with and without the influence of concussion history.
Participants and Methods:
Former NFL players, ages ≥ 50 (N=1050; mean age=61.1±5.4 years), completed a general health survey including self-reported medical history and ratings of function across several domains. LIBRA factors (weighted value) included cardiovascular disease (+1.0), hypertension (+1.6), hyperlipidemia (+1.4), diabetes (+1.3), kidney disease (+1.1), cigarette use history (+1.5), obesity (+1.6), depression (+2.1), social/cognitive activity (-3.2), physical inactivity (+1.1), low/moderate alcohol use (-1.0), and healthy diet (-1.7). Within Group 1 (n=761), logistic regression models assessed the association of LIBRA scores and the independent contribution of concussion history with the odds of MCI diagnosis. A modified LIBRA score incorporated concussion history at the level at which planned contrasts showed significant associations across concussion-history groups (0, 1-2, 3-5, 6-9, 10+). The weighted value for concussion history (+1.9) within the modified LIBRA score was based on its proportional contribution to dementia relative to other LIBRA risk factors, as proposed by the 2020 Lancet Commission Report on Dementia Prevention. Associations of the modified LIBRA score with odds of MCI and cognitive function were assessed via logistic and linear regression, respectively, in a subset of the sample (Group 2; n=289) who also completed the Brief Test of Adult Cognition by Telephone (BTACT). Race was included as a covariate in all models.
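A weighted additive risk score of this kind simply sums the weights of the factors present for an individual. The sketch below uses the weights listed in the abstract; the scoring function, factor names, and example profile are our illustration, not the study's analysis code.

```python
# Illustrative LIBRA-style weighted risk score (weights from the abstract;
# function and factor keys are hypothetical stand-ins).
LIBRA_WEIGHTS = {
    "cardiovascular_disease": 1.0, "hypertension": 1.6, "hyperlipidemia": 1.4,
    "diabetes": 1.3, "kidney_disease": 1.1, "smoking_history": 1.5,
    "obesity": 1.6, "depression": 2.1, "social_cognitive_activity": -3.2,
    "physical_inactivity": 1.1, "low_moderate_alcohol": -1.0, "healthy_diet": -1.7,
}
CONCUSSION_WEIGHT = 1.9  # modified score: added when concussion history meets the threshold

def libra_score(factors, concussion_history=False, modified=False):
    """Sum the weights of the factors present; optionally add the concussion term."""
    score = sum(LIBRA_WEIGHTS[f] for f in factors)
    if modified and concussion_history:
        score += CONCUSSION_WEIGHT
    return round(score, 1)

# Hypothetical profile: hypertension and depression (risks), protective
# social/cognitive activity, plus qualifying concussion history
s = libra_score({"hypertension", "depression", "social_cognitive_activity"},
                concussion_history=True, modified=True)
```

Note that protective factors carry negative weights, so an individual's score can fall below zero, consistent with the negative IQR bounds reported in the results.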
Results:
The median LIBRA score in Group 1 was 1.6 (IQR = -1, 3.6). Standard and modified LIBRA median scores were 1.1 (IQR = -1.3, 3.3) and 2 (IQR = -0.4, 4.6), respectively, within Group 2. In Group 1, LIBRA score was significantly associated with odds of MCI diagnosis (odds ratio [95% confidence interval] = 1.27 [1.19, 1.28], p < .001). Concussion history provided additional information beyond LIBRA scores and was independently associated with odds of MCI; specifically, odds of MCI were higher among those with 6-9 (OR = 2.54 [1.21, 5.32], p < .001) and 10+ (OR = 4.55 [2.21, 9.36], p < .001) concussions, compared with those with no prior concussions. Within Group 2, the modified LIBRA score was associated with higher odds of MCI (OR = 1.61 [1.15, 2.25]) and incrementally improved model information (0.04 increase in Nagelkerke R2) above standard LIBRA scores in the same model. Modified LIBRA scores were inversely associated with BTACT Executive Function (B = -0.53 [0.08], p = .002) and Episodic Memory scores (B = -0.53 [0.08], p = .002).
Conclusions:
Numerous modifiable risk/protective factors for dementia are reported in former professional football players, but incorporating concussion history may aid the multifactorial appraisal of cognitive decline risk and identification of areas for prevention and intervention. Integration of multi-modal biomarkers will advance this person-centered, holistic approach toward dementia reduction, detection, and intervention.
It has been posited that alcohol use may confound the association between greater concussion history and poorer neurobehavioral functioning. However, while greater alcohol use is positively correlated with neurobehavioral difficulties, the association between alcohol use and concussion history is not well understood. Therefore, this study investigated the cross-sectional and longitudinal associations between cumulative concussion history, years of contact sport participation, and health-related/psychological factors with alcohol use in former professional football players across multiple decades.
Participants and Methods:
Former professional American football players completed general health questionnaires in 2001 and 2019, including demographic information, football history, concussion/medical history, and health-related/psychological functioning. Alcohol use frequency and amount were reported for three timepoints: during professional career (collected retrospectively in 2001), 2001, and 2019. For the during-professional-career and 2001 timepoints, alcohol use frequency included none, 1-2, 3-4, 5-7 days/week, while amount included none, 1-2, 3-5, 6-7, 8+ drinks/occasion. For 2019, frequency included never, monthly or less, 2-4 times/month, 2-3 times/week, >4 times/week, while amount included none, 1-2, 3-4, 5-6, 7-9, 10+ drinks/occasion. Scores on a screening measure for Alcohol Use Disorder (CAGE) were also available for the during-professional-career and 2001 timepoints. Concussion history was recorded in 2001 and binned into five groups: 0, 1-2, 3-5, 6-9, 10+. Depression and pain interference were assessed via PROMIS measures at all timepoints. Sleep disturbance was assessed in 2001 via a separate instrument and with PROMIS Sleep Disturbance in 2019. Spearman’s rho correlations tested associations between concussion history and years of sport participation with alcohol use across timepoints, and whether poor health functioning (depression, pain interference, sleep disturbance) in 2001 and 2019 was associated with alcohol use both within and between timepoints.
Results:
Among the 351 participants (mean age = 47.86 [SD = 10.18] years in 2001), there were no significant associations between concussion history or years of contact sport participation and CAGE scores or alcohol use frequency/amount during professional career, in 2001, or in 2019 (rhos = -.072 to .067, ps > .05). In 2001, greater depressive symptomology and sleep disturbance were related to higher CAGE scores (rho = .209, p < .001; rho = .176, p < .001, respectively), while greater depressive symptomology, pain interference, and sleep disturbance were related to higher alcohol use frequency (rho = .176, p = .002; rho = .109, p = .045; rho = .132, p = .013, respectively) and amount/occasion (rho = .215, p < .001; rho = .127, p = .020; rho = .153, p = .004, respectively). In 2019, depressive symptomology, pain interference, and sleep disturbance were not related to alcohol use (rhos = -.047 to .087, ps > .05). Between timepoints, more sleep disturbance in 2001 was associated with higher alcohol amount/occasion in 2019 (rho = .115, p = .036).
Conclusions:
Increased alcohol intake has been theorized to be a consequence of greater concussion history, and as such, thought to confound associations between concussion history and neurobehavioral function later in life. Our findings indicate concussion history and years of contact sport participation were not significantly associated with alcohol use cross-sectionally or longitudinally, regardless of alcohol use characterization. While higher levels of depression, pain interference, and sleep disturbance in 2001 were related to greater alcohol use in 2001, they were not associated cross-sectionally in 2019. Results support the need to concurrently address health-related and psychological factors in the implementation of alcohol use interventions for former NFL players, particularly earlier in the sport discontinuation timeline.
Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments. However, many factors may impede or prevent face-to-face assessments, including distance to clinic, limited mobility, poor eyesight, or lack of transportation. The COVID-19 pandemic further widened gaps in access to care and clinical research participation. Alternatives to face-to-face assessments may provide an opportunity to alleviate the burden caused by both the COVID-19 pandemic and longer-standing social inequities. The objectives of this study were to develop and assess the feasibility of telephone- and video-administered versions of the Uniform Data Set (UDS) v3 cognitive battery for use by NIH-funded Alzheimer’s Disease Research Centers (ADRCs) and other research programs.
Participants and Methods:
Ninety-three individuals (mean age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent adjudicated cognitive status was normal cognition (N=44), MCI (N=35), mild dementia (N=11), or other (N=3). They completed portions of the UDSv3 cognitive battery, plus the RAVLT, by either telephone or video format within approximately 6 months (M = 151 days) of their annual in-person visit, where they completed the same cognitive assessments in person. Some measures were substituted (Oral Trails for TMT; Blind MoCA for MoCA) to allow for phone administration. Participants also answered questions about the pleasantness, difficulty level, and preference for administration mode. Cognitive testers provided ratings of perceived validity of the assessment. Participants’ cognitive status was adjudicated by a group of cognitive experts blinded to the most recent in-person cognitive status.
Results:
When results from the video and phone modalities were combined, 74% of participants rated the remote assessment as pleasant as the in-person assessment, and 75% rated the difficulty of completing the remote cognitive assessment as the same as in-person testing. Overall perceived validity of the testing session, as determined by cognitive assessors, was good (video = 92%; phone = 87.5%). There was generally good concordance between test scores obtained remotely and in person (r = .3 to .8; ps < .05), regardless of whether they were administered by phone or video, though individual test correlations differed slightly by mode. Substituted measures also generally correlated well, with the exception of TMT-A and OTMT-A (p > .05). Agreement between cognitive status adjudicated remotely and cognitive status based on in-person data was generally high (78%), with slightly better concordance for video/in-person (82%) than phone/in-person (76%).
Conclusions:
This pilot study provided support for the use of telephone- and video-administered cognitive assessments using the UDSv3 among individuals with normal cognitive function and some degree of cognitive impairment. Participants found the experience similarly pleasant and no more difficult than in-person assessment. Test scores obtained remotely correlated well with those obtained in person, with some variability across individual tests. Adjudication of cognitive status did not differ significantly whether it was based on data obtained remotely or in person. The study was limited by its small sample size, large test-retest window, and lack of randomization of test-modality order. Current efforts are underway to more fully validate this battery of tests for remote assessment. Funded by: P30 AG072947 & P30 AG049638-05S1
Traumatic brain injury and cardiovascular disease (CVD) are modifiable risk factors for cognitive decline and dementia. Greater concussion history can potentially increase risk for cerebrovascular changes associated with cognitive decline and may compound effects of CVD. We investigated the independent and dynamic effects of CVD/risk factor burden and concussion history on cognitive function and odds of mild cognitive impairment (MCI) diagnoses in older former National Football League (NFL) players.
Participants and Methods:
Former NFL players, ages 50-70 (N=289; mean age=61.02±5.33 years), reported medical history and completed the Brief Test of Adult Cognition by Telephone (BTACT). CVD/risk factor burden was characterized as ordinal (0-3+) based on the sum of the following conditions: coronary artery disease/myocardial infarction, chronic obstructive pulmonary disease, hypertension, hyperlipidemia, sleep apnea, and type 1 and type 2 diabetes. Cognitive outcomes included BTACT Executive Function and Episodic Memory Composite Z-scores (standardized on age- and education-based normative data), and the presence of physician-diagnosed (self-reported) MCI. Concussion history was discretized into five groups: 0, 1-2, 3-5, 6-9, 10+. Linear and logistic regression models were fit to test independent and joint effects of concussion history and CVD burden on cognitive outcomes and odds of MCI. Race (dichotomized as White and Non-white due to sample distribution) was included in all models as a covariate.
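The predictor coding described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code: the condition flag names are hypothetical, and only the ordinal capping (0-3+) and five-group concussion discretization follow the abstract.

```python
# Illustrative sketch of the predictor coding described in the Methods.
# Condition keys are hypothetical labels for the seven reported conditions.
CVD_CONDITIONS = [
    "cad_mi",            # coronary artery disease / myocardial infarction
    "copd",              # chronic obstructive pulmonary disease
    "hypertension",
    "hyperlipidemia",
    "sleep_apnea",
    "diabetes_type1",
    "diabetes_type2",
]

def cvd_burden(history):
    """Ordinal CVD/risk factor burden: count of endorsed conditions, capped at 3 (i.e., 3+)."""
    count = sum(1 for c in CVD_CONDITIONS if history.get(c, False))
    return min(count, 3)

def concussion_group(n):
    """Discretize lifetime concussion count into the five reported groups."""
    if n == 0:
        return "0"
    if n <= 2:
        return "1-2"
    if n <= 5:
        return "3-5"
    if n <= 9:
        return "6-9"
    return "10+"
```

These coded variables would then enter the linear (composite Z-scores) and logistic (MCI diagnosis) regression models as predictors alongside the race covariate.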
Results:
Greater CVD burden (unstandardized beta [standard error]: B=-0.10[0.42], p=.013) and race (B=0.622[0.09], p<.001) were associated with lower executive functioning. Compared to those with 0 prior concussions, no significant differences were observed for those with 1-2, 3-5, 6-9, or 10+ prior concussions (ps >.05). Race (B=0.61[.13], p<.001), but not concussion history or CVD burden, was associated with episodic memory. There was a trend for lower episodic memory scores among those with 10+ prior concussions compared to those with no prior concussions (B=-0.49[.25], p=.052). There were no significant differences in episodic memory among those with 1-2, 3-5, or 6-9 prior concussions compared to those with 0 prior concussions (ps>.05). CVD burden (B=0.35[.13], p=.008), race (greater odds in the Non-white group; B=0.82[.29], p=.005), and greater concussion history (higher odds of diagnosis in the 10+ group compared to those with 0 prior concussions; B=2.19[0.78], p<.005) were associated with higher odds of MCI diagnosis. Significant interaction effects between concussion history and CVD burden were not observed for any outcome (ps >.05).
Conclusions:
Lower executive functioning and higher odds of MCI diagnosis were associated with higher CVD burden and race. Very high concussion history (10+) was selectively associated with higher odds of MCI diagnosis. Reduction of these modifiable factors may mitigate adverse outcomes in older contact sport athletes. In former athletes, consideration of CVD burden is particularly pertinent when assessing executive dysfunction, considered to be a common cognitive feature of traumatic encephalopathy syndrome under the recently proposed diagnostic criteria. Further research should investigate the social and structural determinants contributing to racial disparities in long-term health outcomes among former NFL players.
There is a pressing need for sensitive, non-invasive indicators of cognitive impairment in those at risk for Alzheimer’s disease (AD). One group at an increased risk for AD is APOEε4 carriers. One study found that cognitively normal APOEε4 carriers are less likely to produce low-frequency (i.e., less common) words on semantic fluency tasks relative to non-carriers, but this finding has not yet been replicated. This study aims to replicate these findings within the Wake Forest ADRC clinical core population and to examine whether they extend to additional semantic fluency tasks.
Participants and Methods:
This sample includes 221 APOEε4 non-carriers (165 females, 56 males; 190 White, 28 Black/African American, 3 Asian; M age = 69.55) and 79 APOEε4 carriers (59 females, 20 males; 58 White, 20 Black/African American, 1 Asian; M age = 65.52) who had been adjudicated as cognitively normal at baseline. Semantic fluency data for both the animal task and the vegetable task were scored for total number of items as well as mean lexical frequency (obtained from the SUBTLEXus database). Demographic variables and additional cognitive variables (MMSE, MoCA, AMNART scores) were also included from the participants’ baseline visit.
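The fluency scoring described above can be sketched as follows. This is a hedged illustration, not the study's pipeline: the SUBTLEXus lookup is mocked as a plain dictionary with toy frequency values, whereas real scoring would load the published SUBTLEXus norms, and the function name is an assumption.

```python
# Toy word -> frequency lookup standing in for the SUBTLEXus norms
# (values here are illustrative only, not real SUBTLEXus frequencies).
SUBTLEXUS_FREQ = {
    "dog": 234.5,
    "cat": 180.1,
    "aardvark": 0.2,
    "carrot": 6.9,
}

def score_fluency(responses, freq_table):
    """Score one fluency trial: total item count and mean lexical
    frequency across responses found in the frequency table."""
    total = len(responses)
    freqs = [freq_table[w.lower()] for w in responses if w.lower() in freq_table]
    mean_freq = sum(freqs) / len(freqs) if freqs else None
    return total, mean_freq
```

A lower mean frequency on this measure would indicate that a participant produced rarer exemplars, which is the signal the replicated study examined in APOEε4 carriers.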
Results:
APOEε4 carriers and non-carriers did not differ on years of education, AMNART scores, or gender (ps > 0.05). APOEε4 carriers were slightly younger and included more Black/African American participants (ps < 0.05). Stepwise linear regression was used to determine the variance in total fluency score and mean lexical frequency accounted for by APOEε4 status after including relevant demographic variables (age, sex, race, years of education, and AMNART score). As expected, demographic variables accounted for significant variance in total fluency score (p < 0.0001). Age accounted for significant variance in total fluency score for both the animal task (β = -0.32, p < 0.0001) and the vegetable task (β = -0.29, p < 0.0001), but interestingly, not the lexical frequency of words produced. After accounting for demographic variables, APOEε4 status did not account for additional variance in lexical frequency for either fluency task (ps > 0.05). Interestingly, APOEε4 status was a significant predictor of total words for the vegetable semantic fluency task only (β = 0.13, p = 0.01), resulting in a model that accounted for more variance (R2 = 0.25, F(6, 292) = 16.11, p < 0.0001) in total words than demographic variables alone (R2 = 0.23, F(5, 293) = 17.75, p < 0.0001).
Conclusions:
Unsurprisingly, we found that age, AMNART, and education were significant predictors of total word fluency. One unexpected finding was that age did not predict lexical frequency; that is, regardless of age, participants tended to retrieve words of the same lexical frequency, which stands in contrast to the notion that retrieval efficiency for infrequent words declines with age. With regard to APOEε4, we did not replicate existing work demonstrating differences in lexical frequency on semantic fluency tasks between ε4 carriers and non-carriers, possibly due to differences in the demographic characteristics of the sample.