The First Large Absorption Survey in H i (FLASH) is a large-area radio survey for neutral hydrogen in and around galaxies in the intermediate redshift range $0.4\lt z\lt1.0$, using the 21-cm H i absorption line as a probe of cold neutral gas. The survey uses the ASKAP radio telescope and will cover 24,000 deg$^2$ of sky over the next five years. FLASH breaks new ground in two ways – it is the first large H i absorption survey to be carried out without any optical preselection of targets, and we use an automated Bayesian line-finding tool to search through large datasets and assign a statistical significance to potential line detections. Two Pilot Surveys, covering around 3000 deg$^2$ of sky, were carried out in 2019-22 to test and verify the strategy for the full FLASH survey. The processed data products from these Pilot Surveys (spectral-line cubes, continuum images, and catalogues) are public and available online. In this paper, we describe the FLASH spectral-line and continuum data products and discuss the quality of the H i spectra and the completeness of our automated line search. Finally, we present a set of 30 new H i absorption lines that were robustly detected in the Pilot Surveys, almost doubling the number of known H i absorption systems at $0.4\lt z\lt1$. The detected lines span a wide range in H i optical depth, including three lines with a peak optical depth $\tau\gt1$, and appear to be a mixture of intervening and associated systems. Interestingly, around two-thirds of the lines found in this untargeted sample are detected against sources with a peaked-spectrum radio continuum, which are only a minor (5–20%) fraction of the overall radio-source population. The detection rate for H i absorption lines in the Pilot Surveys (0.3 to 0.5 lines per 40 deg$^2$ ASKAP field) is a factor of two below the expected value. 
One possible reason for this is the presence of a range of spectral-line artefacts in the Pilot Survey data that have now been mitigated and are not expected to recur in the full FLASH survey. A future paper in this series will discuss the host galaxies of the H i absorption systems identified here.
Edited by
David Mabey, London School of Hygiene and Tropical Medicine; Martin W. Weber, World Health Organization; Moffat Nyirenda, London School of Hygiene and Tropical Medicine; Dorothy Yeboah-Manu, Noguchi Memorial Institute for Medical Research, University of Ghana; Jackson Orem, Uganda Cancer Institute, Kampala; Laura Benjamin, University College London; Michael Marks, London School of Hygiene and Tropical Medicine; Nicholas A. Feasey, Liverpool School of Tropical Medicine
The mortality rate of children less than 5 years of age has decreased by 60% since 1990, with the Millennium Development Goals having been a powerful driver of improvement. However, the reduction has not been evenly distributed throughout the world (UN IGME 2020). Sub-Saharan Africa remains the region with the highest under-5 mortality rate in the world, where 1 child in every 13 dies before celebrating their 5th birthday (UN IGME 2020).
Emerging research has highlighted a relationship between diet and genetics, suggesting that individuals may benefit more from personalised dietary recommendations based on their genetic risk for cardiovascular disease (CVD) (1,2). This study aims to: (1) measure knowledge of genetics among healthcare professionals (HCPs) working in CVD; (2) identify HCPs’ attitudes to using genetic risk to tailor dietary interventions; and (3) identify perceived barriers and enablers to implementing genetics to tailor dietary interventions. In a mixed-methods study, Australian HCPs (dietitians and AHPRA-registered healthcare professionals) working with people with CVD were invited to complete an anonymous online survey (REDCap) and an optional interview. Recruitment occurred through social media and relevant professional organisations. Survey questions were underpinned by the theoretical domains framework (3) and data were synthesised descriptively. Semi-structured interviews were undertaken via Zoom and analysed thematically using the Braun & Clarke methodology (4). Survey respondents (n = 63, 89% female, mean age 42 ± 14 years) were primarily dietitians (83%), with ≥ 10 years of experience (56%), and most spent at least 20% of their time working with people with CVD (n = 55, 87%). Approximately half of respondents were aware that genetic testing for CVD exists (n = 36) and always assess family history of CVD (n = 31). Few respondents reported using genetic testing (n = 5, 8%) or felt confident interpreting and using genetic testing in practice (n = 7, 11%). Respondents were interested in incorporating genetics into their practice to tailor dietary advice (n = 44, 70%). Primary barriers to using genetic testing included financial costs to patients and potential negative implications for some patients. Almost all respondents (94%) agreed that genetic testing will allow more targeted and personalised approaches to the prevention and management of CVD.
From the interviews (n = 15, 87% female, 43 ± 17 years, 87% dietitian), three themes were identified: (1) ‘On the periphery of care’—HCPs are aware of the role of genetics in health and are interested in knowing more, but it is not yet part of usual practice; (2) ‘A piece of the puzzle’—using genetic testing could be a tool to help personalise, prioritise and motivate participants; and (3) ‘Whose role is it?’—There is uncertainty regarding HCP roles and knowing exactly whose role it is to educate patients. Healthcare professionals are interested in using genetics to tailor dietary advice for CVD, but potential implications for patients need to be considered. Upskilling is required to increase their knowledge and confidence in this area. Further clarity regarding HCP roles in patient education is needed before this can be implemented in practice.
The economic burden of migraine is substantial; an estimate of the cost that migraine imposes on the Canadian healthcare system is needed.
Methods:
Administrative data were used to identify adults living with migraine, including chronic migraine (CM) and episodic migraine (EM), and matched controls in Alberta, Canada. One- and two-part generalized linear models with gamma distribution were used to estimate direct healthcare costs (hospitalization, emergency department, ambulatory care, physician visit, prescription medication; reported in 2022 Canadian dollars) of migraine during a 1-year observation period (2017/2018).
Results:
The fully adjusted total mean healthcare cost of migraine (n = 100,502) was 1.5 times (cost ratio: 1.53 [95% CI: 1.50, 1.55]) higher versus matched controls (n = 301,506), with a predicted annual incremental cost of $2,806 (95% CI: $2,664, $2,948) per person. The predicted annual incremental cost of CM and EM was $5,059 (95% CI: $4,836, $5,283) and $669 (95% CI: $512, $827) per person, respectively, compared with matched controls. All healthcare cost categories were greater for migraine (overall, CM and EM) compared with matched controls, with prescription medication the primary cost driver (incremental cost – overall: $1,381 [95% CI: $1,234, $1,529]; CM: $2,057 [95% CI: $1,891, $2,223]; EM: $414 [95% CI: $245, $583] per person per year).
Conclusion:
Persons living with migraine had greater direct healthcare costs than those without. With an estimated migraine prevalence of 8.3%–10.2%, this condition may account for an additional $1.05–1.29 billion in healthcare costs per year in Alberta. Strategies to prevent and effectively manage migraine and associated healthcare costs are needed.
The complexity of movement disorders poses challenges for clinical management and research. Functional imaging with PET or SPECT allows in-vivo assessment of the molecular underpinnings of movement disorders, and biomarkers can aid clinical decision making and understanding of pathophysiology, or determine patient eligibility and endpoints in clinical trials. Imaging targets traditionally include functional processes at the molecular level, typically neurotransmitter systems or brain metabolism, and more recently abnormal protein accumulation, a pathologic hallmark of neurodegenerative diseases. Functional neuroimaging provides complementary information to structural neuroimaging (e.g. anatomic MRI), as molecular/functional changes can present in the absence of, prior to, or alongside structural brain changes. Movement disorder specialists should be aware of the indications, advantages and limitations of molecular functional imaging. An overview is given of functional molecular imaging in movement disorders, covering methodologic background information, typical molecular changes in common movement disorders, and emerging topics with potential for greater future importance.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
The incidence of facial palsy has been rising worldwide, with recent evidence emerging of links to COVID-19 infection. To date, guidance on cost-effective treatments is limited to medication (prednisolone). In terms of physical therapy, neuromuscular retraining (NMR) to restore balanced facial function has been most widely evaluated, but not in terms of cost effectiveness. The added value of telerehabilitation is unknown.
Methods
A multistage technology assessment was conducted, which included the following:
• a national survey of current therapy pathways in the UK and patients’ and clinicians’ views on the benefits and challenges of telerehabilitation;
• a systematic review of clinical effectiveness trials evaluating facial NMR therapy;
• calculation of long-term morbidity costs (national economic burden) based on incidence, patient recovery profiles, health-related quality of life, and national facial palsy treatment costs (valuation of clinical improvements in monetary terms was provided by a national Delphi panel); and
• evaluation of the cost effectiveness of telerehabilitation (remote monitoring wearables) added to current face-to-face NMR delivery.
Results
Nationally, approximately five percent of patients with facial palsy (17% of unresolved cases) are referred for facial NMR. The long-term economic burden associated with unresolved cases is estimated to range from GBP351 (EUR417) to GBP584 (EUR692) million, indicating substantial savings if long-term recovery can be improved. Medical treatment costs are GBP86.34 (EUR102) million per annual cohort, and physical and psychological therapy costs are GBP643,292 (EUR762,561). Economic modeling showed that telerehabilitation was cost effective, producing a health gain and a cost-saving of GBP468 (EUR555) per patient. If scaled to the national level for all patients who do not recover fully, an annual saving of GBP3.075 (EUR3.65) million is possible.
Conclusions
Economic modeling indicates that NMR could improve patient outcomes and reduce costs. The national survey demonstrated that access to NMR therapy services is limited, so introduction of telerehabilitation could improve access for currently underserved populations. Future clinical trials need to incorporate economic evaluations to help inform decision-making.
Diagnostic criteria for major depressive disorder allow for heterogeneous symptom profiles but genetic analysis of major depressive symptoms has the potential to identify clinical and etiological subtypes. There are several challenges to integrating symptom data from genetically informative cohorts, such as sample size differences between clinical and community cohorts and various patterns of missing data.
Methods
We conducted genome-wide association studies of major depressive symptoms in three cohorts that were enriched for participants with a diagnosis of depression (Psychiatric Genomics Consortium, Australian Genetics of Depression Study, Generation Scotland) and three community cohorts who were not recruited on the basis of diagnosis (Avon Longitudinal Study of Parents and Children, Estonian Biobank, and UK Biobank). We fit a series of confirmatory factor models with factors that accounted for how symptom data was sampled and then compared alternative models with different symptom factors.
Results
The best fitting model had a distinct factor for Appetite/Weight symptoms and an additional measurement factor that accounted for the skip-structure in community cohorts (use of Depression and Anhedonia as gating symptoms).
Conclusion
The results show the importance of assessing the directionality of symptoms (such as hypersomnia versus insomnia) and of accounting for study and measurement design when meta-analyzing genetic association data.
Rift propagation, rather than basal melt, drives the destabilization and disintegration of the Thwaites Eastern Ice Shelf. Since 2016, rifts have episodically advanced throughout the central ice-shelf area, with rapid propagation events occurring during austral spring. The ice shelf's speed has increased by ~70% during this period, transitioning from a rate of 1.65 m d−1 in 2019 to 2.85 m d−1 by early 2023 in the central area. The increase in longitudinal strain rates near the grounding zone has led to full-thickness rifts and melange-filled gaps since 2020. A recent sea-ice breakout has accelerated retreat at the western calving front, effectively separating the ice shelf from what remained of its northwestern pinning point. Meanwhile, a distributed set of phase-sensitive radar measurements indicates that the basal melting rate is generally small, likely due to widespread robust ocean stratification beneath the ice–ocean interface that suppresses basal melt despite the presence of substantial oceanic heat at depth. These observations, in combination with damage modeling, show that, while ocean forcing is responsible for triggering the current West Antarctic ice retreat, the Thwaites Eastern Ice Shelf is experiencing dynamic feedbacks over decadal timescales that are driving ice-shelf disintegration, now independent of basal melt.
Major depressive disorder (MDD) is the leading cause of disability globally, with moderate heritability and well-established socio-environmental risk factors. Genetic studies have been mostly restricted to European settings, with polygenic scores (PGS) demonstrating low portability across diverse global populations.
Methods
This study examines genetic architecture, polygenic prediction, and socio-environmental correlates of MDD in a family-based sample of 10 032 individuals from Nepal with array genotyping data. We used genome-based restricted maximum likelihood to estimate heritability, applied S-LDXR to estimate the cross-ancestry genetic correlation between Nepalese and European samples, and modeled PGS trained on a GWAS meta-analysis of European and East Asian ancestry samples.
Results
We estimated the narrow-sense heritability of lifetime MDD in Nepal to be 0.26 (95% CI 0.18–0.34, p = 8.5 × 10−6). Our analysis was underpowered to estimate the cross-ancestry genetic correlation (rg = 0.26, 95% CI −0.29 to 0.81). MDD risk was associated with higher age (beta = 0.071, 95% CI 0.06–0.08), female sex (beta = 0.160, 95% CI 0.15–0.17), and childhood exposure to potentially traumatic events (beta = 0.050, 95% CI 0.03–0.07), while neither the depression PGS (beta = 0.004, 95% CI −0.004 to 0.01) nor its interaction with childhood trauma (beta = 0.007, 95% CI −0.01 to 0.03) was strongly associated with MDD.
Conclusions
Estimates of lifetime MDD heritability in this Nepalese sample were similar to previous European ancestry samples, but PGS trained on European data did not predict MDD in this sample. This may be due to differences in ancestry-linked causal variants, differences in depression phenotyping between the training and target data, or setting-specific environmental factors that modulate genetic effects. Additional research among under-represented global populations will ensure equitable translation of genomic findings.
Understanding disease-modifying therapy (DMT) use and healthcare resource utilization by different geographical areas among people living with multiple sclerosis (pwMS) may identify care gaps that can be used to inform policies and practice to ensure equitable care.
Methods:
Administrative data were used to identify pwMS on April 1, 2017 (index date) in Alberta. DMT use and healthcare resource utilization were compared across geographical areas of residence over a 2-year post-index period; simple logistic regression was applied.
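With a single binary exposure (urban versus rural residence), a simple logistic regression reduces to the odds ratio of a 2×2 table. The sketch below reconstructs such a comparison from the DMT proportions reported in the Results; the urban/rural split of the n = 12,338 cohort is a hypothetical figure for illustration only, as the abstract does not report it.

```python
import math

# Reported DMT dispensation proportions (32.3% urban vs. 27.4% rural);
# the urban/rural split of the cohort is an assumed, illustrative value.
n_urban, n_rural = 10_000, 2_338
p_urban, p_rural = 0.323, 0.274

a = round(n_urban * p_urban); b = n_urban - a    # urban: DMT yes / no
c = round(n_rural * p_rural); d = n_rural - c    # rural: DMT yes / no

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # Woolf standard error on log scale
ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
      math.exp(math.log(odds_ratio) + 1.96 * se_log_or))
```

Fitting a logistic regression of DMT receipt on an urban indicator would return the same odds ratio as this closed-form calculation, which is why the 2×2 version is a useful sanity check on the model output.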
Results:
Among the cohort (n = 12,338), a higher proportion of pwMS who resided in urban areas (versus rural) received ≥ 1 DMT dispensation (32.3% versus 27.4%), had a neurologist visit (67.7% versus 63.9%), a non-neurologist specialist visit (88.3% versus 82.9%), an ambulatory care visit (87.4% versus 85.3%), or an MS tertiary clinic visit (59.2% versus 51.7%), while a lower proportion had an emergency department (ED) visit (46.3% versus 62.4%) or a hospitalization (20.4% versus 23.0%). Across the provincial health zones, there were variations in DMT selection. A higher proportion of pwMS who resided in the Calgary health zone, where care is managed by MS tertiary clinic neurologists, had an outpatient visit to a neurologist or MS tertiary clinic than those who resided in other zones, where delivery of MS-related care is more varied.
Conclusions:
Urban/rural inequalities in DMT use and healthcare resource utilization appear to exist among pwMS in Alberta. These findings suggest that barriers should be explored and strategies developed to increase access to DMTs and provide timely outpatient MS care, particularly for pwMS residing in rural areas.
Montmorillonite-based catalysts were compared with an acidic ion-exchange resin of the type used industrially for the production of methyl t-butyl ether (MTBE) from methanol and isobutene or t-butanol. When 1,4-dioxan was used as solvent, Al3+-exchanged montmorillonites had about half the efficiency of the resin Amberlyst 15 at 60°C; they were, however, about twice as efficient at this temperature as Ti3+-montmorillonite or K10, a commercially available acid-treated bentonite. Montmorillonite exchanged with Chlorhydrol solutions to give interlayer [Al13O4(OH)24(H2O)12]7+ ions, and pillared clays derived from such materials, were poor catalysts, as was K306, a more drastically acid-treated bentonite-based commercial catalyst. Freeze-drying of the Al3+-clay before reaction to produce a more open, porous structure had no effect on its catalytic efficiency. The activation energy for the reaction of isobutene and methanol in dioxan was 44 kJ/mole for an Al3+-clay catalyst compared with 25 kJ/mole for reactions catalyzed by Amberlyst 15. With no solvent (as in industrial processes), the rates of reaction were considerably slower for both the clay- and resin-catalyzed reactions. As has been found previously for resin-catalyzed reactions using stoichiometric amounts or an excess of methanol, the rate was proportional to the isobutene concentration, and the rate-determining step appeared to be protonation of the alkene. The performance of the Al3+-clay catalyst was increased by reducing the water content of the clay. In most reactions the clay catalysts were equilibrated at 12% relative humidity; exposure of the clay to a low vacuum (10−1 torr) before use increased its catalytic activity from 50% to 60% of that of Amberlyst 15.
Participation in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) has numerous benefits, yet many eligible children remain unenrolled. This qualitative study sought to explore perceptions of a novel electronic health record (EHR) intervention to facilitate referrals to WIC and improve communication/coordination between WIC staff and healthcare professionals.
Methods:
WIC staff in three counties were provided EHR access and recruited to participate. An automated, EHR-embedded WIC participation screening and referral tool was implemented within eight healthcare clinics; healthcare professionals within these clinics were eligible to participate. The interview guide was developed using the Consolidated Framework for Implementation Research to elicit perceptions of this novel EHR-based intervention. Semi-structured interviews were conducted via telephone. Interviews were recorded, transcribed, coded, and analyzed using thematic analysis.
Results:
Twenty semi-structured interviews were conducted with eight WIC staff, seven pediatricians, four medical assistants, and one registered nurse. Most participants self-identified as female (95%) and White (55%). We identified four primary themes: (1) healthcare professionals had a positive view of WIC but communication and coordination between WIC and healthcare professionals was limited prior to WIC having EHR access; (2) healthcare professionals favored WIC screening using the EHR but workflow challenges existed; (3) EHR connections between WIC and the healthcare system can streamline referrals to and enrollment in WIC; and (4) WIC staff and healthcare professionals recommended that WIC have EHR access.
Conclusions:
A novel EHR-based intervention has potential to facilitate healthcare referrals to WIC and improve communication/coordination between WIC and healthcare systems.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now emphasis to expand beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
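The pipeline described above (logistic regression producing predicted probabilities, which then feed an ROC analysis) can be sketched with simulated data. The sample size matches the study, but the data, the 0.9 SD group shift in the biomarker, and the single covariate are illustrative assumptions, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 567                                    # matches the study's sample size
impaired = rng.integers(0, 2, n)           # 1 = cognitively impaired, 0 = unimpaired

# Hypothetical z-scored biomarker, shifted upward in the impaired group
gfap_z = rng.normal(0.0, 1.0, n) + 0.9 * impaired
age = rng.normal(74.3, 7.5, n)             # one illustrative covariate

X = np.column_stack([gfap_z, age])
model = LogisticRegression(max_iter=1000).fit(X, impaired)

# Predicted probabilities from the logistic model are the ROC inputs
pred = model.predict_proba(X)[:, 1]
auc = roc_auc_score(impaired, pred)
```

Running the same pipeline once per biomarker (and once with all markers in one model, as in the joint panel) makes the AUC values directly comparable, which is the comparison reported in the Results.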
Results:
The mean (SD) age of the sample was 74.34 (7.54), 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed plasma GFAP discriminated the cognitively impaired from unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001) as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Non-motor symptoms, such as mild cognitive impairment and dementia, are an overwhelming cause of disability in Parkinson’s disease (PD). While subthalamic nucleus deep brain stimulation (STN DBS) is safe and effective for motor symptoms, declines in verbal fluency after bilateral DBS surgery have been widely replicated. However, little is known about cognitive outcomes following unilateral surgeries.
Participants and Methods:
We enrolled 31 PD patients who underwent unilateral STN DBS in a randomized, cross-over, double-blind study (SUNDIAL Trial). Targets were chosen based on treatment of the most symptomatic side (n = 17 left hemisphere, n = 14 right hemisphere). All participants completed a neuropsychological battery (FAS/CFL, AVLT, DKEFS Color-Word Test) at baseline, then 2, 4, and 6 months post-surgery. Outcomes included raw scores for verbal fluency, immediate and delayed recall, and DKEFS Color-Word Inhibition trial (Trial 3) completion time. At 2, 4, and 6 months, the neurostimulation type (directional versus ring mode) was randomized for each participant. We compared baseline scores on all cognitive outcome measures using Welch’s two-sample t-tests and used linear mixed effects models to examine longitudinal effects of hemisphere and stimulation on cognition. Mid-study, the test battery was converted to teleneuropsychology administration because of COVID-19; administration mode was included as a covariate in all statistical models, along with years of education, baseline cognitive scores, and levodopa equivalent medication dose at each time point.
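A linear mixed-effects model of this shape (a random intercept per participant, with fixed effects for time, hemisphere, and their interaction) can be sketched with simulated data. The group sizes mirror the study, but the slopes, baseline scores, and noise levels below are illustrative assumptions chosen to mimic a left-hemisphere decline and a right-hemisphere improvement.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for i in range(31):                          # 31 participants, as in the study
    hemi = "left" if i < 17 else "right"     # 17 left, 14 right implants
    baseline = rng.normal(40.0, 5.0)         # subject-specific baseline fluency
    slope = -0.8 if hemi == "left" else 0.6  # assumed decline vs. improvement
    for months in (0, 2, 4, 6):              # baseline and post-surgery visits
        rows.append({"subj": i, "hemi": hemi, "months": months,
                     "fluency": baseline + slope * months + rng.normal(0.0, 2.0)})
df = pd.DataFrame(rows)

# Random intercept for each participant; the months:hemi interaction captures
# the hemisphere-dependent trajectory of verbal fluency over time
fit = smf.mixedlm("fluency ~ months * hemi", df, groups=df["subj"]).fit()
left_slope = fit.params["months"]                # left-hemisphere time trend
slope_diff = fit.params["months:hemi[T.right]"]  # right-minus-left slope difference
```

A negative `months` coefficient with a positive `months:hemi[T.right]` interaction is the pattern consistent with the Results: decline after left implants and improvement after right implants.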
Results:
At baseline, patients who underwent left hemisphere implants scored lower on verbal fluency than those with right hemisphere implants (t(20.66) = -2.49, p = 0.02). There were no significant differences between hemispheres in immediate recall (p = 0.57), delayed recall (p = 0.22), or response inhibition (p = 0.51). Post-operatively, left STN DBS patients experienced significant declines in verbal fluency over the study period (p = 0.02), while patients with right-sided stimulation demonstrated improvements (p < .001). There was no main effect of stimulation parameters (directional versus ring) on verbal fluency, memory, or inhibition, but there was a three-way interaction between time, stimulation parameters, and hemisphere on inhibition, such that left STN DBS patients receiving ring stimulation completed the inhibition trial faster (p = 0.035). After surgery, right STN DBS patients displayed faster inhibition times than patients with left implants (p = 0.015).
Conclusions:
Declines in verbal fluency after bilateral stimulation are the most commonly reported cognitive sequelae of DBS for movement disorders. Here we found group-level declines in verbal fluency after unilateral left STN implants, but not right STN DBS, up to 6 months after surgery; patients with right hemisphere implants instead displayed improvements. Compared to bilateral DBS, the hemisphere of unilateral implantation appears to be a modifiable risk factor for verbal fluency declines in patients with Parkinson’s disease, with right-sided surgery carrying lower risk.
White matter hyperintensity (WMH) burden is greater, has a frontal-temporal distribution, and is associated with proxies of exposure to repetitive head impacts (RHI) in former American football players. These findings suggest that in the context of RHI, WMH might have unique etiologies that extend beyond those of vascular risk factors and normal aging processes. The objective of this study was to evaluate the correlates of WMH in former elite American football players. We examined markers of amyloid, tau, neurodegeneration, inflammation, axonal injury, and vascular health and their relationships to WMH. A group of age-matched asymptomatic men without a history of RHI was included to determine the specificity of the relationships observed in the former football players.
Participants and Methods:
240 male participants aged 45-74 (60 unexposed asymptomatic men, 60 male former college football players, 120 male former professional football players) underwent semi-structured clinical interviews, magnetic resonance imaging (structural T1, T2 FLAIR, and diffusion tensor imaging), and lumbar puncture to collect cerebrospinal fluid (CSF) biomarkers as part of the DIAGNOSE CTE Research Project. Total WMH lesion volumes (TLV) were estimated using the Lesion Prediction Algorithm from the Lesion Segmentation Toolbox. Structural equation modeling, using Full-Information Maximum Likelihood (FIML) to account for missing values, examined the associations between log-TLV and the following variables: total cortical thickness, whole-brain average fractional anisotropy (FA), CSF amyloid β42, CSF p-tau181, CSF sTREM2 (a marker of microglial activation), CSF neurofilament light (NfL), and the modified Framingham stroke risk profile (rFSRP). Covariates included age, race, education, APOE e4 carrier status, and evaluation site. Bootstrapped 95% confidence intervals assessed statistical significance. Models were performed separately for football players (college and professional players pooled; n=180) and the unexposed men (n=60). Due to differences in sample size, estimates were compared and were considered different if the percent change in the estimates exceeded 10%.
Results:
In the former football players (mean age=57.2, 34% Black, 29% APOE e4 carriers), reduced cortical thickness (B=-0.25, 95% CI [-0.45, -0.08]), lower average FA (B=-0.27, 95% CI [-0.41, -0.12]), higher p-tau181 (B=0.17, 95% CI [0.02, 0.43]), and higher rFSRP score (B=0.27, 95% CI [0.08, 0.42]) were associated with greater log-TLV. Compared with the unexposed men, substantial differences in estimates were observed for rFSRP (Bcontrol=0.02, Bfootball=0.27, 994% difference), average FA (Bcontrol=-0.03, Bfootball=-0.27, 802% difference), and p-tau181 (Bcontrol=-0.31, Bfootball=0.17, -155% difference). In the former football players, rFSRP showed a stronger positive association and average FA a stronger negative association with WMH than in the unexposed men. The association between WMH and cortical thickness was similar in the two groups (Bcontrol=-0.27, Bfootball=-0.25, 7% difference).
Conclusions:
These results suggest that the risk factors and biological correlates of WMH differ between former American football players and asymptomatic individuals unexposed to RHI. In addition to vascular risk factors, white matter integrity on diffusion tensor imaging showed a stronger relationship with WMH burden in the former football players. FLAIR WMH is therefore a promising measure for further investigation of the late, multifactorial pathologies of RHI.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for the detection and management of Alzheimer’s disease (AD) and for the study of its mechanisms. Given their novelty, these plasma biomarkers must be validated against postmortem neuropathological outcomes. Research has shown the utility of plasma markers within the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample collected within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed National Alzheimer’s Coordinating Center procedures and diagnostic criteria, and the NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the associations between plasma GFAP and autopsy-confirmed AD status, and between GFAP and semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the associations between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses, using predicted probabilities from the binary logistic regressions, examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
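To illustrate the AUC statistic used above (this is a generic sketch, not the study's analysis; the example values are invented), the AUC of a continuous marker against a binary outcome can be computed directly from the Mann-Whitney U statistic:

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score with every negative score; ties count 0.5
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical log-GFAP values for AD vs non-AD brain donors (illustrative only)
scores = np.array([2.1, 2.4, 1.8, 2.9, 1.2, 1.5, 1.1, 2.0])
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(f"AUC = {auc(labels, scores):.2f}")
```

An AUC of 0.5 indicates chance-level discrimination and 1.0 indicates perfect separation; in the study, predicted probabilities from the covariate-adjusted logistic models play the role of `scores`.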
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) White participants, and 20 (44.4%) APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
Female fertility is a complex trait, with age-specific changes in spontaneous dizygotic (DZ) twinning and fertility. To elucidate factors regulating female fertility and infertility, we conducted a genome-wide association study (GWAS) of mothers of spontaneous DZ twins (MoDZT) versus controls (3,273 cases, 24,009 controls). This is a follow-up to the Australia/New Zealand (ANZ) component of the study previously reported by Mbarek et al. (2016), with a sample size almost twice that of the entire discovery sample meta-analysed in that article (and five times the ANZ contribution to it), resulting from newly available additional genotyping and representing a significant increase in power. We compare analyses with and without male controls and show unequivocally that it is better to include male controls who have been screened for recent family history than to use only female controls. The SNP-based GWAS identified four genome-wide significant signals, including one novel region, ZFPM1 (Zinc Finger Protein, FOG Family Member 1), on chromosome 16. Previous signals near FSHB (Follicle Stimulating Hormone beta subunit) and SMAD3 (SMAD Family Member 3) were also replicated (Mbarek et al., 2016). We also ran the GWAS with a dominance model, which identified a further locus, ADRB2, on chromosome 5. These results have been contributed to the International Twinning Genetics Consortium for inclusion in the next GWAS meta-analysis (Mbarek et al., in press).
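The dominance model mentioned above differs from the standard additive GWAS model only in how genotypes are coded. As a minimal sketch with invented genotypes (not the study's pipeline), the two codings for a biallelic SNP are:

```python
import numpy as np

# Hypothetical minor-allele counts (0, 1, or 2) for one SNP across eight people
genotypes = np.array([0, 1, 2, 1, 0, 2, 1, 0])

# Additive coding: each copy of the minor allele contributes equally
additive = genotypes                       # 0, 1, 2

# Dominance coding: carrying at least one copy is what matters
dominance = (genotypes > 0).astype(int)    # 0, 1, 1

print(additive.tolist())
print(dominance.tolist())
```

In either case the coded genotype enters the association model as the predictor; a locus such as ADRB2 can reach significance under the dominance coding while being missed by the additive scan.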
This is the fourth comprehensive assessment of the population status of all wild bird species in Europe. It identifies Species of European Conservation Concern (SPECs) so that action can be taken to improve their status. Species are categorised according to their global extinction risk, the size and trend of their European population and range, and Europe’s global responsibility for them. Of the 546 species assessed, 207 (38%) are SPECs: 74 (14%) of global concern (SPEC 1); 32 (6%) of European concern and concentrated in Europe (SPEC 2); and 101 (18%) of European concern but not concentrated in Europe (SPEC 3). The proportion of SPECs has remained similar (38–43%) across all four assessments since 1994, but the number of SPEC 1 species of global concern has trebled. The 44 species assessed as Non-SPECs in the third assessment (2017) but as SPECs here include multiple waders, raptors and passerines that breed in arctic, boreal or alpine regions, highlighting the growing importance of northern Europe and mountain ecosystems for bird conservation. Conversely, the 62 species assessed as SPECs in 2017 but as Non-SPECs here include various large waterbirds and raptors that are recovering due to conservation action. Since 1994, the number of specially protected species (listed on Annex I of the EU Birds Directive) qualifying as SPECs has fallen by 33%, while the number of huntable (Annex II) species qualifying as SPECs has risen by 56%. The broad patterns identified previously remain evident: 100 species have been classified as SPECs in all four assessments, including numerous farmland and steppe birds, ducks, waders, raptors, seabirds and long-distance migrants. Many of their populations are heavily depleted or continue to decline and/or contract in range. Europe still holds 3.4–5.4 billion breeding birds, but more action to halt and reverse losses is needed.