The First Large Absorption Survey in H i (FLASH) is a large-area radio survey for neutral hydrogen in and around galaxies in the intermediate redshift range $0.4\lt z\lt1.0$, using the 21-cm H i absorption line as a probe of cold neutral gas. The survey uses the ASKAP radio telescope and will cover 24,000 deg$^2$ of sky over the next five years. FLASH breaks new ground in two ways – it is the first large H i absorption survey to be carried out without any optical preselection of targets, and we use an automated Bayesian line-finding tool to search through large datasets and assign a statistical significance to potential line detections. Two Pilot Surveys, covering around 3000 deg$^2$ of sky, were carried out in 2019-22 to test and verify the strategy for the full FLASH survey. The processed data products from these Pilot Surveys (spectral-line cubes, continuum images, and catalogues) are public and available online. In this paper, we describe the FLASH spectral-line and continuum data products and discuss the quality of the H i spectra and the completeness of our automated line search. Finally, we present a set of 30 new H i absorption lines that were robustly detected in the Pilot Surveys, almost doubling the number of known H i absorption systems at $0.4\lt z\lt1$. The detected lines span a wide range in H i optical depth, including three lines with a peak optical depth $\tau\gt1$, and appear to be a mixture of intervening and associated systems. Interestingly, around two-thirds of the lines found in this untargeted sample are detected against sources with a peaked-spectrum radio continuum, which are only a minor (5–20%) fraction of the overall radio-source population. The detection rate for H i absorption lines in the Pilot Surveys (0.3 to 0.5 lines per 40 deg$^2$ ASKAP field) is a factor of two below the expected value. One possible reason for this is the presence of a range of spectral-line artefacts in the Pilot Survey data that have now been mitigated and are not expected to recur in the full FLASH survey. A future paper in this series will discuss the host galaxies of the H i absorption systems identified here.
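For readers unfamiliar with the optical-depth figures quoted above, the short sketch below (Python, not the FLASH pipeline) shows the standard conversion from an absorption-line depth to a 21-cm optical depth and the usual H i column-density integral; the flux values, covering factor, and spin temperature are illustrative assumptions.

```python
import numpy as np

def hi_optical_depth(delta_s, s_cont, covering_factor=1.0):
    """Peak 21-cm optical depth from an absorption depth delta_s against a
    continuum flux density s_cont, assuming the absorber covers a fraction
    `covering_factor` of the background source."""
    depth = np.clip(np.abs(delta_s) / (covering_factor * s_cont), 0.0, 1.0 - 1e-9)
    return -np.log(1.0 - depth)

def hi_column_density(tau, dv_kms, spin_temp_k=100.0):
    """N(HI) in cm^-2 from the optical-depth profile integrated over velocity
    (km/s), for an assumed spin temperature (standard 21-cm relation)."""
    return 1.823e18 * spin_temp_k * np.sum(tau) * dv_kms

# Example: a line absorbing 70% of a 50 mJy continuum source in one 5 km/s channel
tau_peak = hi_optical_depth(delta_s=-35.0, s_cont=50.0)   # > 1, i.e. an optically thick line
n_hi = hi_column_density(np.array([tau_peak]), dv_kms=5.0)
print(f"peak tau = {tau_peak:.2f}, N(HI) ~ {n_hi:.2e} cm^-2")
```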
Population-wide restrictions during the COVID-19 pandemic may create barriers to mental health diagnosis. This study aims to examine changes in the number of incident cases and the incidence rates of mental health diagnoses during the COVID-19 pandemic.
Methods
Using electronic health records from France, Germany, Italy, South Korea and the UK, and claims data from the US, this study conducted interrupted time-series analyses to compare the monthly incident cases and the incidence of depressive disorders, anxiety disorders, alcohol misuse or dependence, substance misuse or dependence, bipolar disorders, personality disorders and psychoses diagnoses before (January 2017 to February 2020) and after (April 2020 to the latest available date of each database [up to November 2021]) the introduction of COVID-related restrictions.
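As a rough illustration of the interrupted time-series approach described above, the sketch below fits a segmented Poisson regression to simulated monthly counts (Python with statsmodels). The interruption month, column names, and the absence of seasonal terms are all simplifying assumptions; this is not the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical monthly series for one database: calendar month index, an
# indicator for the post-restriction period, simulated incident case counts,
# and the population at risk used as an offset.
df = pd.DataFrame({
    "month_index": np.arange(48),
    "post": (np.arange(48) >= 38).astype(int),        # restrictions assumed from month 38
    "incident_cases": rng.poisson(200, 48),           # placeholder counts
    "population": np.full(48, 1_000_000),
})
df["time_since_post"] = np.maximum(0, df["month_index"] - 38)

# Segmented Poisson regression: a pre-existing trend, an immediate level change
# at the interruption, and a post-interruption slope change, with log(population)
# as the offset so the model targets incidence rather than raw counts.
its = smf.glm(
    "incident_cases ~ month_index + post + time_since_post",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),
).fit()
print(np.exp(its.params["post"]))   # immediate rate ratio at the interruption
```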
Results
A total of 629,712,954 individuals were enrolled across nine databases. Following the introduction of restrictions, an immediate decline was observed in the number of incident cases of all mental health diagnoses in the US [rate ratios (RRs) ranged from 0.005 to 0.677] and in the incidence of all conditions in France, Germany, Italy and the US (RRs ranged from 0.002 to 0.422). In the UK, significant reductions were observed only for common mental illnesses. The number of incident cases and the incidence began to return to, or exceed, pre-pandemic levels in most countries from mid-2020 through 2021.
Conclusions
Healthcare providers should be prepared to deliver service adaptations to mitigate burdens directly or indirectly caused by delays in the diagnosis and treatment of mental health conditions.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect, and provide insight into, the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses, using predicted probabilities from the binary logistic regression models, examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
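A minimal sketch of the covariate-adjusted logistic regression plus ROC workflow described above, using simulated data (race is omitted and all variable names are illustrative; this is not the study's code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Purely simulated participant-level data (for illustration only)
df = pd.DataFrame({
    "impaired": rng.integers(0, 2, n),          # 1 = cognitively impaired due to AD
    "gfap_z": rng.normal(size=n),               # z-scored plasma GFAP
    "age": rng.normal(74, 8, n),
    "sex": rng.integers(0, 2, n),
    "education": rng.integers(8, 21, n),
    "apoe_e4": rng.integers(0, 2, n),
})
df["gfap_z"] += 0.8 * df["impaired"]            # inject a group difference

# Covariate-adjusted logistic model for one marker, then the AUC of its
# predicted probabilities as the discrimination measure.
fit = smf.logit("impaired ~ gfap_z + age + C(sex) + education + C(apoe_e4)",
                data=df).fit(disp=False)
auc = roc_auc_score(df["impaired"], fit.predict(df))
print(f"adjusted OR per SD of GFAP: {np.exp(fit.params['gfap_z']):.2f}, AUC: {auc:.2f}")
```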
Results:
The mean (SD) age of the sample was 74.34 (7.54), 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, based on models comprising GFAP and the above covariates, showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, though slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes scores (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had accuracy similar to p-tau181 and NfL in detecting those with cognitive impairment; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses, using predicted probabilities from binary logistic regression, examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
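For the ordinal outcomes (Braak stage, CERAD score), a proportional-odds model of the kind described above can be sketched with statsmodels' OrderedModel. The data below are simulated and the variable names are illustrative; this is not the BU ADRC analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 45
# Simulated stand-ins for the predictors used in the study
x = pd.DataFrame({
    "log_gfap": rng.normal(size=n),
    "age_at_death": rng.normal(81, 8, n),
    "years_blood_to_death": rng.uniform(0, 5, n),
    "apoe_e4": rng.integers(0, 2, n).astype(float),
})
# Simulated ordinal outcome standing in for Braak stage (collapsed to 4 levels)
codes = np.clip(np.round(x["log_gfap"] + rng.normal(size=n)).astype(int) + 2, 0, 3)
braak = pd.Series(pd.Categorical(codes, categories=[0, 1, 2, 3], ordered=True))

# Proportional-odds (ordered logit) model: exp(coef on log_gfap) is the odds
# ratio of being in a higher Braak category per unit increase in log GFAP.
fit = OrderedModel(braak, x, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(fit.params["log_gfap"]))
```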
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) participants were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed that plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
Patients with bipolar disorder (BPD) are prone to engage in risk-taking behaviours and self-harm, contributing to a higher risk of traumatic injuries requiring medical attention at the emergency room (ER). We hypothesized that pharmacological treatment of BPD could reduce the risk of traumatic injuries by alleviating symptoms, but the evidence remains unclear. This study aimed to examine the association between pharmacological treatment and the risk of ER admissions due to traumatic injuries.
Methods
Individuals with BPD who received mood stabilizers and/or antipsychotics were identified using a population-based electronic healthcare records database in Hong Kong (2001–2019). A self-controlled case series design was applied to control for time-invariant confounders.
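The self-controlled case series compares each person's event rate across risk windows with their own baseline rate, so time-invariant confounders cancel out. The sketch below uses a within-person (fixed-effects) Poisson regression with an offset for window length as a stand-in for the conditional likelihood; the window definitions, durations, and rates are invented for illustration and do not reproduce the study's exposure windows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people = 200
# Hypothetical long-format data: one row per person per risk window, with the
# number of injury-related ER admissions in that window and its length in days.
long = pd.DataFrame({
    "person_id": np.repeat(np.arange(n_people), 4),
    "window": np.tile(["baseline", "pre_30d", "treatment", "post_cessation"], n_people),
    "days": np.tile([1500, 30, 600, 200], n_people),
    "events": rng.poisson(np.tile([0.5, 0.15, 0.2, 0.1], n_people)),
})
# SCCS analyses include cases only: keep people with at least one event anywhere
long = long[long.groupby("person_id")["events"].transform("sum") > 0]

# Within-person Poisson regression: person fixed effects absorb time-invariant
# confounding, and exp(coefficients) are incidence rate ratios versus baseline.
sccs = smf.glm(
    "events ~ C(window, Treatment(reference='baseline')) + C(person_id)",
    data=long,
    family=sm.families.Poisson(),
    offset=np.log(long["days"]),
).fit()
print(np.exp(sccs.params.filter(like="window")))
```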
Results
A total of 5040 out of 14 021 adults with BPD who received pharmacological treatment and had incident ER admissions due to traumatic injuries from 2001 to 2019 were included. An increased risk of traumatic injuries was found during the 30 days before treatment initiation [incidence rate ratio (IRR) 4.44 (3.71–5.31), p < 0.0001]. After treatment initiation, the risk remained elevated, with a smaller magnitude, before returning to baseline [IRR 0.97 (0.88–1.06), p = 0.50] during maintenance treatment. Direct comparison of the risk during treatment with that before and after treatment showed a significant decrease. After treatment cessation, the risk increased [IRR 1.34 (1.09–1.66), p = 0.006].
Conclusions
This study supports the hypothesis that pharmacological treatment of BPD is associated with a lower risk of ER admissions due to traumatic injuries, and with an increased risk after treatment cessation. Close monitoring for symptom relapse is recommended to clinicians and patients if treatment cessation is warranted.
Objective:
To develop a pediatric research agenda focused on pediatric healthcare-associated infections and antimicrobial stewardship topics that will yield the highest impact on child health.
Participants:
The study included 26 geographically diverse adult and pediatric infectious diseases clinicians with expertise in healthcare-associated infection prevention and/or antimicrobial stewardship (topic identification and ranking of priorities), as well as members of the Division of Healthcare Quality and Promotion at the Centers for Disease Control and Prevention (topic identification).
Methods:
Using a modified Delphi approach, expert recommendations were generated through an iterative process for identifying pediatric research priorities in healthcare-associated infection prevention and antimicrobial stewardship. The multistep, 7-month process included a literature review, interactive teleconferences, web-based surveys, and 2 in-person meetings.
Results:
A final list of 12 high-priority research topics was generated across the 2 domains. High-priority healthcare-associated infection topics included judicious testing for Clostridioides difficile infection, chlorhexidine (CHG) bathing, measuring and preventing hospital-onset bloodstream infection rates, surgical site infection prevention, and surveillance and prevention of multidrug-resistant gram-negative rod infections. Antimicrobial stewardship topics included β-lactam allergy de-labeling, judicious use of perioperative antibiotics, intravenous to oral conversion of antimicrobial therapy, developing a patient-level “harm index” for antibiotic exposure, and benchmarking and/or peer comparison of antibiotic use for common inpatient conditions.
Conclusions:
We identified 6 healthcare-associated infection topics and 6 antimicrobial stewardship topics as potentially high-impact targets for pediatric research.
We studied the timing of occurrence of 1676 sporadic, community-acquired cases of Legionnaires' disease in England and Wales between 1993 and 2008, in relation to temperature, relative humidity, rainfall, windspeed and ultraviolet light, using a fixed-stratum case-crossover approach. The analysis was conducted using conditional logistic regression, with consideration of appropriate lag periods. There was evidence of an association between the risk of Legionnaires' disease and temperature, with an apparently long time lag of 1–9 weeks [odds of disease at 95th vs. 75th centiles: 3·91, 95% confidence interval (CI) 2·06–7·40], and rainfall at short time lags of 2–10 days (odds of disease at 75th vs. 50th centiles: 1·78, 95% CI 1·50–2·13). There was some evidence that the risk of disease in relation to high temperatures was greater at high relative humidities. A higher risk of Legionnaires' disease may be indicated by preceding periods of warmer, wetter weather.
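In a fixed-stratum case-crossover design, each case's exposure on the event day is compared with exposure on control days within the same stratum, via conditional logistic regression. A minimal sketch with simulated data follows (statsmodels' ConditionalLogit); the stratum construction, lag handling, and variable names are illustrative assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(2)
rows = []
for stratum in range(300):
    # One case day plus three control days from the same fixed stratum
    # (e.g. the same day of week within the same calendar month).
    temps = rng.normal(15, 5, size=4)        # lagged temperature on each day
    temps[0] += 2.0                          # make the event day warmer so the sketch shows an effect
    for is_event, temp in zip([1, 0, 0, 0], temps):
        rows.append({"stratum": stratum, "event": is_event, "lagged_temp": temp})
df = pd.DataFrame(rows)

# Conditional logistic regression: the stratum terms drop out of the likelihood,
# so only within-stratum contrasts in exposure drive the estimate.
fit = ConditionalLogit(df["event"], df[["lagged_temp"]], groups=df["stratum"]).fit()
print(np.exp(fit.params))    # odds ratio per unit increase in the lagged exposure
```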
This study examined the impact of meteorological conditions on sporadic, community-acquired cases of Legionnaires' disease in England and Wales (2003–2006), with reference to the 2006 increase in cases. A case-crossover methodology compared each case with self-controlled data using a conditional logistic regression analysis. Effect modification by quarter and year was explored. In total, 674 cases were entered into the dataset and two meteorological variables were selected for study based on preliminary analyses: relative humidity during a case's incubation period, and temperature during the 10–14 weeks preceding onset. For the quarter July–September there was strong evidence to suggest a year, humidity and temperature interaction (Wald χ² = 30·59, 3 d.f., P<0·0001). These findings have implications for future case numbers and resource requirements.
This paper provides one of the first assessments of both the public health investigation burden and the economic costs associated with an apparent outbreak of Legionnaires' disease (LD) in South East London. In addition to epidemiological, microbiological and environmental investigations, we collected data on the staff time and resources committed by the 11 main organizations responsible for managing the outbreak. Of the overall estimated costs of £455 856, only 14% (£64 264) was spent on investigation and control of the outbreak, compared with 86% (£391 592) spent on the hospital treatment of the patients. The time and money spent on public health services in this investigation appear to represent good value for money, considering the potential costs of a major outbreak, including the generally high case-fatality rate in LD and the high health-care costs. Further research is needed to determine optimum strategies for the cost-effective use of health system resources in investigations of LD. Whether the threshold for investigation of cases should be based on observed incidence rates, the cost-effectiveness of investigations, or both should be debated further.
Eight cases of Legionnaires' disease were identified among the 215 German passengers after a cruise to the Nordic Sea in August 2003. An unmatched case-control study was conducted to identify risk factors and the source of infection. In total, eight passengers fulfilled the case definition; one of them died. Forty-two passengers served as controls. The attack rate was 4%. The mean age was 60 years for cases and 62 years for controls. Prolonged exposure to the spa pool seemed to be a risk factor for infection (OR 4·85, P=0·09). Legionella pneumophila serogroup 1, monoclonal antibody (mAb) subgroup ‘Knoxville’, was isolated from clinical and environmental samples. DNA sequence-based typing revealed that these isolates were indistinguishable from each other. The investigation demonstrated the importance of an interdisciplinary approach combining microbiology and epidemiology, as not all sites on the ship that tested positive for L. pneumophila actually posed a relevant risk to the passengers.
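As a small numerical aside on the figures above, the sketch below shows how an attack rate and a crude odds ratio with a Woolf confidence interval are computed from a 2x2 table; the exposure cell counts are invented placeholders, since the paper's table is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def odds_ratio_woolf(a, b, c, d, alpha=0.05):
    """Crude odds ratio with a Woolf (log-based) confidence interval from a
    2x2 table: a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(1 - alpha / 2)
    return or_, or_ * np.exp(-z * se_log), or_ * np.exp(z * se_log)

print(f"attack rate: {8 / 215:.1%}")                 # 8 cases among 215 passengers, ~4% as quoted
# Hypothetical 2x2 exposure table; these cell counts are NOT from the paper
print(odds_ratio_woolf(a=5, b=3, c=11, d=31))
```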