Current liver-stage Plasmodium falciparum models are complex, expensive and largely inaccessible, hindering research progress. Here, we show that a 3D liver spheroid model grown from immortalized HepG2/C3A cells supports the complete intrahepatocytic lifecycle of P. falciparum. Our results demonstrate sporozoite infection, development of exoerythrocytic forms and breakthrough infection into erythrocytes. The 3D-grown spheroid hepatocytes are structurally and functionally polarized, displaying enhanced albumin and urea production and increased expression of key metabolic enzymes relative to 2D cultures, better mimicking in vivo conditions. This accessible, reproducible model lowers barriers to malaria research, promoting advances in fundamental biology and translational research.
By constraining organic carbon (OC) turnover times and ages, radiocarbon (14C) analysis has become a crucial tool to study the global carbon cycle. However, commonly used “bulk” measurements yield average turnover times, masking age variability within complex OC mixtures. One method to unravel intra-sample age distributions is ramped oxidation, in which OC is oxidized with the aid of oxygen at increasing temperatures. The resulting CO2 is collected over prescribed temperature ranges (thermal fractions) and analyzed for 14C content by accelerator mass spectrometry (AMS). However, all ramped oxidation instruments developed to date are operated in an “offline” configuration and require several manual preparation steps, hindering sample throughput and reproducibility. Here we describe a compact, online ramped oxidation (ORO) setup, where CO2 fractions are directly collected and transferred for 14C content measurement using an AMS equipped with a gas ion source. Our setup comprises two modules: (i) an ORO unit containing two sequential furnaces, the first of which holds the sample and is ramped from room temperature to ∼900°C, the second of which is maintained at 900°C and holds catalysts (copper oxide and silver) to ensure complete oxidation of evolved products to CO2; and (ii) a dual-trap interface (DTI) collection unit containing two parallel molecular sieve traps, which alternately collect CO2 from a given fraction and handle its direct injection into the AMS. Initial results for well-characterized samples indicate that 14C content uncertainties and blank background values are comparable to those obtained during routine gas measurements at ETH, demonstrating the utility of the ORO-DTI setup.
Systematic reviews are essential for evidence-based health care, but conducting them is time- and resource-intensive. To date, efforts have been made to accelerate and (semi-)automate various steps of systematic reviews through the use of artificial intelligence (AI), and the emergence of large language models (LLMs) promises further opportunities. One crucial but complex task within systematic review conduct is assessing the risk of bias (RoB) of included studies. The aim of this study was therefore to test the LLM Claude 2 for RoB assessment of 100 randomized controlled trials, published in English from 2013 onwards, using the revised Cochrane risk of bias tool (‘RoB 2’; involving judgements for five specific domains and an overall judgement). We assessed the agreement of Claude’s RoB judgements with human judgements published in Cochrane reviews. The observed agreement between Claude and Cochrane authors ranged from 41% for the overall judgement to 71% for domain 4 (‘outcome measurement’). Cohen’s κ was lowest for domain 5 (‘selective reporting’; 0.10 (95% confidence interval (CI): −0.10 to 0.31)) and highest for domain 3 (‘missing data’; 0.31 (95% CI: 0.10 to 0.52)), indicating slight to fair agreement. Fair agreement was found for the overall judgement (Cohen’s κ: 0.22 (95% CI: 0.06 to 0.38)). Sensitivity analyses using alternative prompting techniques or the more recent version Claude 3 did not result in substantial changes. Currently, Claude’s RoB 2 judgements cannot replace human RoB assessment. However, the potential of LLMs to support RoB assessment should be further explored.
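As a concrete illustration of the agreement statistic used here, the following minimal Python sketch (hypothetical judgements, not the study’s data or code) computes Cohen’s κ with an approximate 95% confidence interval:

```python
# Minimal sketch (hypothetical judgements, not the study's data or code):
# Cohen's kappa with an approximate 95% CI for two raters' RoB judgements.
import numpy as np

def cohens_kappa_ci(r1, r2):
    """Cohen's kappa plus a normal-approximation 95% confidence interval."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    labels = np.unique(np.concatenate([r1, r2]))
    po = np.mean(r1 == r2)                                         # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)  # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))              # Cohen (1960) approximation
    return kappa, kappa - 1.96 * se, kappa + 1.96 * se

# Hypothetical RoB 2 judgements for 10 trials (low / some concerns / high)
human  = ["low", "high", "low", "some", "low", "high", "some", "low", "low", "high"]
claude = ["low", "some", "low", "some", "high", "high", "some", "low", "some", "high"]
k, lo, hi = cohens_kappa_ci(human, claude)
print(f"kappa = {k:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```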
Previous studies identified clusters of first-episode psychosis (FEP) patients based on cognition and premorbid adjustment. This study examined a range of socio-environmental risk factors associated with clusters of FEP, aiming a) to compare clusters of FEP patients and community controls using the Maudsley Environmental Risk Score for psychosis (ERS), a weighted sum of the following risks: paternal age, childhood adversities, cannabis use, and ethnic minority membership; and b) to explore whether specific environmental risk factors distinguish the patient clusters from one another and from controls.
Methods
A univariable general linear model (GLM) compared the ERS between 1,263 community controls and clusters derived from 802 FEP patients in the EU-GEI study, namely low cognitive functioning (n = 223), high cognitive functioning (n = 205), intermediate (n = 224), and deteriorating (n = 150). A multivariable GLM compared clusters and controls on the individual exposures included in the ERS.
Results
The ERS was higher in all clusters compared to controls, most markedly in the deteriorating (β = 2.8, 95% CI 2.3–3.4, η² = 0.049) and low-cognitive-functioning (β = 2.4, 95% CI 1.9–2.8, η² = 0.049) clusters, and distinguished both from the high-cognitive-functioning cluster. The deteriorating cluster had higher cannabis exposure (mean difference = 0.48, 95% CI 0.49–0.91) than the intermediate cluster, which had identical IQ, and included more people from an ethnic minority (mean difference = 0.77, 95% CI 0.24–1.29) than the high-cognitive-functioning cluster.
Conclusions
High exposure to environmental risk factors might result in cognitive impairment and lower-than-expected functioning in individuals at the onset of psychosis. Some patients’ trajectories involved risk factors that could be modified by tailored interventions.
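To make the analytic approach concrete, here is a minimal Python sketch (synthetic data, not the EU-GEI dataset) of a univariable GLM comparing an ERS-like score between controls and patient clusters, reporting β coefficients, 95% CIs, and η²:

```python
# Minimal sketch (synthetic data, not the EU-GEI dataset): a univariable GLM
# comparing an ERS-like score between community controls and patient clusters.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = ["control"] * 1263 + ["deteriorating"] * 150 + ["low_cog"] * 223
df = pd.DataFrame({"group": groups})
shift = {"control": 0.0, "deteriorating": 2.8, "low_cog": 2.4}   # assumed effects
df["ers"] = df["group"].map(shift) + rng.normal(0, 3, len(df))

model = smf.ols("ers ~ C(group, Treatment('control'))", data=df).fit()
print(model.params)        # beta: mean ERS difference of each cluster vs controls
print(model.conf_int())    # 95% confidence intervals

anova = sm.stats.anova_lm(model, typ=2)                   # partition of variance
eta_sq = anova["sum_sq"].iloc[0] / anova["sum_sq"].sum()  # eta-squared effect size
print(f"eta^2 = {eta_sq:.3f}")
```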
Research participants’ feedback about their participation experiences offers critical insights for improving programs. A shared Empowering the Participant Voice (EPV) infrastructure enabled a multiorganization collaborative to collect, analyze, and act on participants’ feedback using validated participant-centered measures.
Methods:
A consortium of academic research organizations with Clinical and Translational Science Awards (CTSA) programs administered the Research Participant Perception Survey (RPPS) to active or recent research participants. Local response data were also aggregated into a consortium database, facilitating analysis of feedback overall and for subgroups.
Results:
From February 2022 to June 2024, participating organizations sent surveys to 28,096 participants and received 5045 responses (18%). Respondents were 60% female, 80% White, 13% Black, 2% Asian, and 6% Latino/x. Most respondents (85–95%) felt respected and listened to by study staff; 68% gave their overall experience the top rating. Only 60% felt fully prepared by the consent process. Consent, feeling valued, language assistance, age, study demands, and other factors were significantly associated with overall experience ratings. 63% of participants said that receiving a summary of the study results would be very important to joining a future study. Intersite scores differed significantly for some measures; initiatives piloted in response to local findings raised experience scores.
Conclusion:
RPPS results from 5045 participants from seven CTSAs provide a valuable evidence base for evaluating participants’ research experiences and using participant feedback to improve research programs. Analyses revealed opportunities for improving research practices. Sites piloting local change initiatives based on RPPS findings demonstrated measurable positive impact.
Nontyphoidal Salmonella enterica infections are a leading cause of enteric disease in Canada, most commonly associated with foodborne exposures. Raw frozen breaded chicken products (FBCP) were implicated in 16 Salmonella outbreaks between 2017 and 2019. This study quantified the impact of the 1 April 2019 requirement by the Canadian Food Inspection Agency (CFIA) that manufacturers reduce Salmonella in raw FBCP. An intervention study approach, using pre–post intervention data with a comparison group, was used to: (1) estimate the reduction in FBCP Salmonella prevalence using retail meat FoodNet Canada data; (2) estimate the reduction in the human salmonellosis incidence rate using data from the Canadian National Enteric Surveillance Program; and (3) estimate the proportion of reported cases attributable to FBCP if human exposure to Salmonella through FBCP were completely eliminated. The FBCP Salmonella prevalence decreased from 28% before 1 April 2019 to 2.9% after implementation of the requirement. The CFIA requirement was estimated to have reduced the human salmonellosis incidence rate by 23%. An estimated 26% of cases during the pre-intervention period can be attributed to FBCP. The CFIA requirement was successful at significantly reducing Salmonella prevalence in retail FBCP, and at reducing the salmonellosis burden.
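As a rough illustration of how the published estimates relate (illustrative arithmetic only, not the study’s intervention models):

```python
# Illustrative arithmetic only (not the study's intervention models): how the
# published prevalence and incidence estimates relate.
prev_before, prev_after = 0.28, 0.029        # FBCP Salmonella prevalence
relative_drop = 1 - prev_after / prev_before
print(f"relative prevalence reduction: {relative_drop:.0%}")    # about 90%

rate_reduction = 0.23   # estimated drop in human salmonellosis incidence
attributable = 0.26     # pre-intervention cases attributable to FBCP
# Crudely, the gap suggests a small residual FBCP-attributable burden:
print(f"residual burden: {attributable - rate_reduction:.0%}")  # about 3%
```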
The New Jersey Kids Study (NJKS) is a transdisciplinary statewide initiative to understand influences on child health, development, and disease. We conducted a mixed-methods study of project planning teams to investigate team effectiveness and relationships between team dynamics and quality of deliverables.
Methods:
Ten theme-based working groups (WGs) (e.g., Neurodevelopment, Nutrition) informed protocol development and submitted final reports. WG members (n = 79, 75%) completed questionnaires including de-identified demographic and professional information and a modified TeamSTEPPS Team Assessment Questionnaire (TAQ). Reviewers independently evaluated final reports using a standardized tool. We analyzed questionnaire results and final report assessments using linear regression and performed constant comparative qualitative analysis to identify central themes.
Results:
WG-level factors associated with greater team effectiveness included proportion of full professors (β = 31.24, 95% CI 27.65–34.82), team size (β = 0.81, 95% CI 0.70–0.92), and percent dedicated research effort (β = 0.11, 95% CI 0.09–0.13); age distribution (β = −2.67, 95% CI –3.00 to –2.38) and diversity of school affiliations (β = –33.32, 95% CI –36.84 to –29.80) were inversely associated with team effectiveness. No factors were associated with final report assessments. Perceptions of overall initiative leadership were associated with expressed enthusiasm for future NJKS participation. Qualitative analyses of final reports yielded four themes related to team science practices: organization and process, collaboration, task delegation, and decision-making patterns.
Conclusions:
We identified several correlates of team effectiveness in a team science initiative's early planning phase. Extra effort may be needed to bridge differences in team members' backgrounds to enhance the effectiveness of diverse teams. This work also highlights leadership as an important component in future investigator engagement.
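A minimal sketch of the kind of working-group-level linear regression described above (synthetic data, not the NJKS dataset; variable names are hypothetical):

```python
# Minimal sketch (synthetic data, not the NJKS dataset): a WG-level linear
# regression of team effectiveness on group characteristics; variable names
# are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_wg = 10
df = pd.DataFrame({
    "prop_full_prof": rng.uniform(0, 0.5, n_wg),
    "team_size": rng.integers(5, 15, n_wg),
    "pct_research_effort": rng.uniform(10, 80, n_wg),
})
df["effectiveness"] = (30 * df["prop_full_prof"] + 0.8 * df["team_size"]
                       + 0.1 * df["pct_research_effort"] + rng.normal(0, 1, n_wg))

fit = smf.ols("effectiveness ~ prop_full_prof + team_size + pct_research_effort",
              data=df).fit()
print(pd.concat([fit.params.rename("beta"), fit.conf_int()], axis=1))
```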
Neurodevelopmental delay is common in children who undergo surgery for congenital heart disease (CHD) in infancy. Cardiac surgery-associated acute kidney injury (CS-AKI) occurs frequently in the paediatric cardiac intensive care unit and is associated with worse neurodevelopmental scores and with delay in the cognitive, language, and motor domains in children with CHD. No known data exist regarding the association of CS-AKI with the motor and language subscales. In this study, we explored the relationship between CS-AKI and receptive and expressive language delay, as well as gross and fine motor delay.
Methods:
This was a single-centre retrospective observational cohort study. Children who underwent surgery for CHD and developed recurrent CS-AKI in the first year of life, and who had follow-up neurodevelopmental testing using the Bayley Scales of Infant Development, Version III, were included. Neurodevelopmental subscales assessed included receptive and expressive language and fine and gross motor skills.
Results:
The study cohort included 203 children. Recurrent CS-AKI was significantly associated with lower scores in receptive and expressive language, as well as fine and gross motor on unadjusted analyses. On adjusted analyses, recurrent CS-AKI was significantly associated with severe receptive language delay.
Conclusion:
The independent association of recurrent CS-AKI with severe language delay in children who undergo surgery for CHD in infancy is novel. Our findings may contribute to the understanding of language impairment in this population. Further studies are required to better understand this relationship and any potentially modifiable factors.
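The adjusted analyses described above can be illustrated with a minimal logistic regression sketch (synthetic data; the covariates are hypothetical, not the study’s adjustment set):

```python
# Minimal sketch (synthetic data; covariates are hypothetical, not the study's
# adjustment set): adjusted logistic regression of severe receptive language
# delay on recurrent CS-AKI.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 203
df = pd.DataFrame({
    "recurrent_aki": rng.integers(0, 2, n),
    "age_at_surgery_days": rng.normal(60, 30, n),
    "bypass_time_min": rng.normal(120, 40, n),
})
logit = -1.5 + 0.9 * df["recurrent_aki"] + 0.004 * df["bypass_time_min"]
df["severe_language_delay"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.logit(
    "severe_language_delay ~ recurrent_aki + age_at_surgery_days + bypass_time_min",
    data=df,
).fit(disp=False)
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs on the odds ratio scale
```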
During the COVID-19 pandemic, mental health problems increased while access to mental health services was reduced. Recovery colleges are recovery-focused adult education initiatives delivered by people with professional and lived mental health expertise. Designed to be collaborative and inclusive, they were uniquely positioned to support people experiencing mental health problems during the pandemic. There is limited research exploring the lasting impacts of the pandemic on recovery college operation and delivery to students.
Aims
To ascertain how the COVID-19 pandemic changed recovery college operation in England.
Method
We coproduced a qualitative interview study of recovery college managers across the UK. Academics and co-researchers with lived mental health experience collaborated on conducting interviews and analysing data, using a collaborative thematic framework analysis.
Results
Thirty-one managers participated. Five themes were identified: complex organisational relationships, changed ways of working, navigating the rapid transition to digital delivery, responding to isolation and changes to accessibility. Two key pandemic-related changes to recovery college operation were highlighted: their use as accessible services that relieve pressure on mental health services through hybrid face-to-face and digital course delivery, and the development of digitally delivered courses for individuals with mental health needs.
Conclusions
The pandemic either led to or accelerated developments in recovery college operation, leading to a positioning of recovery colleges as a preventative service with wider accessibility to people with mental health problems, people under the care of forensic mental health services and mental healthcare staff. These benefits are strengthened by relationships with partner organisations and autonomy from statutory healthcare infrastructures.
For young children experiencing an illness, adequate nutrition is critical for recovery and to prevent malnutrition, yet many children do not receive the recommended quantities of food during illness and recuperation. Our research applied a behavioural science lens to identify drivers of feeding behaviours, including barriers inhibiting caregivers from following the feeding guidelines.
Design:
In 2021, we conducted qualitative research informed by the behavioural design process. Data from in-depth interviews and observations were analysed for themes.
Setting:
Research was conducted in South Kivu, Democratic Republic of the Congo.
Participants:
Research participants included caregivers of young children, other family members, health workers and other community members.
Results:
Five key findings about behavioural drivers emerged: (1) poverty and scarcity impose practical constraints and a cognitive and emotional burden on caregivers; (2) health providers are distracted and discouraged from counselling on feeding during sick visits; (3) a focus on quality and hesitations about quantity obscure benefits of feeding greater amounts of available foods; (4) perceptions of inappropriate foods limit caregivers’ choices; and (5) deference to a child’s limited appetite leads to missed opportunities to encourage them to eat.
Conclusions:
Each of these behavioural drivers is triggered by one or more addressable features in caregivers’ and health workers’ environment, suggesting concrete opportunities for programmes to support caregivers and health workers to improve feeding of young children during illness and recovery. In other settings where these features of the environment are similar, the insights and programming implications are likely to translate.
Alterations in cerebral blood flow (CBF) are associated with risk of cognitive decline and Alzheimer’s disease (AD). Although apolipoprotein E (APOE) ε4 and greater vascular risk burden have both been linked to reduced CBF in older adults, less is known about how APOE ε4 status and vascular risk may interact to influence CBF. We aimed to determine whether the effect of vascular risk on CBF varies by gene dose of APOE ε4 alleles (i.e., number of ε4 alleles) in older adults without dementia.
Participants and Methods:
144 older adults without dementia from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) underwent arterial spin labeling (ASL) and T1-weighted MRI, APOE genotyping, fluorodeoxyglucose positron emission tomography (FDG-PET), lumbar puncture, and blood pressure assessment. Vascular risk was assessed using pulse pressure (systolic minus diastolic blood pressure), which is thought to be a proxy for arterial stiffening. Participants were classified by number of APOE ε4 alleles (0 alleles: n = 87; 1 allele: n = 46; 2 alleles: n = 11). CBF was examined in six FreeSurfer-derived a priori regions of interest (ROIs) vulnerable to AD: entorhinal cortex, hippocampus, inferior temporal cortex, inferior parietal cortex, rostral middle frontal gyrus, and medial orbitofrontal cortex. Linear regression models tested the interaction between categorical APOE ε4 dose (0, 1, or 2 alleles) and continuous pulse pressure on CBF in each ROI, adjusting for age, sex, cognitive diagnosis (cognitively unimpaired vs. mild cognitive impairment), antihypertensive medication use, cerebral metabolism (FDG-PET composite), reference CBF region (precentral gyrus), and AD biomarker positivity defined using the ADNI-optimized phosphorylated tau/β-amyloid ratio cut-off of > 0.0251 pg/ml.
Results:
A significant pulse pressure × APOE ε4 dose interaction was found on CBF in the entorhinal cortex, hippocampus, and inferior parietal cortex (ps < .005). Among participants with two ε4 alleles, higher pulse pressure was significantly associated with lower CBF (ps < .001). However, among participants with zero or one ε4 allele, there was no significant association between pulse pressure and CBF (ps > .234). No significant pulse pressure × APOE ε4 dose interaction was found in the inferior temporal cortex, rostral middle frontal gyrus, or medial orbitofrontal cortex (ps > .109). Results remained unchanged when additionally controlling for general vascular risk assessed via the modified Hachinski Ischemic Scale.
Conclusions:
These findings demonstrate that the cross-sectional association between pulse pressure and region-specific CBF differs by APOE ε4 dose. In particular, a detrimental effect of elevated pulse pressure on CBF in AD-vulnerable regions was found only among participants with the ε4/ε4 genotype. Our findings suggest that pulse pressure may play a mechanistic role in neurovascular unit dysregulation for those genetically at greater risk for AD. Given that pulse pressure is just one of many potentially modifiable vascular risk factors for AD, future studies should seek to examine how these other factors (e.g., diabetes, high cholesterol) may interact with APOE genotype to affect cerebrovascular dysfunction.
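A minimal sketch of the interaction analysis described above (synthetic data, not ADNI; the covariate set is reduced for brevity):

```python
# Minimal sketch (synthetic data, not ADNI; covariate set reduced for brevity):
# testing a pulse pressure x APOE e4 dose interaction on regional CBF, then
# estimating the simple slope within each allele-dose group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 144
df = pd.DataFrame({
    "apoe_dose": rng.choice([0, 1, 2], size=n, p=[0.60, 0.32, 0.08]),
    "pulse_pressure": rng.normal(55, 12, n),
    "age": rng.normal(73, 6, n),
})
slope = {0: 0.0, 1: -0.05, 2: -0.4}   # assume CBF falls with PP only at 2 alleles
df["cbf"] = (45 + df["apoe_dose"].map(slope) * (df["pulse_pressure"] - 55)
             - 0.1 * df["age"] + rng.normal(0, 3, n))

fit = smf.ols("cbf ~ pulse_pressure * C(apoe_dose) + age", data=df).fit()
print(fit.summary().tables[1])   # interaction terms: pulse_pressure x dose

for dose, sub in df.groupby("apoe_dose"):   # simple slopes per stratum
    s = smf.ols("cbf ~ pulse_pressure + age", data=sub).fit()
    print(dose, f"slope={s.params['pulse_pressure']:.3f}",
          f"p={s.pvalues['pulse_pressure']:.3f}")
```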
Coercive measures such as involuntary psychiatric admission are considered a last resort in the treatment of people with psychiatric disorders. So far, numerous factors have been identified that influence their use. However, the link between a pandemic – in particular, restrictions such as lockdowns – and the use of involuntary psychiatric admission is unclear.
Aim
To examine the association between COVID-19 lockdowns and involuntary psychiatric admissions in Austria.
Method
This retrospective exploratory study assessed all involuntary psychiatric admissions and use of mechanical restraint in Austria, except for the federal state of Vorarlberg, between 1 January 2018 and 31 December 2020. Descriptive statistics and regression models were used.
Results
During the 3-year study period, 40 012 individuals (45.9% females, mean age 51.3 years) had 66 124 involuntary psychiatric admissions for an average of 10.9 days. Mechanical restraint was used during 33.9% of these admissions. In weeks of nationwide COVID-19 lockdowns (2020 v. 2018/2019), involuntary psychiatric admissions were significantly fewer (odds ratio = 0.93, P = 0.0001) but longer (11.6 (s.d.: 16) v. 10.9 (s.d.: 15.8) days). The likelihood of involuntary admission during lockdowns was associated with year (2020 v. 2018–2019; adjusted odds ratio = 0.92; P = 0.0002) but not with sex (P = 0.814), age (P = 0.310), use of mechanical restraint (P = 0.653) or type of ward (P = 0.843).
Conclusions
Restrictions such as lockdowns affected the use of coercive measures, resulting in fewer but longer involuntary psychiatric admissions during weeks of lockdown in Austria. These results strengthen previous findings that showed the dependence of coercive measures on external factors, highlighting the need to further clarify causality and the desired prevention effects when using coercive measures.
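One plausible way to model fewer admissions during lockdown weeks is a Poisson rate model on weekly counts; the sketch below uses synthetic data (the study itself reports odds ratios from its own regression models):

```python
# Minimal sketch (synthetic weekly counts, not the Austrian registry; the study
# reports odds ratios from its own models): a Poisson rate model comparing
# admissions in lockdown vs non-lockdown weeks.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
weeks = pd.DataFrame({"lockdown": [0] * 130 + [1] * 26})   # ~3 years of weeks
weeks["admissions"] = rng.poisson(np.where(weeks["lockdown"] == 1, 395, 425))

fit = smf.glm("admissions ~ lockdown", data=weeks,
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params["lockdown"]))            # rate ratio, lockdown weeks
print(np.exp(fit.conf_int().loc["lockdown"]))    # 95% CI
```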
We identify a set of essential recent advances in climate change research with high policy relevance, across natural and social sciences: (1) looming inevitability and implications of overshooting the 1.5°C warming limit, (2) urgent need for a rapid and managed fossil fuel phase-out, (3) challenges for scaling carbon dioxide removal, (4) uncertainties regarding the future contribution of natural carbon sinks, (5) intertwinedness of the crises of biodiversity loss and climate change, (6) compound events, (7) mountain glacier loss, (8) human immobility in the face of climate risks, (9) adaptation justice, and (10) just transitions in food systems.
Technical summary
The Intergovernmental Panel on Climate Change Assessment Reports provide the scientific foundation for international climate negotiations and constitute an unmatched resource for researchers. However, the assessment cycles take multiple years. As a contribution to cross- and interdisciplinary understanding of climate change across diverse research communities, we have streamlined an annual process to identify and synthesize significant research advances. We collected input from experts in various fields using an online questionnaire and prioritized a set of 10 key research insights with high policy relevance. This year, we focus on: (1) the looming overshoot of the 1.5°C warming limit, (2) the urgency of fossil fuel phase-out, (3) challenges to scaling up carbon dioxide removal, (4) uncertainties regarding future natural carbon sinks, (5) the need for joint governance of biodiversity loss and climate change, (6) advances in understanding compound events, (7) accelerated mountain glacier loss, (8) human immobility amidst climate risks, (9) adaptation justice, and (10) just transitions in food systems. We present a succinct account of these insights, reflect on their policy implications, and offer an integrated set of policy-relevant messages. This science synthesis and science communication effort also forms the basis for a policy report that helps elevate climate science each year in time for the United Nations Climate Change Conference.
Social media summary
We highlight recent and policy-relevant advances in climate change research – with input from more than 200 experts.
COVID-19 had the potential to dramatically increase public support for welfare. It was a time of apparent increased solidarity, of apparently deserving claimants, and of increasingly widespread exposure to the benefits system. However, there are also reasons to expect the opposite effect: an increase in financial strain fostering austerity and self-interest, and thermostatic responses to increasing welfare generosity. In this paper, we investigate the effects of the pandemic on attitudes towards working-age unemployment benefits in the UK using a unique combination of data sources: (i) temporally fine-grained data on attitudinal change over the course of the pandemic; and (ii) a novel nationally representative survey contrasting attitudes towards pandemic-era and pre-pandemic claimants (including analysis of free-text responses). Our results show that the pandemic prompted little change in UK welfare attitudes. However, we also find that COVID-era unemployment claimants were perceived as substantially more deserving than those claiming prior to the pandemic. This contrast suggests a strong degree of ‘COVID exceptionalism’ – with COVID claimants seen as categorically different from conventional claimants, muting the effect of the pandemic on welfare attitudes overall.
Background: Clostridioides difficile infection (CDI) is a serious healthcare-associated infection responsible for >12,000 US deaths annually. Overtesting can lead to antibiotic overuse and potential patient harm when patients who are colonized with C. difficile, but not infected, are nonetheless treated. National guidelines recommend when testing is appropriate; occasionally, guideline-noncompliant testing (GNCT) may be warranted. A multidisciplinary group at UNC Medical Center (UNCMC), including the antimicrobial stewardship program (ASP), used a best-practice alert in 2020 to improve diagnostic stewardship, to no effect. Evidence supports the use of hard stops for this purpose, though less is known about provider acceptance. Methods: Beginning in May 2022, UNCMC implemented a hard stop in its electronic medical record system (EMR) for C. difficile GNCT orders, with exceptions to be approved by an ASP attending physician. Requests were retrospectively reviewed from May to November 2022 to monitor for adverse patient outcomes and provider hard-stop compliance. The team exported data from the EMR (Epic Systems) and generated descriptive statistics in Microsoft Excel. Results: There were 85 GNCT orders during the study period. Most tests (62%) were reviewed by the ASP; 38% sought non-ASP or no approval. Of the tests reviewed by the ASP, 33 (62%) were approved and 20 (38%) were not. Among tests not approved by the ASP, no patients subsequently received CDI-directed antibiotics, and 1 patient (5%) warranted same-admission CDI testing (negative). For 18 (56%) of the tests that circumvented ASP review, the ordering provider received a follow-up email from an associate chief medical officer to determine the rationale. No single response type dominated: 3 (17%) were unaware of the ASP review requirement, 2 (11%) indicated their patient’s uncharted refusal of laxatives, and 2 (11%) indicated another patient-specific reason. Provider avoidance of the ASP approval mechanism decreased 38%, from 53% of noncompliant tests in month 1 to 33% in month 6. Total test orders dropped 15.5%, from 1,129 during the same period in 2021 to 954 during the study period (95% CI, 13.4%–17.7%). Compliance with the guideline component requiring at least a 48-hour laxative-free interval prior to CDI testing increased from 85% (95% CI, 83%–87%) to 95% (95% CI, 93%–96%). CDI incidence decreased from 0.52 per 1,000 patient-days (95% CI, 0.41–0.65) to 0.41 (95% CI, 0.32–0.53), though the change was neither significant at P = .05 nor attributable to any one intervention. Conclusions: Over time and with feedback to providers circumventing the exception process, providers accepted and used the hard stop, improving diagnostic stewardship and avoiding unneeded treatment.
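The before/after proportions reported above can be reproduced in spirit with a short sketch (hypothetical counts chosen to match the reported percentages, not the study’s raw data):

```python
# Minimal sketch (hypothetical counts chosen to match the reported percentages):
# Wilson 95% confidence intervals and a two-proportion z-test for the
# before/after compliance proportions.
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

compliant = [960, 906]    # hypothetical: ~85% of 1,129 and ~95% of 954 orders
totals = [1129, 954]
for k, n in zip(compliant, totals):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{k}/{n} = {k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")

stat, p = proportions_ztest(compliant, totals)
print(f"z = {stat:.2f}, p = {p:.4g}")
```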
Psychotropic medication efficacy and tolerability are critical treatment issues faced by individuals with psychiatric disorders and their healthcare providers. For some people, it can take months to years of a trial-and-error process to identify a medication with the ideal efficacy and tolerability profile. Current strategies (e.g. clinical practice guidelines, treatment algorithms) for addressing this issue can be useful at the population level, but often fall short at the individual level. This is, in part, attributed to interindividual variation in genes that are involved in pharmacokinetic (i.e. absorption, distribution, metabolism, elimination) and pharmacodynamic (e.g. receptors, signaling pathways) processes that in large part determine whether a medication will be efficacious or tolerable. A precision prescribing strategy known as pharmacogenomics (PGx) assesses these genomic variations and uses them to inform the selection and dosing of certain psychotropic medications. In this review, we describe the path that led to the emergence of PGx in psychiatry, the current evidence base and implementation status of PGx in the psychiatric clinic, and finally, the future growth potential of precision psychiatry via the convergence of the PGx-guided strategy with emerging technologies and approaches (i.e. pharmacoepigenomics, pharmacomicrobiomics, pharmacotranscriptomics, pharmacoproteomics, pharmacometabolomics) to personalize treatment of psychiatric disorders.
Background: Inappropriate broad-spectrum antibiotic use targeting methicillin-resistant Staphylococcus aureus (MRSA) and Pseudomonas aeruginosa can result in increased adverse events, antibiotic resistance, and Clostridioides difficile infection. In 2019, revised ATS/IDSA community-acquired pneumonia (CAP) guidelines removed healthcare-associated pneumonia (HCAP) as a clinical entity and modified the patient factors warranting empiric broad-spectrum antibiotic (BSA) use. As a result, most patients hospitalized with CAP should receive empiric antibiotics targeting standard CAP pathogens. Based on the revised guidelines, we evaluated predictors of and outcomes associated with inappropriate BSA use among hospitalized patients with CAP. Methods: Between November 2019 and July 2022, trained abstractors collected data on non-ICU adult medical patients admitted with CAP at 67 Michigan hospitals who received either inappropriate empiric BSA on hospital day 1 or 2 or a standard CAP regimen. Inappropriate empiric BSA use was defined as use of an anti-MRSA or anti-pseudomonal antibiotic in a patient eligible for standard CAP coverage per IDSA guidelines. Patients with immune compromise, moderate or severe chronic obstructive pulmonary disease (COPD), pulmonary complication, or guideline-concordant treatment with BSA were excluded. Data collected included comorbidities, antibiotic use and hospitalizations in the preceding 90 days, cultures in the preceding year, signs or symptoms of pneumonia, hospital characteristics, and 30-day postdischarge patient outcomes. Data were collected through chart review and patient phone calls. Predictors of inappropriate empiric BSA were evaluated using logistic generalized estimating equation (GEE) models, accounting for hospital-level clustering. We assessed the effect of inappropriate empiric BSA (vs standard CAP therapy) on 30-day patient outcomes using logistic GEE models controlling for predictors associated with the outcome and probability of treatment. Results: Of 8,286 included patients with CAP, 2,215 (26.7%) were empirically treated with inappropriate BSA. The median duration of BSA treatment was 3 days (IQR, 2.5). After adjustment, factors associated with inappropriate empiric BSA treatment included hospitalization or treatment with high-risk antibiotics in the preceding 90 days, transfer from a postacute care facility, hemodialysis, support with ≥3 L supplemental oxygen, severe sepsis, leukocytosis, and higher pneumonia severity index (Fig. 1). After adjustment, patients with inappropriate empiric BSA treatment had more readmissions within 30 days of discharge, more transfers to the intensive care unit, more antibiotic-associated adverse events, and longer hospitalizations (Fig. 2). Conclusions: Patients hospitalized with CAP often received inappropriate BSA as empiric coverage, and this inappropriate antibiotic selection was associated with worse patient outcomes. To improve patient outcomes, stewardship efforts should focus on reducing inappropriate BSA use in patients hospitalized for CAP with historic HCAP risk factors or severe CAP without other guideline-directed indications for BSA.
Financial support. H.M.S. initiative is underwritten by Blue Cross and Blue Shield of Michigan.
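A minimal sketch of a logistic GEE accounting for hospital-level clustering, as described in the methods above (synthetic data; the predictor names are hypothetical):

```python
# Minimal sketch (synthetic data; predictor names are hypothetical): a logistic
# GEE with an exchangeable correlation structure to account for clustering of
# patients within hospitals, as in the methods above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "hospital": rng.integers(0, 67, n),          # 67 hospital clusters
    "recent_hospitalization": rng.integers(0, 2, n),
    "severe_sepsis": rng.integers(0, 2, n),
})
logit = -1.3 + 0.6 * df["recent_hospitalization"] + 0.5 * df["severe_sepsis"]
df["inappropriate_bsa"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.gee("inappropriate_bsa ~ recent_hospitalization + severe_sepsis",
              groups="hospital", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(fit.params))   # odds ratios with cluster-robust inference
```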
This article explores the ethical issues arising from attempts to diversify genomic data and to include individuals from underserved groups in studies of the relationship between genomics and health. We employed a qualitative synthesis design, combining data from three sources: (1) a rapid review of empirical articles published between 2000 and 2022 with a primary or secondary focus on diversifying genomic data or including underserved groups, and the ethical issues arising from this; (2) an expert workshop; and (3) a narrative review. Across these three sources, we found that ethical issues are interconnected across structural factors and research practices. Structural issues include failing to engage with the politics of knowledge production, existing inequities, and their effects on how the harms and benefits of genomics are distributed. Issues related to research practices include a lack of reflexivity, exploitative dynamics, and a failure to prioritise meaningful co-production. Ethical issues arise from both the structure and the practice of research, which can inhibit researcher and participant opportunities to diversify data in an ethical way. Diverse data are not ethical in and of themselves, and without attention to the social, historical and political contexts that shape the lives of potential participants, endeavours to diversify genomic data risk worsening existing inequities. Efforts to construct more representative genomic datasets need to develop ethical approaches situated within wider attempts to make the enterprise of genomics more equitable.
Misdiagnosis of bacterial pneumonia increases the risk of exposure to inappropriate antibiotics and adverse events. We developed a diagnosis calculator (https://calculator.testingwisely.com) to inform the clinical diagnosis of community-acquired bacterial pneumonia using objective indicators identified through literature review, including disease incidence, risk factors, and the sensitivity and specificity of diagnostic tests.
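The core arithmetic behind such a calculator is Bayes’ rule; a minimal sketch follows (illustrative numbers only, not the calculator’s actual parameters):

```python
# Minimal sketch of the arithmetic behind such a calculator (illustrative
# numbers only, not the calculator's actual parameters): Bayes' rule turning a
# pre-test probability plus test sensitivity/specificity into post-test
# probabilities.
def post_test_probability(pretest, sensitivity, specificity):
    """Return P(disease | positive test) and P(disease | negative test)."""
    p_pos = sensitivity * pretest + (1 - specificity) * (1 - pretest)
    post_pos = sensitivity * pretest / p_pos
    p_neg = (1 - sensitivity) * pretest + specificity * (1 - pretest)
    post_neg = (1 - sensitivity) * pretest / p_neg
    return post_pos, post_neg

# Example: 10% pre-test probability, test with 80% sensitivity, 90% specificity
pos, neg = post_test_probability(0.10, 0.80, 0.90)
print(f"post-test if positive: {pos:.1%}; if negative: {neg:.1%}")
```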