Antibiotic stewardship programs (ASPs) target hospitalized children, but most do not routinely review antibiotic prescriptions at discharge, despite 30% of discharged children receiving additional antibiotics. Our objective is to describe discharge antibiotic prescribing in children hospitalized for uncomplicated community-acquired pneumonia (CAP), skin/soft tissue infection (SSTI), and urinary tract infection (UTI).
Design:
Retrospective cohort study.
Setting:
Four academic children’s hospitals with established ASPs.
Patients:
ICD-10 codes identified 3,847 encounters for children <18 years admitted from January 1, 2021 to December 31, 2021 and prescribed antibiotics at discharge for uncomplicated CAP, SSTI, or UTI. After excluding children with medical complexity and encounters with concomitant infections, >7 days hospital stay, or intensive care unit stay, 1,206 encounters were included.
Methods:
Primary outcomes were the percentage of subjects prescribed optimal (1) total (inpatient plus outpatient) duration of therapy (DOT) and (2) antibiotic choice based on current national guidelines and available evidence.
Results:
Across the 226 encounters for CAP, 417 for UTI, and 563 for SSTI, the median age was 4 years, 52% were female, and the median DOT was 9 days (8 for CAP, 10 for UTI, and 9 for SSTI). Antibiotic choice was optimal for 77% of encounters, and DOT was optimal for 26%. Only 20% of antibiotic courses included both optimal DOT and optimal antibiotic choice.
Conclusions:
At 4 children’s hospitals with established ASPs, 80% of discharge antibiotic courses for CAP, UTI, and SSTI were suboptimal in antibiotic choice, DOT, or both. Discharge antibiotic prescribing represents an opportunity to improve antibiotic use in children.
How does a politician’s gender shape citizen responses to performance in office? Much of the existing literature suggests that voters hold higher expectations of women politicians and are more likely to punish them for malfeasance. An alternative perspective suggests that voters view men politicians as more agentic and are, therefore, more responsive to their performance, whether good or bad. Using an online survey experiment in Argentina, we randomly assign respondents to information that the distribution of a government food programme in a hypothetical city is biased or unbiased, and we also randomly assign the gender of the mayor. We find that respondents are more responsive to performance information – both positive and negative – about men mayors. We find little evidence that respondents hold different expectations of malfeasance by men versus women politicians. These results contribute to our understanding of how citizens process performance information in a context with few women politicians.
To clarify incidence, progression and effect on quality of life of shoulder/neck disability, oral asymmetry, neuropathic pain and numbness following neck dissection.
Methods
This prospective telephone-interview study administered the Neck Dissection Impairment Index, Neuropathic Pain Questionnaire, House–Brackmann Scale and questions assessing numbness to patients once before and three times after neck dissection.
Results
Mean Neck Dissection Impairment Index scores (6.43 vs 22.17; p = 0.004) and Neuropathic Pain Questionnaire scores (0.76 vs 2.30; p = 0.004), along with the proportions of patients with oral asymmetry (3 per cent vs 33.3 per cent; p = 0.016), ear numbness (5.9 per cent vs 46.7 per cent; p = 0.002), jaw numbness (5.9 per cent vs 53.3 per cent; p < 0.001) and neck numbness (5.9 per cent vs 53.3 per cent; p < 0.001), each increased significantly from pre-operation to 12 weeks post-operation. Neuropathic pain diagnoses did not reach significance. No outcome returned to baseline, and each progressed over time.
Conclusion
The findings demonstrated that these complications are common and persist throughout short-term recovery. Screening to identify and manage complications could improve post-operative care.
Vaccines have revolutionised the field of medicine, eradicating and controlling many diseases. Recent pandemic vaccine successes have highlighted the accelerated pace of vaccine development and deployment. Leveraging this momentum, attention has shifted to cancer vaccines and personalised cancer vaccines, aimed at targeting individual tumour-specific abnormalities. The UK, now recognised for its vaccine capabilities, is an ideal nation for pioneering cancer vaccine trials. For this article, experts were convened to share insights and approaches for navigating the challenges of cancer vaccine development with personalised or precision cancer vaccines, as well as fixed vaccines. Emphasising partnership and proactive strategies, this article outlines the ambition to harness national and local system capabilities in the UK; to work in collaboration with potential pharmaceutical partners; and to seize the opportunity to set the pace for rapid advances in cancer vaccine technology.
Approximately five million people live with diabetes in the UK, and diabetes care accounts for approximately 10 percent of the National Health Service (NHS) budget. Wales has the highest prevalence of diabetes of any country in the UK. Educating people on how best to manage their condition can minimize associated complications. Digital platforms can aid self-management and improve risk factors.
Methods
This rapid review aimed to address the following research question: What is the clinical and cost effectiveness of digital platforms for personalized diabetes management, to inform decision-making and guidance in the NHS? Digital platforms in this rapid review could be driven by artificial intelligence, by machine learning, or through the application of data rules. Clinical and health-economic evidence published since 2008, along with patient, carer, and family perspectives relevant to Wales, was identified by searching relevant databases such as MEDLINE. One relevant economic analysis was identified, which was conducted using the UK Prospective Diabetes Study Outcomes Model 2.
Results
Outcomes included improvements in glycemic control, healthcare resource use (e.g., total number of general practitioner and emergency department visits per year), reduction in body weight among participants, reduction in cholesterol levels, and positive patient-reported outcome measures. An economic analysis identified in the literature review found that a digital platform was more effective and less costly than routine diabetes care and was, therefore, dominant. The analysis was based on observed reductions in glycosylated hemoglobin levels from a database of people with diabetes in NHS Scotland.
Conclusions
The evidence suggests there are benefits in using digital platforms to aid self-management among people with diabetes. Studies reporting on glycosylated hemoglobin levels found statistically significant and clinically important benefits from using digital platforms. Digital platforms also have the potential to be more effective and less costly than routine diabetes care in Wales and the UK.
England's primary care service for psychological therapy (Improving Access to Psychological Therapies [IAPT]) treats anxiety and depression, with a target recovery rate of 50%. Identifying the characteristics of patients who achieve recovery may assist in optimizing future treatment. This naturalistic cohort study investigated pre-therapy characteristics as predictors of recovery and improvement after IAPT therapy.
Methods
In a cohort of patients attending an IAPT service in South London, we recruited 263 participants and conducted a baseline interview to gather extensive pre-therapy characteristics. Bayesian prediction models and variable selection were used to identify baseline variables prognostic of good clinical outcomes. Recovery (the primary outcome) was defined using IAPT service-defined score thresholds for both depression (Patient Health Questionnaire [PHQ-9]) and anxiety (Generalized Anxiety Disorder [GAD-7]). Depression and anxiety outcomes were also evaluated as standalone (PHQ-9/GAD-7) scores after therapy. Prediction model performance metrics were estimated using cross-validation.
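The abstract does not name the software used, so the following is an illustrative sketch only: it pairs a sparsity-inducing Bayesian regression (scikit-learn's ARD regression, standing in for the authors' Bayesian prediction models with variable selection) with cross-validated performance estimation. The data, predictor count, and outcome below are placeholders, not the study's.

```python
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for 263 participants and their baseline measures
rng = np.random.default_rng(0)
X = rng.normal(size=(263, 20))                       # 20 hypothetical baseline predictors
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(size=263)   # synthetic post-therapy severity score

# ARD places sparsity-inducing priors on the coefficients, shrinking
# irrelevant predictors toward zero (a simple form of Bayesian variable selection)
model = make_pipeline(StandardScaler(), ARDRegression())
scores = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")   # analogous to the 26-37% variance explained
```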
Results
Predictor variables explained 26% (recovery), 37% (depression), and 31% (anxiety) of the variance in outcomes. Variables prognostic of recovery were lower pre-treatment depression severity and not meeting criteria for obsessive compulsive disorder. Post-therapy depression and anxiety severity scores were predicted by lower symptom severity and higher ratings of health-related quality of life (EuroQol questionnaire [EQ5D]) at baseline.
Conclusion
Almost a third of the variance in clinical outcomes was explained by pre-treatment symptom severity scores. These measures are rapidly accessible in healthcare services. If replicated in external samples, the early identification of patients who are less likely to recover may facilitate earlier triage to alternative interventions.
Childhood bullying is a public health priority. We evaluated the effectiveness and costs of KiVa, a whole-school anti-bullying program that targets the peer context.
Methods
A two-arm pragmatic multicenter cluster randomized controlled trial with embedded economic evaluation. Schools were randomized to KiVa-intervention or usual practice (UP), stratified on school size and Free School Meals eligibility. KiVa was delivered by trained teachers across one school year. Follow-up was at 12 months post randomization. Primary outcome: student-reported bullying-victimization; secondary outcomes: self-reported bullying-perpetration, participant roles in bullying, empathy and teacher-reported Strengths and Difficulties Questionnaire. Outcomes were analyzed using multilevel linear and logistic regression models.
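As an illustrative sketch only (the exact model specification is not given in the abstract), a clustered binary outcome such as student-reported victimization can be analyzed with a logistic model that respects students being nested in randomized schools; here a GEE with exchangeable within-school correlation is used as one pragmatic choice. All names and values below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in: students nested within schools, arm assigned at school level
rng = np.random.default_rng(1)
n_schools, n_per = 118, 90
school = np.repeat(np.arange(n_schools), n_per)
kiva = np.repeat(rng.integers(0, 2, n_schools), n_per)
school_effect = np.repeat(rng.normal(0.0, 0.3, n_schools), n_per)
logit = -1.3 - 0.14 * kiva + school_effect            # roughly OR 0.87 for KiVa
victim = (rng.random(n_schools * n_per) < 1 / (1 + np.exp(-logit))).astype(float)
df = pd.DataFrame({"school": school, "kiva": kiva, "victimized": victim})

# Logistic GEE with exchangeable correlation within schools, one pragmatic
# alternative to a random-intercept multilevel logistic model
res = smf.gee("victimized ~ kiva", groups="school", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(res.params["kiva"]))                     # intervention odds ratio
```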
Findings
Between 8/11/2019 and 12/02/2021, 118 primary schools were recruited in four trial sites, with 11 111 students in the primary analysis (KiVa-intervention: n = 5944, 49.6% female; UP: n = 5167, 49.0% female). At baseline, 21.6% of students reported being bullied in the UP group and 20.3% in the KiVa-intervention group, reducing to 20.7% in the UP group and 17.7% in the KiVa-intervention group at follow-up (odds ratio 0.87; 95% confidence interval 0.78 to 0.97; p value = 0.009). Students in the KiVa group had significantly higher empathy and reduced peer problems. We found no differences in bullying perpetration, school wellbeing, or emotional or behavioral problems. A priori subgroup analyses revealed no differences in effectiveness by socioeconomic gradient or by gender. KiVa cost £20.78 more per pupil than usual practice in the first year, and £1.65 more per pupil in subsequent years.
Interpretation
The KiVa anti-bullying program is effective at reducing bullying victimization, with small-to-moderate effects of public health importance.
Funding
The study was funded by the UK National Institute for Health and Care Research (NIHR) Public Health Research program (17-92-11). Intervention costs were funded by the Rayne Foundation, GwE North Wales Regional School Improvement Service, Children's Services, Devon County Council and HSBC Global Services (UK) Ltd.
A surveillance system for measuring patient-level antimicrobial adverse drug events (ADEs) may support stewardship activities; however, design and implementation questions remain. In this national survey, stewardship experts favored simple, laboratory-based ADE definitions, although there were tensions among feasibility, the ability to identify attribution without chart review, and the importance of specific ADEs.
A Nebraska statewide webinar series was initiated during the coronavirus disease 2019 (COVID-19) pandemic for long-term care (LTC) and acute care/outpatient (AC) facilities. An impact survey was completed by 48 of 96 AC and 109 of 429 LTC facilities. The majority reported increased regulatory awareness (AC: 65%, LTC: 54%) and updated COVID-19 (AC: 61%, LTC: 69%) and general infection prevention (AC: 61%, LTC: 60%) policies.
‘Inhalants’ have been associated with poorer mental health in adolescence, but little is known about associations with specific types of inhalants.
Aims
We aimed to investigate associations of using volatile substances, nitrous oxide and alkyl nitrates with mental health problems in adolescence.
Method
We conducted a cross-sectional analysis using data from 13- to 14-year-old adolescents across England and Wales collected between September 2019 and March 2020. Multilevel logistic regression examined associations between lifetime use of volatile substances, nitrous oxide and alkyl nitrates with self-reported symptoms of probable depression, anxiety, conduct disorder and auditory hallucinations.
Results
Of the 6672 adolescents in the study, 5.1% reported use of nitrous oxide, 4.9% volatile solvents and 0.1% alkyl nitrates. After accounting for multiple testing, adolescents who had used volatile solvents were significantly more likely than those who had not to report probable depressive disorder (odds ratio = 4.59, 95% CI 3.58, 5.88), anxiety (odds ratio = 3.47, 95% CI 2.72, 4.43), conduct disorder (odds ratio = 7.52, 95% CI 5.80, 9.76) or auditory hallucinations (odds ratio = 5.35, 95% CI 4.00, 7.17). Nitrous oxide use was significantly associated with probable depression and conduct disorder but not anxiety disorder or auditory hallucinations. Alkyl nitrate use was rare and not associated with mental health outcomes. Adjustment for use of other inhalants, tobacco and alcohol resulted in marked attenuation, but socioeconomic disadvantage had little effect.
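For context on the multiple-testing adjustment referenced above, here is a minimal, hypothetical illustration of a Bonferroni correction (not the authors' code) using statsmodels; the p-values are made up for the example.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical unadjusted p-values for the four outcomes tested for one inhalant
pvals = [0.0004, 0.004, 0.02, 0.03]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(p_adj)    # each p-value multiplied by the number of tests (capped at 1)
print(reject)   # which associations survive the correction
```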
Conclusion
To our knowledge, this study provides the first general population evidence that volatile solvents and nitrous oxide are associated with probable mental health disorders in adolescence. These findings require replication, ideally with prospective designs.
College student food insecurity (FI) is a public health concern. Programming and policies to support students have expanded but utilisation is often limited. The aim of this study was to summarise the barriers to accessing college FI programming guided by the social ecological model (SEM) framework. A scoping review of peer-reviewed literature included an electronic search conducted in MEDLINE, ERIC, and PubMed databases, with a secondary search in Google Scholar. Of the 138 articles identified, 18 articles met eligibility criteria and were included. Articles primarily encompassed organisational (17/18) level barriers, followed by individual (15/18), relationship (15/18), community (9/18), and policy (6/18) levels. Individual barriers included seven themes: Knowledge of Process, Awareness, Limited Time or Schedules, Personal Transportation, Internal Stigma, Perception of Need, and Type of Student. Four relationship barriers were identified: External Stigma, Comparing Need, Limited Availability Causes Negative Perceptions, and Staff. Ten barrier themes comprised the organisational level: Application Process, Operational Process, Location, Hours of Operation, Food Quality, Food Quantity, Food Desirability or Variety of Food, Marketing Materials, Awareness of the Program, and COVID-19 Restrictions. Two barrier themes were identified at the community level, Public Transportation and Awareness of SNAP, while one barrier theme, SNAP Eligibility and Process, encompassed the policy level. Higher education stakeholders should seek to overcome these barriers to the use of food programmes as a means to address the issue of college FI. This review offers recommendations to overcome these barriers at each SEM level.
To describe neutropenic fever management practices among healthcare institutions.
Design:
Survey.
Participants:
Members of the Society for Healthcare Epidemiology of America Research Network (SRN) representing healthcare institutions within the United States.
Methods:
An electronic survey was distributed to SRN representatives, with questions pertaining to demographics, antimicrobial prophylaxis, supportive care, and neutropenic fever management. The survey was distributed from fall 2022 through spring 2023.
Results:
In total, 40 complete responses were recorded (54.8% response rate), with respondent institutions accounting for approximately 15.7% of 2021 US hematologic malignancy hospitalizations and 14.9% of 2020 US bone marrow transplantations. Most entities had institutional guidelines for neutropenic fever management (35, 87.5%) and prophylaxis (31, 77.5%), and first-line treatment included IV antipseudomonal antibiotics (35, 87.5% cephalosporin; 5, 12.5% penicillin; 0, 0% carbapenem).
We observed significant heterogeneity in treatment course decisions: roughly half (18, 45.0%) of respondents continued antibiotics until neutrophil recovery, while the remainder had criteria for de-escalation prior to neutrophil recovery. Respondents were more willing to de-escalate prior to neutrophil recovery in patients with identified clinical (27, 67.5% with pneumonia) or microbiological (30, 75.0% with bacteremia) sources after dedicated treatment courses.
Conclusions:
We found substantial variation in the practice of de-escalation of empiric antibiotics relative to neutrophil recovery, highlighting a need for more robust evidence for and adoption of this practice. No respondents use carbapenems as first-line therapy, which compares favorably with prior survey studies conducted in other countries.
During the coronavirus disease 2019 pandemic, mathematical modeling has been widely used to understand epidemiological burden, trends, and transmission dynamics, to facilitate policy decisions, and, to a lesser extent, to evaluate infection prevention and control (IPC) measures. This review highlights the added value of using conventional epidemiology and modeling approaches to address the complexity of healthcare-associated infections (HAI) and antimicrobial resistance. It demonstrates how epidemiological surveillance data and modeling can be used to infer transmission dynamics in healthcare settings and to forecast healthcare impact, how modeling can be used to improve the validity of interpretation of epidemiological surveillance data, how modeling can be used to estimate the impact of IPC interventions, and how modeling can be used to guide IPC and antimicrobial treatment and stewardship decision-making. There are several priority areas for expanding the use of modeling in healthcare epidemiology and IPC. Importantly, modeling should be viewed as complementary to conventional healthcare epidemiological approaches, and this requires collaboration and active coordination between IPC, healthcare epidemiology, and mathematical modeling groups.
Several factors shape the neurodevelopmental trajectory. A key area of focus in neurodevelopmental research is to estimate the factors that have maximal influence on the brain and can tip the balance from typical to atypical development.
Methods
Utilizing a dissimilarity maximization algorithm on the dynamic mode decomposition (DMD) of the resting state functional MRI data, we classified subjects from the cVEDA neurodevelopmental cohort (n = 987, aged 6–23 years) into homogeneously patterned DMD (representing typical development in 809 subjects) and heterogeneously patterned DMD (indicative of atypical development in 178 subjects).
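DMD itself is a concrete algorithm: it fits a best-fit linear operator A with x_{t+1} ≈ A x_t to a multivariate time series and eigendecomposes it into spatial modes with associated frequencies and growth rates. Below is a minimal exact-DMD sketch in Python, illustrative only; the study's dissimilarity-maximization step and fMRI preprocessing are not shown, and the demo input is a toy signal.

```python
import numpy as np

def dmd_modes(X, rank=10):
    """Exact dynamic mode decomposition of a multivariate time series.

    X: array of shape (n_channels, n_timepoints), e.g. parcellated
    resting-state fMRI signals (hypothetical input). Returns the
    eigenvalues and spatial modes of the best-fit linear operator A
    with x_{t+1} ~ A x_t.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                      # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = min(rank, len(s))
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Reduced-order operator: A_tilde = U* X2 V S^-1
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return eigvals, modes

# Demo on a toy oscillatory signal (stand-in for parcellated fMRI time series)
t = np.linspace(0, 10, 200)
X = np.vstack([np.sin(2 * np.pi * 0.2 * t), np.cos(2 * np.pi * 0.2 * t)])
eigvals, modes = dmd_modes(X, rank=2)
print(np.abs(eigvals))   # ~1.0 for a sustained oscillation
```

The eigenvalue magnitudes indicate growth or decay of each mode and their complex angles indicate oscillation frequency, which is what makes the mode patterns comparable across subjects.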
Results
Significant DMD differences were primarily identified in the default mode network (DMN) regions across these groups (p < 0.05, Bonferroni corrected). While the groups were comparable in cognitive performance, the atypical group had more frequent exposure to adversities and higher levels of abuse (p < 0.05, Bonferroni corrected). Upon evaluating brain-behavior correlations, we found that correlation patterns between adversity and DMN dynamic modes exhibited age-dependent variations for atypical subjects, hinting at differential utilization of the DMN due to chronic adversities.
Conclusion
Adversities (particularly abuse) maximally influence the DMN during neurodevelopment and lead to failure in the development of a coherent DMN system. While the DMN's integrity is preserved in typical development, atypically developing individuals show contrasting age-dependent variability. The flexibility of the DMN might be a compensatory mechanism to protect an individual in an abusive environment. However, such adaptability might deprive the neural system of the faculties of normal functioning and may incur long-term effects on the psyche.
Neurocognitive decline is prevalent in patients with metastatic cancers and is attributed to various disease, treatment, and individual factors. Whether the presence of brain metastases (BrMets) contributes to neurocognitive decline is unclear. The aims of this study were to examine neurocognitive performance in BrMets patients and to compare findings with those of patients with advanced metastatic cancer without BrMets. Here, we present baseline findings from an ongoing, prospective longitudinal study.
Participants and Methods:
English-speaking adults with advanced metastatic cancers were recruited from the brain metastases and lung clinics at the Princess Margaret Cancer Centre. Participants completed standardized tests (WTAR, HVLT-R, BVMT-R, COWAT, Trail Making Test, WAIS-IV Digit Span) and questionnaires (FACT-Cog v3, EORTC QLQ-C30 and BN20, PROMIS Depression (8a) and Anxiety (6a)) prior to cranial radiotherapy for those who required it. Test scores were converted to z-scores based on published normative data and averaged to create a composite neurocognitive performance score and domain scores for memory, attention/working memory, processing speed and executive function. Neurocognitive impairment was defined according to International Cancer and Cognition Task Force (ICCTF) criteria. Univariate and multivariate regressions were used to identify individual, disease and treatment variables that predict cognitive performance.
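As a hedged illustration of the scoring pipeline described above, the sketch below z-scores raw test scores against published norms, averages them into a composite, and applies one common operationalization of the ICCTF impairment criteria (two or more tests at z ≤ −1.5, or any test at z ≤ −2.0). The normative values and raw scores are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical normative (mean, SD) pairs and raw scores for two tests;
# illustrative values only, not the published norms used in the study
norms = {"HVLT_R_total": (26.0, 4.5), "COWAT": (38.0, 9.5)}
raw   = {"HVLT_R_total": 19.0,        "COWAT": 30.0}

# Convert each raw score to a z-score against its norm, then average
z = {test: (raw[test] - m) / sd for test, (m, sd) in norms.items()}
composite = np.mean(list(z.values()))   # composite neurocognitive performance score

# One common ICCTF-style rule: impaired if >=2 tests at z <= -1.5
# or any single test at z <= -2.0
impaired = (sum(v <= -1.5 for v in z.values()) >= 2
            or any(v <= -2.0 for v in z.values()))
print(z, composite, impaired)
```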
Results:
In total, 76 patients with BrMets were included (mean (SD) age: 63.2 (11.7) years; 53% male). Overall, 61% experienced neurocognitive impairment; impairment rates varied across domains (38% memory, 39% executive functioning, 13% attention/working memory, 8% processing speed). BrMets quantity, volume, and location were not associated with neurocognitive performance. Better performance status (ECOG; β [95% CI]: −0.38 [−0.70, −0.05], p = 0.021), higher premorbid IQ (0.34 [0.10, 0.58], p = 0.005) and greater cognitive concerns (0.02 [−3.9e−4, 0.04], p = 0.051) were associated with better neurocognitive performance in univariate analyses. Only premorbid IQ (0.37 [0.14, 0.60], p = 0.003) and cognitive concerns (0.02 [0.0004, 0.03], p = 0.05) remained significant in multivariate analysis. We also recruited 31 patients with metastatic non-small cell lung cancer (mNSCLC) with no known BrMets (age: 67.5 (8.3); 32% male) and compared them to the subgroup of BrMets patients in our sample with mNSCLC (N = 32; age: 67.8 (11.7); 53% male). We found no differences in impairment rates (BrMets/non-BrMets: cognitive composite, 59%/55%; memory, 31%/32%; executive functioning, 35%/29%; attention/working memory, 16%/13%; processing speed, 7%/6%; Wilcoxon rank-sum test, all p-values > 0.5). The presence or absence of BrMets did not predict neurocognitive performance. Among patients with mNSCLC, higher education (0.11 [0.03, 0.18], p = 0.004), higher premorbid IQ (0.36 [0.12, 0.61], p = 0.003), fewer days since primary diagnosis (−0.0029 [−0.0052, −0.0005], p = 0.015), fewer pack-years smoking history (−0.01 [−0.02, −0.001], p = 0.027) and greater cognitive concerns (0.02 [7e−5, 0.04], p = 0.045) were associated with better neurocognitive performance in univariate analyses; only premorbid IQ (0.26 [0.02, 0.51], p = 0.04) and cognitive concerns (0.02 [0.01, 0.04], p = 0.02) remained significant in multivariate analysis.
Conclusions:
Cognitive impairment is prevalent in patients with advanced metastatic cancers, particularly affecting memory and executive functioning. However, 39% of patients in our sample were not impaired in any domain. We found no associations between the presence of BrMets and neurocognitive function in patients with advanced cancers prior to cranial radiation. Premorbid IQ, a proxy for cognitive reserve, was associated with cognitive outcomes in our sample. Our longitudinal study will allow us to identify risk and resilience factors associated with neurocognitive changes in patients with metastatic cancers to better inform therapeutic interventions in this population.
In the UK, over 12,400 cases of head and neck cancer are reported each year (2021). Pharyngolaryngeal biopsy under local anesthetic (OLB) may improve the speed of diagnosis and treatment of head and neck cancers. The Scottish Health Technologies Group (SHTG) published advice on this technology in 2018. Since then, additional evidence has been published, warranting a health technology assessment (HTA) for Wales. The aim of this review was to provide an update on the clinical and cost effectiveness of OLB compared with biopsy in an operating theatre (OTB) under general anesthetic, to inform decision making in Wales.
Methods
A rapid review of relevant databases was undertaken, covering clinical evidence, health economics and patient perspectives published since 2018 and relevant to Wales. Health Technology Wales (HTW) developed a de novo cost-utility analysis comparing OLB to OTB over a lifetime horizon. Inputs were sourced from the SHTG budget impact analysis and updated with values more relevant to a Welsh setting.
Results
From consultation to biopsy procedure, the mean number of days was 1.3 for OLB compared with 17.4 for OTB (p < 0.05). The mean time from consultation to start of treatment was 27 days for OLB compared with 41.5 days for OTB (p < 0.05). The economic analysis found a resulting ICER of GBP21,011 (EUR23,824.23) in a population of 2,183 at-risk patients. Because OLB was associated with both lower costs (GBP816 (EUR925.26) saved per person) and slightly fewer quality-adjusted life years (−0.04) than OTB, this ICER represents savings per QALY forgone; on this basis, OLB was considered a cost-effective diagnostic strategy.
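With the rounded figures reported above, the ICER arithmetic can be approximately reproduced (a sketch; the published model will have used unrounded inputs):

```python
delta_cost = -816.0   # GBP saved per person with OLB (rounded, from the abstract)
delta_qaly = -0.04    # QALYs lost per person with OLB (rounded)
icer = delta_cost / delta_qaly   # savings per QALY forgone (south-west quadrant)
print(round(icer))    # ~20,400 GBP/QALY, close to the reported GBP21,011 given rounding
```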
Conclusions
HTW guidance recommended the use of OLB within the diagnostic pathway for head and neck cancers in Wales, given its potential to reduce the time to diagnosis and treatment in a cost-saving way. For people with a positive test, OLB is sufficient to confirm a diagnosis, but it should not be used to rule out a diagnosis.
Background:
Neutropenic fever management decisions are complex and result in prolonged durations of broad-spectrum antibiotics. Strategies for antibiotic stewardship in this context have been studied, including de-escalation of antibiotics prior to resolution of neutropenia, but their implementation remains unclear. Here, we present the first survey study to describe real-world neutropenic fever management practices in US healthcare institutions, with particular emphasis on de-escalation strategies after initiation of broad-spectrum antibiotics.
Methods:
Using REDCap, we conducted a survey of US healthcare institutions through the SHEA Research Network (SRN). Questions pertained to antimicrobial prophylaxis and supportive care in the management of oncology patients and to neutropenic fever management (including specific antimicrobial choices and clinical scenarios). Hematologic malignancy hospitalization (2020) and bone-marrow transplantation (2016–2020) volumes were obtained from CMS and Health Resources & Services Administration databases, respectively.
Results:
Overall, 23 complete responses were recorded (response rate, 35.4%). Collectively, these entities account for ~11.0% of hematologic malignancy hospitalizations and 13.3% of bone marrow transplantations nationwide. Of 23 facilities, 19 had institutional guidelines for neutropenic fever management and 18 had institutional guidelines for prophylaxis, with similar definitions of neutropenic fever. First-line treatment universally utilized antipseudomonal broad-spectrum IV antibiotics (20 of 23 use a cephalosporin, 3 of 23 use a penicillin agent, and no respondents use a carbapenem). Fluoroquinolone prophylaxis was common for leukemia induction patients (18 of 23) but mixed for bone-marrow transplantation (10 of 23). We observed significant heterogeneity in treatment decisions. For stable neutropenic fever patients with no clinical source of infection identified, 13 of 23 respondents continued IV antibiotics until absolute neutrophil count (ANC) recovery; the remainder had criteria for de-escalation back to prophylaxis prior to this (eg, a fever-free period). Respondents were more willing to de-escalate prior to ANC recovery in patients with identified clinical sources (14 of 23 de-escalate in patients with pneumonia) or microbiological sources (15 of 23 de-escalate in patients with bacteremia) after dedicated treatment courses. In free-text responses, several respondents described opportunities for more systematic de-escalation for antimicrobial stewardship in these scenarios.
Conclusions:
Our results illustrate the real-world management of neutropenic fever in US hospitals, including initiation of therapy, prophylaxis, and treatment duration. We found significant heterogeneity in de-escalation of empiric antibiotics relative to ANC recovery, highlighting a need for more robust evidence for and adoption of this practice.
Data from a national survey of 348 U.S. sports field managers were used to examine the effects of participation in Cooperative Extension events on the adoption of turfgrass weed management practices. Of the respondents, 94% had attended at least one event in the previous 3 yr. Of this 94%, 97% reported adopting at least one practice as a result of knowledge gained at an Extension turfgrass event. Half of the respondents had adopted four or more practices; a third adopted five or more practices. Nonchemical, cultural practices were the most-adopted practices (65% of respondents). Multiple regression analysis was used to examine factors explaining practice adoption and Extension event attendance. Compared to attending one event, attending three events increased total adoption by an average of one practice. Attending four or more events increased total adoption by two practices. Attending four or more events (compared to one event) increased the odds of adopting six individual practices by 3- to 6-fold, depending on the practice. This suggests that practice adoption could be enhanced by encouraging repeat attendance among past Extension event attendees. Manager experience was a statistically significant predictor of the number of Extension events attended but a poor direct predictor of practice adoption. Experience does not appear to increase adoption directly, but indirectly, via its impact on Extension event attendance. In addition to questions about weed management generally, the survey asked questions specifically about annual bluegrass management. Respondents were asked to rank seven sources of information for their helpfulness in managing annual bluegrass. There was no single dominant information source, but Extension was ranked more than any other source as the most helpful (by 22% of the respondents) and was ranked among the top three by 53%, closely behind field representative/local distributor sources at 54%.
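The survey's regression of practice-adoption counts on event attendance could take several forms; as one hypothetical sketch (synthetic data, not the authors' specification), a Poisson count model captures the "more events attended, more practices adopted" relationship:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey: adopted-practice counts vs events attended
rng = np.random.default_rng(2)
n = 348
df = pd.DataFrame({"events": rng.integers(1, 6, n),
                   "experience_yrs": rng.integers(1, 30, n)})
mu = np.exp(0.8 + 0.18 * df["events"])    # more events -> more practices adopted
df["adopted"] = rng.poisson(mu)

# Count-data regression analogous to modeling total practice adoption
res = smf.poisson("adopted ~ events + experience_yrs", data=df).fit(disp=False)
print(np.exp(res.params["events"]))       # multiplicative effect per additional event
```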
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.