Because pediatric anxiety disorders precede the onset of many other problems, successful prediction of response to the first-line treatment, cognitive-behavioral therapy (CBT), could have a major impact. This study evaluates whether structural and resting-state functional magnetic resonance imaging can predict post-CBT anxiety symptoms.
Methods
Two datasets were studied: (A) n = 54 subjects with an anxiety diagnosis who received 12 weeks of CBT, and (B) n = 15 subjects treated for 8 weeks. Connectome predictive modeling (CPM) was used to predict treatment response, as assessed with the Pediatric Anxiety Rating Scale (PARS). The main analysis included network edges positively correlated with treatment outcome, along with age, sex, and baseline anxiety severity, as predictors. Results from alternative models and analyses are also presented. Model assessments utilized 1,000 bootstraps, yielding 95% CIs for R2, r, and mean absolute error (MAE).
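To make the evaluation procedure concrete, the sketch below shows one way percentile-bootstrap 95% CIs for MAE, R2, and r can be computed from observed and predicted scores. It is a minimal illustration with synthetic data and hypothetical variable names, not the study's code.

```python
# Illustrative sketch (not the authors' code): bootstrap 95% CIs for MAE, R^2, and Pearson r,
# assuming observed and CPM-predicted post-treatment PARS scores are available as arrays.
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def bootstrap_metrics(y_true, y_pred, n_boot=1000):
    """Return 95% percentile CIs for MAE, R^2, and Pearson r over n_boot resamples."""
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample subjects with replacement
        yt, yp = y_true[idx], y_pred[idx]
        stats.append((mean_absolute_error(yt, yp),
                      r2_score(yt, yp),
                      pearsonr(yt, yp)[0]))
    lo, hi = np.percentile(stats, [2.5, 97.5], axis=0)
    return {name: (l, h) for name, l, h in zip(["MAE", "R2", "r"], lo, hi)}

# Example with synthetic data standing in for the n = 54 training sample:
y_true = rng.normal(15, 5, 54)                         # observed post-CBT PARS (synthetic)
y_pred = y_true + rng.normal(0, 4, 54)                 # model predictions (synthetic)
print(bootstrap_metrics(y_true, y_pred))
```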
Results
The main model showed an MAE of approximately 3.5 (95% CI 3.1–3.8) points, an R2 of 0.08 [−0.14 to 0.26], and an r of 0.38 [0.24–0.511]. When this model was tested in the left-out sample (B), the results were similar: an MAE of 3.4 [2.8–4.7], an R2 of −0.65 [−2.29 to 0.16], and an r of 0.4 [0.24–0.54]. The anatomical metrics showed a similar pattern, with models yielding overall low R2.
Conclusions
The analysis showed that models based on earlier promising results failed to predict clinical outcomes. Despite the small sample size, this study does not support the extensive use of CPM to predict outcomes in pediatric anxiety.
We measure other-regarding behavior in samples from three related populations in the upper Midwest of the United States: college students, non-student adults from the community surrounding the college, and adult trainee truckers in a residential training program. The use of typical experimental economics recruitment procedures made the first two groups substantially self-selected. Because the context dramatically reduced the opportunity cost of participating, 91% of the adult trainees solicited participated, leaving little scope for self-selection in this sample. We find no differences in the elicited other-regarding preferences between the self-selected adults and the adult trainees, suggesting that selection is unlikely to bias inferences about the prevalence of other-regarding preferences among non-student adult subjects. Our data also reject the more specific hypothesis that approval-seeking subjects are the ones most likely to select into experiments. Finally, we observe a large difference between self-selected college students and self-selected adults: the students appear considerably less pro-social.
Previous studies identified clusters of first-episode psychosis (FEP) patients based on cognition and premorbid adjustment. This study examined a range of socio-environmental risk factors associated with clusters of FEP, aiming (a) to compare clusters of FEP and community controls using the Maudsley Environmental Risk Score for psychosis (ERS), a weighted sum of the following risks: paternal age, childhood adversities, cannabis use, and ethnic minority membership; and (b) to explore whether specific environmental risk factors distinguish patient clusters from one another and from controls.
Methods
A univariable general linear model (GLM) compared the ERS between 1,263 community controls and clusters derived from 802 FEP patients from the EU-GEI study, namely low-cognitive-functioning (n = 223), high-cognitive-functioning (n = 205), intermediate (n = 224), and deteriorating (n = 150). A multivariable GLM compared clusters and controls on the different exposures included in the ERS.
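As a rough illustration of the cluster-versus-control comparison, a general linear model with a categorical group term can be fitted as below. The data frame, column names, and synthetic values are hypothetical stand-ins for the EU-GEI variables, not the study's code.

```python
# Illustrative sketch: GLM of the environmental risk score (ERS) on cluster membership,
# with community controls as the reference category. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
groups = ["control", "high", "intermediate", "low", "deteriorating"]
df = pd.DataFrame({
    "group": rng.choice(groups, size=600),
    "ers": rng.normal(0, 2, size=600),
})

model = smf.ols("ers ~ C(group, Treatment(reference='control'))", data=df).fit()
print(model.params)      # beta for each cluster relative to controls
print(model.conf_int())  # 95% CIs
```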
Results
The ERS was higher in all clusters compared with controls, most markedly in the deteriorating (β = 2.8, 95% CI 2.3–3.4, η2 = 0.049) and the low-cognitive-functioning cluster (β = 2.4, 95% CI 1.9–2.8, η2 = 0.049), and distinguished both from the high-cognitive-functioning cluster. The deteriorating cluster had higher cannabis exposure (mean difference = 0.48, 95% CI 0.49–0.91) than the intermediate cluster, which had identical IQ, and included more people from an ethnic minority (mean difference = 0.77, 95% CI 0.24–1.29) than the high-cognitive-functioning cluster.
Conclusions
High exposure to environmental risk factors might result in cognitive impairment and lower-than-expected functioning in individuals at the onset of psychosis. Some patients’ trajectories involved risk factors that could be modified by tailored interventions.
Coronavirus disease-2019 precipitated the rapid deployment of novel therapeutics, which led to operational and logistical challenges for healthcare organizations. Four health systems participated in a qualitative study to abstract lessons learned, challenges, and promising practices from implementing neutralizing monoclonal antibody (nMAb) treatment programs. Lessons are summarized under three themes that serve as critical building blocks for health systems to rapidly deploy novel therapeutics during a pandemic: (1) clinical workflows, (2) data infrastructure and platforms, and (3) governance and policy. Health systems must be sufficiently agile to quickly scale programs and resources in times of uncertainty. Real-time monitoring of programs, policies, and processes can help support better planning and improve program effectiveness. The lessons and promising practices shared in this study can be applied by health systems for distribution of novel therapeutics beyond nMAbs and toward future pandemics and public health emergencies.
The association between cannabis and psychosis is established, but the role of underlying genetics is unclear. We used data from the EU-GEI case-control study and UK Biobank to examine the independent and combined effect of heavy cannabis use and schizophrenia polygenic risk score (PRS) on risk for psychosis.
Methods
Genome-wide association study summary statistics from the Psychiatric Genomics Consortium and the Genomic Psychiatry Cohort were used to calculate schizophrenia and cannabis use disorder (CUD) PRS for 1,098 participants from the EU-GEI study and 143,600 from the UK Biobank. Both datasets had information on cannabis use.
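For readers unfamiliar with the construction, a polygenic risk score is conventionally a weighted sum of risk-allele dosages, with weights taken from the discovery GWAS summary statistics. In generic form (details such as clumping, thresholding, and ancestry adjustment in the specific pipeline used here are omitted):

$$\mathrm{PRS}_i \;=\; \sum_{j=1}^{M} \hat{\beta}_j \, G_{ij},$$

where $G_{ij} \in \{0,1,2\}$ is individual $i$'s count of effect alleles at variant $j$ and $\hat{\beta}_j$ is the corresponding effect-size estimate from the summary statistics.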
Results
In both samples, schizophrenia PRS and cannabis use independently increased risk of psychosis. Schizophrenia PRS was not associated with patterns of cannabis use in the EU-GEI cases or controls or UK Biobank cases. It was associated with lifetime and daily cannabis use among UK Biobank participants without psychosis, but the effect was substantially reduced when CUD PRS was included in the model. In the EU-GEI sample, regular users of high-potency cannabis had the highest odds of being a case independently of schizophrenia PRS (OR daily use high-potency cannabis adjusted for PRS = 5.09, 95% CI 3.08–8.43, p = 3.21 × 10−10). We found no evidence of interaction between schizophrenia PRS and patterns of cannabis use.
Conclusions
Regular use of high-potency cannabis remains a strong predictor of psychotic disorder independently of schizophrenia PRS, which does not seem to be associated with heavy cannabis use. These are important findings at a time of increasing use and potency of cannabis worldwide.
With the rise of online references, podcasts, webinars, self-test tools, and social media, it is worthwhile to understand whether textbooks continue to provide value in medical education and to assess the role they serve during fellowship training.
Methods:
A prospective mixed-methods study was conducted based on surveys disseminated to seven paediatric cardiology fellowship programmes around the world. Participants were asked to read an assigned chapter of Anderson’s Pediatric Cardiology, 4th Edition, and then complete the survey. Responses to open-ended questions were themed and grouped as appropriate.
Results:
The survey was completed by 36 participants. When asked about the content, organisation, and utility of the chapter, responses were generally positive (greater than 89%). Overall, the chapters were rated as relatively easy to read, scoring 6.91 ± 1.72 on a scale from 1 to 10, with higher values indicating better results. When asked to rank their preferred sources of educational content, participants ranked textbooks second, with in-person teaching ranked first. Several themes were identified, including the limitations of textbook use, their value, and ways to enhance learning from reading them. There was also a near-unanimous desire for more time to self-learn and read during fellowship.
Conclusions:
Textbooks are still highly valued by trainees. Nonetheless, many opportunities exist to improve how they are organised to deliver information optimally. Future efforts should look towards making them more accessible and towards including more resources for asynchronous learning.
We explore predictions of two models of one-dimensional capillary rise in rigid and partially saturated porous media. One is an existing model from the literature; the second is a free-boundary model based on Richards’ equation with two moving boundaries of the evolving partially saturated region. Both models involve the specification of saturation-dependent functions for local capillary pressure and permeability and connect to classical models for saturated porous media. Existing capillary-rise experiments show two notable regimes: (i) an early-time regime typically well described by classical capillary-rise theory in a fully saturated porous medium, and (ii) a long-time regime with anomalous dynamics in which the capillary-rise height may scale with a non-classical power law in time or exhibit more complicated dynamics. We demonstrate that the predictions of both models compare well with experimental capillary-rise data over the early- and long-time regimes gathered from three independent studies in the literature. The model predictions also shed light on recent scaling laws that relate the capillary pressure and permeability of the partially saturated medium to the capillary-rise height. We use these models to probe computationally observed relationships between permeability and capillary-rise height. We demonstrate that a recently proposed permeability scaling for the anomalous capillary-rise regime is indeed realized and is particularly apparent in the lower portion of the partially saturated medium. For our free-boundary model we also compute capillary pressure measures and show that these reveal the linear relation between the capillary pressure and capillary-rise height expected for a capillarity–gravity balance in the upper portion of the partially saturated porous medium.
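For orientation, the classical sharp-front description of capillary rise in a saturated column, which underlies the early-time regime mentioned above, can be written as a Darcy balance at the wetting front. This is standard textbook background rather than either of the two models studied here:

$$\phi\,\frac{\mathrm{d}h}{\mathrm{d}t} \;=\; \frac{k}{\mu}\,\frac{p_c - \rho g h}{h},$$

where $h(t)$ is the rise height, $\phi$ the porosity, $k$ the permeability, $\mu$ the liquid viscosity, $p_c$ the capillary pressure at the front, and $\rho g h$ the hydrostatic head. At early times the gravity term is negligible, giving $h \sim \sqrt{2 k p_c t/(\mu\phi)}$, the classical $t^{1/2}$ law; the anomalous long-time regime discussed above departs from this scaling.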
OBJECTIVES/GOALS: Contingency management (CM) procedures yield measurable reductions in cocaine use. This poster describes a trial aimed at using CM as a vehicle to show the biopsychosocial health benefits of reduced use, rather than total abstinence, the currently accepted metric for treatment efficacy. METHODS/STUDY POPULATION: In this 12-week, randomized controlled trial, CM was used to reduce cocaine use and evaluate associated improvements in cardiovascular, immune, and psychosocial well-being. Adults aged 18 and older who sought treatment for cocaine use (N=127) were randomized into three groups in a 1:1:1 ratio: High Value ($55) or Low Value ($13) CM incentives for cocaine-negative urine samples, or a non-contingent control group. They completed outpatient sessions three days per week across the 12-week intervention period, totaling 36 clinic visits and four post-treatment follow-up visits. During each visit, participants provided observed urine samples and completed several assays of biopsychosocial health. RESULTS/ANTICIPATED RESULTS: Preliminary findings from generalized linear mixed-effects modeling demonstrate the feasibility of the CM platform. Abstinence rates from cocaine use were significantly greater in the High Value group (47% negative; OR = 2.80; p = 0.01) relative to the Low Value (23% negative) and Control groups (24% negative). In the planned primary analysis, the level of cocaine use reduction based on cocaine-negative urine samples will serve as the primary predictor of cardiovascular (e.g., endothelin-1 levels), immune (e.g., IL-10 levels) and psychosocial (e.g., Addiction Severity Index) outcomes using results from the fitted models. DISCUSSION/SIGNIFICANCE: This research will advance the field by prospectively and comprehensively demonstrating the beneficial effects of reduced cocaine use. These outcomes can, in turn, support the adoption of reduced cocaine use as a viable alternative endpoint in cocaine treatment trials.
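The trial's primary analyses use generalized linear mixed-effects models; as a simplified, population-averaged stand-in, the sketch below fits a GEE logistic model to visit-level urine results clustered by participant and converts coefficients to odds ratios. All data and names are synthetic and illustrative, not the trial's analysis code.

```python
# Illustrative sketch only: GEE logistic model of cocaine-negative urine results by group,
# clustered by participant (a population-averaged stand-in for the trial's mixed-effects models).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_visits = 127, 36
group = rng.choice(["Control", "LowValue", "HighValue"], n_subj)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),   # repeated visits per participant
    "group": np.repeat(group, n_visits),
})
df["negative"] = rng.binomial(1, np.where(df["group"] == "HighValue", 0.47, 0.24))

model = smf.gee("negative ~ C(group, Treatment(reference='Control'))",
                groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(np.exp(res.params))      # odds ratios relative to the control group
print(np.exp(res.conf_int()))  # 95% CIs on the OR scale
```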
A review of hospital-onset COVID-19 cases revealed 8 definite, 106 probable, and 46 possible cases. Correlations between hospital-onset cases and both healthcare worker (HCW) and inpatient cases were noted in 2021. Rises in community measures were associated with rises in hospital-onset cases. Measures of community COVID-19 activity might predict hospital-onset cases.
To evaluate temporal trends in the prevalence of gram-negative bacteria (GNB) with difficult-to-treat resistance (DTR) in the southeastern United States. The secondary objective was to examine the use of novel β-lactams for GNB with DTR using both antimicrobial use (AU) and a novel metric of AU adjusted for microbiological burden (am-AU).
Design:
Retrospective, multicenter cohort study.
Setting:
Ten hospitals in the southeastern United States.
Methods:
GNB with DTR including Enterobacterales, Pseudomonas aeruginosa, and Acinetobacter spp. from 2015 to 2020 were tracked at each institution. Cumulative AU of novel β-lactams including ceftolozane/tazobactam, ceftazidime/avibactam, meropenem/vaborbactam, imipenem/cilastatin/relebactam, and cefiderocol in days of therapy (DOT) per 1,000 patient-days was calculated. Linear regression was utilized to examine temporal trends in the prevalence of GNB with DTR and cumulative AU of novel β-lactams.
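To illustrate the usage metric, the sketch below computes DOT per 1,000 patient-days from hypothetical hospital-year counts and fits a simple linear regression for the temporal trend. The numbers and column names are invented; this is not the study's code.

```python
# Illustrative sketch (hypothetical data): aggregate days of therapy (DOT) per 1,000
# patient-days for novel beta-lactams and test for a linear temporal trend.
import pandas as pd
import statsmodels.formula.api as smf

# One row per hospital-year: total novel beta-lactam DOT and total patient-days.
usage = pd.DataFrame({
    "year":         [2015, 2016, 2017, 2018, 2019, 2020] * 2,
    "hospital":     ["A"] * 6 + ["B"] * 6,
    "dot":          [110, 140, 150, 180, 210, 260, 90, 95, 120, 150, 170, 200],
    "patient_days": [90000, 91000, 92000, 93000, 92500, 94000,
                     60000, 61000, 62000, 62500, 63000, 64000],
})
usage["dot_per_1000_pd"] = usage["dot"] / usage["patient_days"] * 1000

# Simple linear regression of the rate on calendar year (one slope across hospitals).
trend = smf.ols("dot_per_1000_pd ~ year", data=usage).fit()
print(trend.params["year"], trend.pvalues["year"])   # annual change and its p-value
```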
Results:
The overall prevalence of GNB with DTR was 0.85% (1,223/143,638), with a numerical increase from 0.77% to 1.00% between 2015 and 2020 (P = .06). There was a statistically significant increase in DTR Enterobacterales (0.11% to 0.28%, P = .023) and DTR Acinetobacter spp. (4.2% to 18.8%, P = .002). Cumulative AU of novel β-lactams was 1.91 ± 1.95 DOT per 1,000 patient-days. When comparing cumulative mean AU and am-AU, there was an increase from 1.91 to 2.36 DOT/1,000 patient-days, with more than half of the hospitals shifting in ranking after adjustment for microbiological burden.
Conclusions:
The overall prevalence of GNB with DTR and the use of novel β-lactams remain low. However, the uptrend in the use of novel β-lactams after adjusting for microbiological burden suggests a higher utilization relative to the prevalence of GNB with DTR.
To understand how healthcare facilities employ contact precautions for patients with multidrug-resistant organisms (MDROs) in the post–coronavirus disease 2019 (COVID-19) era and explore changes since 2014.
Design:
Cross-sectional survey.
Participants:
Emerging Infections Network (EIN) physicians involved in infection prevention or hospital epidemiology.
Methods:
In September 2022, we sent via email an 8-question survey on contact precautions and adjunctive measures to reduce MDRO transmission in inpatient facilities. We also asked about changes since the COVID-19 pandemic. We used descriptive statistics to summarize data and compared results to a similar survey administered in 2014.
Results:
Of 708 EIN members, 283 (40%) responded to the survey, and 201 reported working in infection prevention. A majority of facilities (66% and 69%, respectively) routinely use contact precautions for methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE), compared with 93% and 92% in 2014. Nearly all (>90%) use contact precautions for Candida auris, carbapenem-resistant Enterobacterales (CRE), and carbapenem-resistant Acinetobacter baumannii. More variability was reported for carbapenem-resistant Pseudomonas aeruginosa and extended-spectrum β-lactamase–producing gram-negative organisms. Compared to 2014, fewer hospitals perform active surveillance for MRSA and VRE. Overall, 90% of facilities used chlorhexidine gluconate bathing in all or select inpatients, and 53% used ultraviolet light or hydrogen peroxide vapor disinfection at discharge. Many respondents (44%) reported changes to contact precautions since COVID-19 that remain in place.
Conclusions:
Heterogeneity exists in the use of transmission-based precautions and adjunctive infection prevention measures aimed at reducing MDRO transmission. This variation reflects a need for updated and specific guidance, as well as further research on the use of contact precautions in healthcare facilities.
Translation is the process of turning observations in the research laboratory, clinic, and community into interventions that improve people’s health. The Clinical and Translational Science Awards (CTSA) program is a National Center for Advancing Translational Sciences (NCATS) initiative to advance translational science and research. Currently, 64 “CTSA hubs” exist across the nation. Since 2006, the Houston-based Center for Clinical Translational Sciences (CCTS) has assembled a well-integrated, high-impact hub in Texas that includes six partner institutions within the state, encompassing ∼23,000 sq. miles and over 16 million residents. To achieve the NCATS goal of “more treatments for all people more quickly,” the CCTS promotes diversity and inclusion by integrating underrepresented populations into clinical studies, workforce training, and career development. In May 2023, we submitted the UM1 application and six “companion” proposals: K12, R25, T32-Predoctoral, T32-Postdoctoral, and RC2 (two applications). In October 2023, we received priority scores for the UM1 (22), K12 (25), T32-Predoctoral (20), and T32-Postdoctoral (23), which historically fall within the NCATS funding range. This report describes the grant preparation and submission approach, coupled with data from an internal survey designed to assimilate feedback from principal investigators, writers, reviewers, and administrative specialists. Herein, we share the challenges faced, the approaches developed, and the lessons learned.
Despite infection control guidance, sporadic nosocomial coronavirus disease 2019 (COVID-19) outbreaks occur. We describe a complex severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) cluster with interfacility spread during the SARS-CoV-2 δ (delta) pandemic surge in the Midwest.
Setting:
This study was conducted in (1) a hematology-oncology ward in a regional academic medical center and (2) a geographically distant acute rehabilitation hospital.
Methods:
We conducted contact tracing for each COVID-19 case to identify healthcare exposures within 14 days prior to diagnosis. Liberal testing was performed for asymptomatic carriage for patients and staff. Whole-genome sequencing was conducted for all available clinical isolates from patients and healthcare workers (HCWs) to identify transmission clusters.
Results:
In the immunosuppressed ward, 19 cases (4 patients, 15 HCWs) shared a genetically related SARS-CoV-2 isolate. Of these 4 patients, 3 died in the hospital or within 1 week of discharge. The suspected index case was a patient with new dyspnea, diagnosed during preprocedure screening. In the rehabilitation hospital, 20 cases (5 patients and 15 HCWs) were positive for COVID-19, of whom 2 patients and 3 HCWs had an isolate genetically related to the above cluster. The suspected index case was a patient from the immunosuppressed ward whose positive status was not detected at admission to the rehabilitation facility. Our response to this cluster included the following interventions in both settings: restricting visitors, learners, and overflow admissions; enforcing strict compliance with escalated PPE; providing free and frequent on-site testing for staff; and testing all patients prior to hospital discharge and transfer to other facilities.
Conclusions:
Stringent infection control measures can prevent nosocomial COVID-19 transmission in healthcare facilities with high-risk patients during pandemic surges. These interventions were successful in ending these outbreaks.
Patients in clinical and experimental neuropsychology settings are not always able to complete a given test because of limitations in their functioning, which can lead to frustration and wasted time. This has led researchers to examine the value of metrics that can be derived earlier in a test so as to ascertain and salvage useful information. The Trail Making Test (TMT) is an oft-utilized test of executive function and has been the focus of such exploration (e.g., first error vs. time to complete Trails B, which can be lengthy in dementia cases and lead to discontinuation and loss of scorable data; Christidi et al., 2013; Correia et al., 2015). The present retrospective study utilized archival chart review to examine the association between a patient's diagnosis and the occurrence of the first error on Trails B (TB1err).
Participants and Methods:
De-identified data were culled from adult private practice records (n=137) in the northeastern United States (the study was conducted in compliance with local IRB review). Trails A and B times, as well as Digit Span scores (for checking construct validity), were pulled from reports, and Trails B record forms were scored to extract the enumerated stimulus at which any first error was observed in the patient's rendering of the trail connecting alternating numbers and letters. Paired t-tests compared the average TB1err of normative individuals (no diagnosis) with that of patients with a primary diagnosis of mood disorder, traumatic brain injury (TBI), mild cognitive impairment (MCI), or dementia. Additionally, Pearson's correlations were computed comparing TB1err with Trails B time and another test of executive function (Digit Span backwards).
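The correlational piece of these analyses can be sketched as follows, with synthetic values standing in for the chart-review data; the variable names are illustrative only.

```python
# Illustrative sketch (synthetic values, not the chart-review data): Pearson correlations
# between the first-error position on Trails B (TB1err), Trails B completion time, and
# Digit Span backwards, as in the validity analyses described above.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 137
tb1err   = rng.integers(1, 25, n).astype(float)        # stimulus number of first Trails B error
trails_b = 300 - 6 * tb1err + rng.normal(0, 40, n)     # seconds to complete Trails B
dsb      = 2 + 0.2 * tb1err + rng.normal(0, 2, n)      # Digit Span backwards raw score

for name, other in [("Trails B time", trails_b), ("Digit Span backwards", dsb)]:
    r, p = pearsonr(tb1err, other)
    print(f"TB1err vs {name}: r = {r:.2f}, p = {p:.3g}")
```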
Results:
The order of diagnoses according to the average occurrence of the first error on Trails B (from later to sooner occurrence) was as follows: normative (no diagnosis), mood disorder, TBI, MCI, and finally dementia. There was a significant difference on this first-error metric (TB1err) when comparing normative and dementia patients (p = .03; 8.3 vs 4.2 for the average enumeration of the first error on Trails B). Furthermore, significant correlations were found between this derived TB1err metric and Digit Span backwards (r = .31; p < .001) as well as overall Trails B performance (r = −.39; p < .001).
Conclusions:
The present study adds to a growing literature on the utility of deriving test metrics to maximize useful data for clinical and experimental neuropsychology. Results from this retrospective chart review provide additional validity data supporting extraction of the first error on Trails B as a way to salvage useful data even when a patient may not be able to complete the full TMT as designed. In this preliminary sample there was a significant difference between normative and dementia patients on this derived TB1err metric, suggesting it is worthy of additional research to determine whether it can reliably differentiate various diagnoses. We expect this finding will also be useful in experimental designs wherein time is often limited and loss of data due to incomplete testing might be avoided by extracting the first error on Trails B.
Vitruvius’ book is chock full of bodies. It is by means of his oiled body that Dinocrates gains an audience with Alexander (2.praef.1), and by means of his naked one, and the equivalent body of water that it displaces, that Archimedes solves his well-known quandary (9.praef.9f.). The body is a living, vital thing (7.praef.2f.), even as it is a book, a body of work (7.praef.10), arising from a body of education (1.1.12). Most important of all, Vitruvius outlines a ‘body of architecture’ (corpus architecturae, 6.praef.7) to propagate and extend the reach of the deified Augustus, commander-in-chief (1.praef.1), and this idea arguably gives rise to that of the ‘body politic’ (corpus imperii). Vitruvius’ text has been interpreted in terms of these bodies in important studies by Indra Kagis McEwen and John Oksanish; both authors treat the body as replete with meaning, the site of contact between architecture and the political, between text and author. This chapter adds to our understanding of the repertoire of bodies in Vitruvius by looking at those earlier incarnations of circular and square bodies which Vitruvius inherits, and in terms of which he construes the human body as a site of ideal proportions.
Background: The CDC recommends routine use of contact precautions for patients infected or colonized with multidrug-resistant organisms (MDROs). There is variability in implementation of and adherence to this recommendation, which we hypothesized may have been exacerbated by the COVID-19 pandemic. Methods: In September 2022, we emailed an 8-question survey to Emerging Infections Network (EIN) physician members with infection prevention and hospital epidemiology responsibilities. The survey asked about the respondent’s primary hospital’s recommendations on transmission-based precautions, adjunctive measures to reduce MDRO transmission, and changes that occurred during the COVID-19 pandemic. We sent 2 reminder emails over a 1-month period. We used descriptive statistics to summarize the data and to compare results to a similar EIN survey (n = 336) administered in 2014 (Russell D, et al. doi:10.1017/ice.2015.246). Results: Of 708 EIN members, 283 (40%) responded to the survey, and 201 were involved in infection prevention. Most respondents were adult infectious diseases physicians (n = 228, 80%) with at least 15 years of experience (n = 174, 63%). Respondents were well distributed among community, academic, and nonuniversity teaching facilities (Table 1). Most respondents reported that their facility routinely used contact precautions for methicillin-resistant Staphylococcus aureus (MRSA, 66%) and vancomycin-resistant Enterococcus (VRE, 69%), compared with 93% and 92%, respectively, in the 2014 survey. Nearly all (>90%) reported using contact precautions for Candida auris, carbapenem-resistant Enterobacterales (CRE), and carbapenem-resistant Acinetobacter spp., but there was variability in the use of contact precautions for carbapenem-resistant Pseudomonas aeruginosa and extended-spectrum β-lactamase–producing gram-negative organisms. In 2014, 81% reported that their hospital performed active surveillance testing for MRSA, and in 2022 this rate fell to 54% (Table 2). The duration of contact precautions varied by MDRO (Table 3). Compared to 2014, in 2022 facilities were less likely to use contact precautions indefinitely for MRSA (18% vs 6%) and VRE (31% vs 11%). Also, 180 facilities (90%) performed chlorhexidine bathing in at least some inpatients, and 106 facilities (53%) used ultraviolet light or hydrogen peroxide vapor disinfection at discharge in some rooms. Furthermore, 89 facilities (44%) reported institutional changes to contact-precaution policies after the start of the COVID-19 pandemic that remain in place. Conclusions: Use of contact precautions for patients with MDROs is heterogeneous, and policies vary based on the organism. Although most hospitals still routinely use contact precautions for MRSA and VRE, this practice has declined substantially since 2014. Changes in contact-precaution policies may have been influenced by the COVID-19 pandemic, and more specific, contemporary public health guidance is needed to define who requires contact precautions and for what duration.
Objective: To characterize hospital-onset COVID-19 cases and to investigate the associations between these rates and population and hospital-level rates including trends in healthcare worker infections (HCW), community cases, and COVID-19 wastewater data. Design: Retrospective cohort study from January 1, 2021, to November 23, 2022. Setting: This study was conducted at a 589-bed urban Midwestern tertiary-care hospital system. Participants and interventions: The infection prevention team reviewed the electronic medical records (EMR) of patients who were admitted for >48 hours and subsequently tested positive for SARS-CoV-2 to determine whether COVID-19 was likely to be hospital-onset illness. Each case was further categorized as definite, probable, or possible based on viral sequencing, caregiver tracing analysis, symptoms, and cycle threshold values. Patients were excluded if there was a known exposure prior to admission. Clinical data including vaccination status were collected from the EMR. HCW case data were collected via our institution’s employee health services. Community cases and wastewater data were collected via the Wisconsin Department of Health Services database. Additionally, we evaluated the timing of changes in infection prevention guidance such as visitor restrictions. Results: In total, 156 patients met criteria for hospital-onset COVID-19. Overall, 6% of cases were categorized as definite, 24% were probable, and 70% were possible hospital-onset illness. Most patients were tested prior to a procedure (31%), for new symptoms (30%), and for discharge planning (30%). Also, 53% were symptomatic and 41% received treatment for their COVID-19. Overall, 38% of patients were immunocompromised and 27% were unvaccinated. Overall, 12% of patients died within 1 month of their positive SARS-CoV-2 test, and 11% required ICU admission during their hospital stay. Hospital-onset COVID-19 increased in fall of 2022. Specifically, October 2022 had 16 cases, whereas fall of 2021 (September–November) only had 3 cases total. Finally, similar peaks were observed in total cases by week between healthcare workers, county cases, and COVID-19 wastewater levels. These peaks correspond with the SARS-CoV-2 delta and omicron variant surges, respectively. Conclusions: Hospital-onset cases followed similar trends as population and hospital-level data throughout the study period. However, hospital-onset rate did not correlate as strongly in the second half of 2022 when cases were disproportionately high. Given that hospital-onset cases can result in significant morbidity, continued enhanced infection prevention efforts and low threshold for testing are warranted in the inpatient environment.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
This narrative review updates the evidence base for cancer-related post-traumatic stress disorder (PTSD). Databases were searched in December 2021, and included EMBASE, Medline, PsycINFO and PubMed. Adults diagnosed with cancer who had symptoms of PTSD were included.
Results
The initial search identified 182 records, and 11 studies were included in the final review. Psychological interventions varied; cognitive–behavioural therapy and eye movement desensitisation and reprocessing were perceived to be the most efficacious. The studies were also independently rated for methodological quality, which was found to be highly variable.
Clinical implications
There remains a lack of high-quality intervention studies for PTSD in cancer, and there is a wide range of approaches to managing the condition, with large heterogeneity in the cancer populations examined and the methodologies used. Studies that are designed with patient and public engagement and that tailor the PTSD intervention to the particular cancer population under investigation are required.
To determine risk factors for the development of long coronavirus disease 2019 (COVID-19) in healthcare personnel (HCP).
Methods:
We conducted a case–control study among HCP who had confirmed symptomatic COVID-19 working in a Brazilian healthcare system between March 1, 2020, and July 15, 2022. Cases were defined as those having long COVID according to the Centers for Disease Control and Prevention definition. Controls were defined as HCP who had documented COVID-19 but did not develop long COVID. Multiple logistic regression was used to assess the association between exposure variables and long COVID during 180 days of follow-up.
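A minimal sketch of the regression step is shown below, assuming a per-person data frame with indicator variables for the exposures of interest; the data are synthetic and the column names hypothetical, not the study's dataset. Coefficients are exponentiated to obtain adjusted odds ratios with 95% CIs.

```python
# Illustrative sketch (synthetic cohort): multiple logistic regression relating exposures
# to long COVID, reporting adjusted odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 7051
df = pd.DataFrame({
    "long_covid": rng.binomial(1, 0.27, n),
    "female":     rng.binomial(1, 0.75, n),
    "age":        rng.normal(38, 10, n),
    "reinfected": rng.binomial(1, 0.15, n),        # 2 or more SARS-CoV-2 infections
    "vax_doses":  rng.integers(0, 5, n),           # COVID-19 vaccine doses before infection
})

model = smf.logit("long_covid ~ female + age + reinfected + C(vax_doses)", data=df).fit(disp=0)
odds_ratios = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)
```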
Results:
Of 7,051 HCP diagnosed with COVID-19, 1,933 (27.4%) who developed long COVID were compared to 5,118 (72.6%) who did not. The majority of those with long COVID (51.8%) had 3 or more symptoms. Factors associated with the development of long COVID were female sex (OR, 1.21; 95% CI, 1.05–1.39), age (OR, 1.01; 95% CI, 1.00–1.02), and 2 or more SARS-CoV-2 infections (OR, 1.27; 95% CI, 1.07–1.50). Those infected with the SARS-CoV-2 δ (delta) variant (OR, 0.30; 95% CI, 0.17–0.50) or the SARS-CoV-2 o (omicron) variant (OR, 0.49; 95% CI, 0.30–0.78), and those receiving 4 COVID-19 vaccine doses prior to infection (OR, 0.05; 95% CI, 0.01–0.19) were significantly less likely to develop long COVID.
Conclusions:
Long COVID can be prevalent among HCP. Acquiring >1 SARS-CoV-2 infection was a major risk factor for long COVID, while maintenance of immunity via vaccination was highly protective.