SHEA, in partnership with ASGE, APIC, AAMI, AORN, HSPA, IDSA, SGNA, and The Joint Commission, developed this multisociety infection prevention guidance document for individuals and organizations that engage in sterilization or high-level disinfection (HLD). This document follows the CDC Guideline for Disinfection and Sterilization in Healthcare Facilities. This guidance is based on a synthesis of published scientific evidence, theoretical rationale, current practices, practical considerations, writing group consensus, and consideration of potential harm when applicable. The supplementary material includes a summary of recommendations. The guidance provides an overview of the Spaulding Classification and considerations around manufacturers’ instructions for use (MIFUs). Its recommendations address: point-of-use treatment prior to sterilization or HLD, preparation of reusable medical devices at the location of processing, sterilization, and immediate use steam sterilization (IUSS), HLD of lumened and non-lumened devices, processing of reusable medical devices used with lubricating or defoaming agents, monitoring for effectiveness of processing, handling of devices after HLD, augments and alternatives to HLD, processing of investigational devices, tracking of reusable medical devices, and approaches to implementation.
Studies show stimulant medications are effective across ADHD presentations (predominantly inattentive [IA], predominantly hyperactive-impulsive [HI], or combined [C]); however, few studies have evaluated nonstimulant efficacy in different ADHD presentations. Viloxazine ER (VLX ER) is a nonstimulant, FDA-approved medication for pediatric (≥6 years) and adult ADHD. This post-hoc analysis of 4 double-blind (DB), phase 3 clinical trials (2 in adolescents [NCT03247517 and NCT03247556], 2 in children [NCT03247530 and NCT03247543]) evaluates VLX ER efficacy by ADHD presentation, as derived from ADHD Rating Scale, 5th Edition (ADHD-RS-5) assessments at Baseline.
Methods
Children and adolescents with ADHD and an ADHD-RS-5 Total score ≥28 were eligible for enrollment. ADHD presentation was defined as a rating of ≥2 on at least 6 of the 9 ADHD-RS-5 inattention items, on at least 6 of the 9 hyperactive-impulsive items, or both. For each ADHD presentation, the change from Baseline (CFB) in ADHD-RS-5 Total score (the primary outcome in each study) was assessed using mixed models for repeated measures (MMRM). Responder rate (a secondary outcome), defined as a ≥50% reduction from Baseline in ADHD-RS-5 Total score, was analyzed using generalized estimating equations (GEE).
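The two analyses described above can be sketched with statsmodels in Python. This is a minimal illustration on synthetic data, not the trials' actual code: the column names, simulated effect sizes, and responder threshold are all assumptions, and a random-intercept mixed model stands in for the full MMRM covariance specification.

```python
# Sketch of the MMRM and GEE analyses described above, on a hypothetical
# long-format dataset (one row per subject-visit). All variable names and
# parameters are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_weeks = 200, 6
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_weeks),
    "week": np.tile(np.arange(1, n_weeks + 1), n_subjects),
    "treatment": np.repeat(rng.integers(0, 2, n_subjects), n_weeks),
})
# Simulated change from baseline (CFB) in ADHD-RS-5 Total score.
df["cfb"] = (-2.0 * df["week"] - 3.0 * df["treatment"]
             + rng.normal(0, 6, len(df)))

# Primary outcome: CFB via a mixed model for repeated measures
# (random intercept per subject as a simplified stand-in).
mmrm = smf.mixedlm("cfb ~ treatment * week", df, groups=df["subject"]).fit()
print(mmrm.summary())

# Secondary outcome: responder status analyzed with GEE and an
# exchangeable working correlation. The -14 cutoff is a placeholder.
df["responder"] = (df["cfb"] < -14).astype(int)
gee = smf.gee("responder ~ treatment * week", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```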
Results
Of 1354 subjects (placebo, N = 452; VLX ER, N = 902), ADHD presentation was assigned as 288 (21.3%) [IA], 1010 (74.5%) [C], 40 (3.0%) [HI], and 16 (1.2%) none of these. Due to the small [HI] sample, only the [IA] and [C] results are presented. At Week 6 (the pooled-data endpoint), ADHD-RS-5 Total scores were significantly improved for VLX ER relative to placebo for both the [IA] and [C] ADHD presentations. LS mean (SE) treatment differences were −3.1 (1.35), p = 0.0219, for [IA] and −5.8 (0.97), p < 0.0001, for [C]. Responder rates were also significantly higher for VLX ER (43.0% [IA] and 42.7% [C]) than for placebo (29.5% [IA] and 25.5% [C]; p = 0.0311 and p < 0.0001, respectively).
Conclusions
Viloxazine ER significantly reduced ADHD symptoms in individuals meeting criteria for the ADHD [IA] or [C] presentation at Baseline. Limitations include the post-hoc methodology, smaller sample sizes for the [IA] and [HI] groups, and the ADHD-RS-5 ≥28 eligibility requirement, which may favor enrollment of individuals with the ADHD [C] presentation over the [IA] or [HI] presentations. Consistency of response during long-term use should be evaluated.
Inpatient antibiotic use increased during the early phases of the COVID-19 pandemic. We sought to determine whether these changes persisted in persons with and without COVID-19 infection.
Design:
Retrospective cohort analysis.
Setting:
108 Veterans Affairs (VA) facilities.
Patients:
Persons receiving acute inpatient care from January 2016 to October 2022.
Methods:
Data on antibacterial use, patient days present, and COVID-19 care were extracted from the VA Corporate Data Warehouse. Days of therapy (DOT) per 1000 days present (DP) were calculated and stratified by Centers for Disease Control and Prevention-defined antibiotic classes.
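The DOT-per-1,000-DP metric described above is a simple normalized rate. A minimal sketch follows, with illustrative facility names and counts rather than VA data:

```python
# Minimal sketch of the antimicrobial-use metric described above:
# days of therapy (DOT) per 1,000 days present (DP), stratified by
# antibiotic class. Facility names and counts are illustrative only.
import pandas as pd

records = pd.DataFrame({
    "facility": ["A", "A", "B", "B"],
    "abx_class": ["broad-spectrum", "narrow-spectrum"] * 2,
    "dot": [5200, 3100, 2400, 1900],           # total days of therapy
    "days_present": [9800, 9800, 5100, 5100],  # denominator per facility
})
records["dot_per_1000dp"] = 1000 * records["dot"] / records["days_present"]
print(records[["facility", "abx_class", "dot_per_1000dp"]])
```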
Results:
Antibiotic use increased from 534 DOT/1000 DP in 11/2019–2/2020 to 588 DOT/1000 DP in 3/2020–4/2020. Subsequently, antibiotic use decreased such that total DOT/1000 DP was 2% lower in 2020 as a whole than in 2019. Driven by treatment for community-acquired pneumonia, antibiotic use was 30% higher in persons with COVID-19 than in uninfected persons in 3/2020–4/2020, but only 4% higher for the remainder of 2020. In 2022, system-wide antibiotic use was 9% lower in persons with COVID-19; however, antibiotic use remained higher in persons with COVID-19 in 25% of facilities.
Discussion:
Although antibiotic use increased during the early phases of the COVID-19 pandemic, overall use subsequently decreased to below previous baseline levels and, in 2022, was less in persons with COVID-19 than in persons without COVID-19. However, further work needs to be done to address variances across facilities and to determine whether current levels of antibiotic use in persons with COVID-19 are justified.
What explains right-wing radicalization in the United States? Existing research emphasizes demographic changes, economic insecurity, and elite polarization. This paper highlights an additional factor: the impact of foreign wars on society at home. We argue that communities that bear the greatest costs of foreign wars are prone to higher rates of right-wing radicalization. To support this claim, we present robust correlations between activity on Parler, a predominantly right-wing social media platform, and fatalities among residents who served in U.S. wars in Iraq and Afghanistan, at both the county and census-tract levels. The findings contribute to understanding right-wing radicalization in the US in two key respects. First, the paper examines widespread, nonviolent radical-right activity that, because it is less provocative than protest and violence, has eluded systematic measurement. Second, it highlights that U.S. foreign wars have important implications for domestic politics beyond partisanship and voting, potentially including radicalization.
Knowledge graphs have become a common approach for knowledge representation. Yet, the application of graph methodology is elusive due to the sheer number and complexity of knowledge sources. In addition, semantic incompatibilities hinder efforts to harmonize and integrate across these diverse sources. As part of The Biomedical Translator Consortium, we have developed a knowledge graph–based question-answering system designed to augment human reasoning and accelerate translational scientific discovery: the Translator system. We have applied the Translator system to answer biomedical questions in the context of a broad array of diseases and syndromes, including Fanconi anemia, primary ciliary dyskinesia, multiple sclerosis, and others. A variety of collaborative approaches have been used to research and develop the Translator system. One recent approach involved the establishment of a monthly “Question-of-the-Month (QotM) Challenge” series. Herein, we describe the structure of the QotM Challenge; the six challenges that have been conducted to date on drug-induced liver injury, cannabidiol toxicity, coronavirus infection, diabetes, psoriatic arthritis, and ATP1A3-related phenotypes; the scientific insights that have been gleaned during the challenges; and the technical issues that were identified over the course of the challenges and that can now be addressed to foster further development of the prototype Translator system. We close with a discussion on Large Language Models such as ChatGPT and highlight differences between those models and the Translator system.
To describe national trends in testing and detection of carbapenemases produced by carbapenem-resistant Enterobacterales (CRE) and to associate testing with culture and facility characteristics.
Design:
Retrospective cohort study.
Setting:
Department of Veterans’ Affairs medical centers (VAMCs).
Participants:
Patients seen at VAMCs between 2013 and 2018 with cultures positive for CRE, defined by national VA guidelines.
Interventions:
Microbiology and clinical data were extracted from national VA data sets. Carbapenemase testing was summarized using descriptive statistics. Characteristics associated with carbapenemase testing were assessed with bivariate analyses.
Results:
Of 5,778 standard cultures that grew CRE, 1,905 (33.0%) had evidence of molecular or phenotypic carbapenemase testing, and 1,603 (84.1%) of these had carbapenemases detected. Among these cultures confirmed as carbapenemase-producing CRE, 1,053 (65.7%) had molecular testing for ≥1 gene. Almost all testing included KPC (n = 1,047, 99.4%), with KPC detected in 914 of 1,047 (87.3%) cultures. Testing and detection of other enzymes was less frequent. Carbapenemase testing increased over the study period, from 23.5% of CRE cultures in 2013 to 58.9% in 2018. The South (38.6%) and Northeast (37.2%) US Census regions had the highest proportions of CRE cultures with carbapenemase testing. High-complexity (vs low-complexity) and urban (vs rural) facilities were significantly associated with carbapenemase testing (P < .0001).
Conclusions:
Between 2013 and 2018, carbapenemase testing and detection increased in the VA, largely reflecting increased testing and detection of KPC. Surveillance of other carbapenemases is important due to global spread and increasing antibiotic resistance. Efforts supporting the expansion of carbapenemase testing to low-complexity, rural healthcare facilities and standardization of reporting of carbapenemase testing are needed.
To assess the validity of antigen rapid diagnostic tests (Ag-RDTs) for SARS-CoV-2 as a decision-support tool in various hospital-based clinical settings.
Design:
Retrospective cohort study among symptomatic and asymptomatic patients and healthcare workers (HCWs).
Setting:
A large tertiary teaching medical center serving as a major COVID-19 hospitalizing facility.
Participants and Methods:
Ag-RDT performance was assessed in three clinical settings: (1) symptomatic patients and HCWs presenting at the emergency departments; (2) asymptomatic patients screened upon hospitalization; and (3) HCWs of all sectors tested at the HCW clinic following exposure.
Results:
We obtained 5172 samples from 4595 individuals, who had both Ag-RDT and quantitative real-time PCR (qRT-PCR) results available. Of these, 485 samples were positive by qRT-PCR. The positive percent agreement (PPA) of Ag-RDT was greater for lower cycle threshold (Ct) values, reaching 93% in cases where Ct-value was <25 and 85% where Ct-value was <30. PPA was similar between symptomatic and asymptomatic individuals. We observed a significant correlation between Ct-value and time from infection onset (p<0.001).
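PPA here is the fraction of qRT-PCR-positive samples that the Ag-RDT also called positive, computed within Ct strata. A minimal sketch on synthetic data follows; the Ct distribution and detection model are assumptions, not the study's data:

```python
# Sketch of the positive-percent-agreement (PPA) calculation described
# above: among qRT-PCR-positive samples, PPA = Ag-RDT positives / PCR
# positives, stratified by Ct cutoff. Data are synthetic placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 485  # number of PCR-positive samples, as in the cohort above
pcr_pos = pd.DataFrame({"ct": rng.uniform(12, 38, n)})
# Hypothetical detection model: rapid tests detect low-Ct samples more often.
pcr_pos["ag_rdt_pos"] = rng.random(n) < np.clip(1.6 - pcr_pos["ct"] / 25, 0, 1)

for cutoff in (25, 30):
    subset = pcr_pos[pcr_pos["ct"] < cutoff]
    ppa = subset["ag_rdt_pos"].mean()
    print(f"PPA for Ct < {cutoff}: {ppa:.1%} (n = {len(subset)})")
```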
Conclusions:
Ag-RDTs are highly sensitive during the infectious stage of COVID-19, manifested by either a high viral load (lower Ct) or proximity to infection onset, whether the patient is symptomatic or asymptomatic. Thus, this simple-to-use and inexpensive detection method can be used as a decision-support tool in various in-hospital clinical settings, assisting patient flow and maintaining sufficient hospital staffing.
Evidence suggests a link between smaller hippocampal volume (HV) and post-traumatic stress disorder (PTSD). However, there has been little prospective research testing this question directly and it remains unclear whether smaller HV confers risk or is a consequence of traumatization and PTSD.
Methods
U.S. soldiers (N = 107) completed a battery of clinical assessments, including structural magnetic resonance imaging, pre-deployment. Once deployed, they completed monthly assessments of traumatic stressors and symptoms. We hypothesized that smaller HV would potentiate the effects of traumatic stressors on PTSD symptoms in theater. Analyses evaluated whether total HV, lateral (right v. left) HV, or HV asymmetry (right − left) moderated the effects of stressor exposure during deployment on PTSD symptoms.
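The moderation analyses described above hinge on stressor × HV interaction terms. A minimal sketch on synthetic data, assuming hypothetical variable names and a random-intercept mixed model rather than the authors' exact specification:

```python
# Illustrative sketch of the moderation analysis described above: does
# hippocampal volume (HV) moderate the effect of monthly traumatic-stressor
# exposure on PTSD symptoms? All data and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_soldiers, n_months = 107, 9
df = pd.DataFrame({
    "soldier": np.repeat(np.arange(n_soldiers), n_months),
    "stressors": rng.poisson(2, n_soldiers * n_months),
})
# Pre-deployment volumes, standardized; one value per soldier.
df["hv_right"] = np.repeat(rng.normal(0, 1, n_soldiers), n_months)
df["hv_left"] = np.repeat(rng.normal(0, 1, n_soldiers), n_months)
df["ptsd"] = (2.0 * df["stressors"]
              - 0.4 * df["stressors"] * df["hv_right"]
              + rng.normal(0, 3, len(df)))

# The stressors:HV interaction terms carry the moderation hypothesis.
model = smf.mixedlm("ptsd ~ stressors * hv_right + stressors * hv_left",
                    df, groups=df["soldier"]).fit()
print(model.params[["stressors:hv_right", "stressors:hv_left"]])
```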
Results
Findings revealed no interaction between total HV and average monthly traumatic stressors on PTSD symptoms (b = −0.028, p = 0.681; 95% confidence interval [CI] −0.167 to 0.100). However, in the context of greater exposure to average monthly traumatic stressors, greater right HV was associated with fewer PTSD symptoms (b = −0.467, p = 0.023; 95% CI −0.786 to −0.013), whereas greater left HV was unexpectedly associated with greater PTSD symptoms (b = 0.435, p = 0.024; 95% CI 0.028–0.715).
Conclusions
Our findings highlight the importance of considering the complex role of HV, in particular HV asymmetry, in predicting the emergence of PTSD symptoms in response to war-zone trauma.
The federal Safe Drinking Water Act (SDWA), as amended in 1996, enables benefit-cost analysis (BCA) to be used in setting federal drinking water standards, known as maximum contaminant levels (MCLs). While BCAs are typically conceived of as a tool to inform efficiency considerations by helping to identify MCL options that maximize net social benefits, in this paper we also illustrate how important equity and affordability considerations can be brought to light by suitably applying BCAs to drinking water regulations, especially in the context of communities served by relatively small water systems. We examine the applicability and relevance of health-health analysis (HHA) and provide an empirical evaluation of the risk tradeoffs that may be associated with the MCL established for arsenic. We find that the cost-associated risks may offset a nontrivial portion of the cancer risk reduction benefits attributed to the MCL (e.g., the additional adverse health impacts from the costs may be roughly half as large as the number of cancer cases avoided). This finding reveals the relevance of the HHA approach for examining net benefits of MCLs in small drinking water utilities, and it raises issues related to whether and how these cost-associated health risks should be considered in BCAs for drinking water standards.
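The HHA logic can be illustrated with back-of-the-envelope arithmetic: compliance costs are converted into induced health risk via a cost-per-induced-case threshold and compared against the direct risk reduction. All numbers below are placeholders, not the paper's estimates:

```python
# Back-of-the-envelope sketch of the health-health analysis (HHA) logic
# described above. Every figure here is a hypothetical placeholder chosen
# only to make the offset-fraction idea concrete.
annual_compliance_cost = 60e6   # hypothetical cost of meeting the MCL ($/yr)
cost_per_induced_case = 20e6    # hypothetical income-risk threshold ($/case)
cancer_cases_avoided = 6.0      # hypothetical direct benefit (cases/yr)

induced_cases = annual_compliance_cost / cost_per_induced_case
offset_fraction = induced_cases / cancer_cases_avoided
print(f"Cost-induced health burden: {induced_cases:.1f} cases/yr")
print(f"Offset fraction of direct benefits: {offset_fraction:.0%}")
```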
Antibiotic prescribing practices across the Veterans’ Health Administration (VA) experienced significant shifts during the coronavirus disease 2019 (COVID-19) pandemic. From 2015 to 2019, antibiotic use between January and May decreased from 638 to 602 days of therapy (DOT) per 1,000 days present (DP), while the corresponding months in 2020 saw antibiotic utilization rise to 628 DOT per 1,000 DP.
A survey of Veterans’ Affairs Medical Centers on control of carbapenem-resistant Enterobacteriaceae (CRE) and carbapenemase-producing CRE (CP-CRE) demonstrated that most facilities use VA guidelines, but few screen for CRE/CP-CRE colonization regularly or regularly communicate CRE/CP-CRE status at patient transfer. Most respondents were knowledgeable about CRE guidelines but cited a lack of adequate resources.
The following position statement from the Union of the European Phoniatricians, updated on 25th May 2020 (superseding the previous statement issued on 21st April 2020), contains a series of recommendations for phoniatricians and ENT surgeons who provide and/or run voice, swallowing, speech and language, or paediatric audiology services.
Objectives
This material specifically aims to inform clinical practices in countries where clinics and operating theatres are reopening for elective work. It endeavours to present a current European view in relation to common procedures, many of which fall under the aegis of aerosol-generating procedures.
Conclusion
As evidence continues to build, some of the recommended practices will undoubtedly evolve, but it is hoped that the updated position statement will offer clinicians precepts on safe clinical practice.
There is significant interest in the use of angiotensin converting enzyme inhibitors (ACE-I) and angiotensin II receptor blockers (ARB) in coronavirus disease 2019 (COVID-19) and concern over potential adverse effects since these medications upregulate the severe acute respiratory syndrome coronavirus 2 host cell entry receptor ACE2. Recent studies on ACE-I and ARB in COVID-19 were limited by excluding outpatients, excluding patients by age, analyzing ACE-I and ARB together, imputing missing data, and/or diagnosing COVID-19 by chest computed tomography without definitive reverse transcription polymerase chain reaction (RT-PCR), all of which are addressed here.
Methods:
We performed a retrospective cohort study of 1023 COVID-19 patients diagnosed by RT-PCR at Stanford Hospital through April 8, 2020 with a minimum follow-up time of 14 days to investigate the association between ACE-I or ARB use with outcomes.
Results:
Use of ACE-I or ARB medications was not associated with increased risk of hospitalization, intensive care unit admission, or death. Compared to patients with charted past medical history, there was a lower risk of hospitalization for patients on ACE-I (odds ratio [OR] 0.43; 95% confidence interval [CI] 0.19–0.97; P = 0.0426) and ARB (OR 0.39; 95% CI 0.17–0.90; P = 0.0270). Compared to patients with hypertension not on ACE-I or ARB, patients on ARB medications had a lower risk of hospitalization (OR 0.09; 95% CI 0.01–0.88; P = 0.0381).
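Estimates like these typically come from a logistic regression of the outcome on medication use, with covariates as needed. A minimal sketch on synthetic data, with hypothetical variable names and effect sizes rather than the study's actual model:

```python
# Sketch of the kind of association estimate reported above: an odds ratio
# (OR) with 95% CI from a logistic regression of hospitalization on
# medication use. Data, covariates, and coefficients are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1023
df = pd.DataFrame({
    "on_arb": rng.integers(0, 2, n),
    "age": rng.normal(62, 15, n),
})
logit_p = -1.0 - 0.9 * df["on_arb"] + 0.02 * (df["age"] - 62)
df["hospitalized"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("hospitalized ~ on_arb + age", df).fit(disp=False)
or_est = np.exp(fit.params["on_arb"])
ci = np.exp(fit.conf_int().loc["on_arb"])
print(f"OR {or_est:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```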
Conclusions:
These findings suggest that the use of ACE-I and ARB is not associated with adverse outcomes and may be associated with improved outcomes in COVID-19, which is immediately relevant to care of the many patients on these medications.
Given the rapidly progressing coronavirus disease 2019 (COVID-19) pandemic, this report on a US cohort of 54 COVID-19 patients from Stanford Hospital, with data on risk factors for severe disease obtained at initial clinical presentation, is immediately clinically relevant. We identified low presenting oxygen saturation as predictive of severe disease outcomes, such as diagnosis of pneumonia, acute respiratory distress syndrome, and admission to the intensive care unit, and we also replicated data from China suggesting an association between hypertension and disease severity. Clinicians will benefit from tools to rapidly risk-stratify patients at presentation by likelihood of progression to severe disease.
Aberrant activity of the subcallosal cingulate (SCC) is a common theme across pharmacologic treatment efficacy prediction studies. The functioning of the SCC in psychotherapeutic interventions is relatively understudied, as are functional differences among SCC subdivisions. We conducted resting-state functional connectivity (rsFC) analyses on resting-state functional magnetic resonance imaging (fMRI) data, collected before and after a course of cognitive behavioral therapy (CBT) in patients with major depressive disorder (MDD), using seeds from three SCC subdivisions.
Methods.
Resting-state data were collected from unmedicated patients with current MDD (Hamilton Depression Rating Scale-17 score > 16) before and after 14 sessions of CBT monotherapy. Treatment outcome was assessed using the Beck Depression Inventory (BDI). Rostral anterior cingulate (rACC), anterior subcallosal cingulate (aSCC), and Brodmann’s area 25 (BA25) masks were used as seeds in connectivity analyses that assessed baseline rsFC and symptom severity, changes in connectivity related to symptom improvement after CBT, and prediction of treatment outcomes using whole-brain baseline connectivity.
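At its core, seed-based rsFC correlates a seed region's mean time series with every other voxel's time series and Fisher z-transforms the result. A schematic sketch on synthetic arrays (real analyses operate on masked, preprocessed NIfTI data):

```python
# Schematic sketch of seed-based resting-state functional connectivity
# (rsFC): correlate a seed's mean time series with every voxel's time
# series, then Fisher z-transform. Arrays here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
n_timepoints, n_voxels = 180, 5000
voxel_ts = rng.normal(size=(n_timepoints, n_voxels))  # preprocessed BOLD
seed_mask = np.zeros(n_voxels, dtype=bool)
seed_mask[:40] = True  # hypothetical seed voxels (e.g., a BA25 mask)

seed_ts = voxel_ts[:, seed_mask].mean(axis=1)

# Pearson correlation of seed with each voxel via standardized time series.
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
vox_z = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
r_map = vox_z.T @ seed_z / n_timepoints
z_map = np.arctanh(np.clip(r_map, -0.999999, 0.999999))
print(z_map.shape)  # one connectivity value per voxel
```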
Results.
Pretreatment BDI scores negatively correlated with pretreatment rACC ~ dorsolateral prefrontal cortex and aSCC ~ lateral prefrontal cortex rsFC. In a region-of-interest longitudinal analysis, rsFC between these regions increased post-treatment (p < 0.05, FDR-corrected). In whole-brain analyses, BA25 ~ paracentral lobule and rACC ~ paracentral lobule connectivity decreased post-treatment. Whole-brain baseline rsFC with SCC did not predict clinical improvement.
Conclusions.
rsFC features of rACC and aSCC, but not BA25, correlated inversely with baseline depression severity and increased following CBT. Subdivisions of the SCC involved in top-down emotion regulation may be more involved in cognitive interventions, whereas BA25 may be more informative for interventions targeting bottom-up processing. These results emphasize the importance of subdividing the SCC in connectivity analyses.
To evaluate the National Healthcare Safety Network (NHSN) hospital-onset Clostridioides difficile infection (HO-CDI) standardized infection ratio (SIR) risk adjustment for general acute-care hospitals with large numbers of intensive care unit (ICU), oncology unit, and hematopoietic cell transplant (HCT) patients.
Design:
Retrospective cohort study.
Setting:
Eight tertiary-care referral general hospitals in California.
Methods:
We used FY 2016 data and the published 2015 rebaseline NHSN HO-CDI SIR. We compared facility-wide inpatient HO-CDI events and SIRs, with and without ICU data, oncology and/or HCT unit data, and ICU bed adjustment.
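The SIR is simply observed events divided by NHSN-predicted events, so the comparisons above amount to recomputing both numerator and denominator with and without ICU data and the ICU bed adjustment. A sketch with illustrative numbers, chosen only to echo the magnitudes reported below rather than taken from any facility:

```python
# Sketch of the standardized infection ratio (SIR) arithmetic underlying
# the comparison above: SIR = observed HO-CDI events / NHSN-predicted
# events, recomputed after excluding ICU-attributable events and the ICU
# bed adjustment's contribution to the prediction. Numbers are illustrative.
observed_total = 120          # hypothetical facility-wide HO-CDI events
predicted_total = 96.8        # hypothetical prediction with ICU adjustment
observed_icu = 30             # events attributed to ICU locations
predicted_without_icu = 35.6  # hypothetical prediction w/o ICU data/adjustment

sir_full = observed_total / predicted_total
sir_no_icu = (observed_total - observed_icu) / predicted_without_icu
print(f"SIR with ICU data and bed adjustment: {sir_full:.2f}")
print(f"SIR without ICU data or adjustment:   {sir_no_icu:.2f}")
# Removing ICU data cuts the numerator (about -25% in the study) yet the
# SIR roughly doubles, because the ICU bed adjustment inflates the
# predicted denominator far more than the ICU events themselves contribute.
```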
Results:
For these hospitals, the median unmodified HO-CDI SIR was 1.24 (interquartile range [IQR], 1.15–1.34); 7 hospitals qualified for the highest ICU bed adjustment; 1 hospital received the second highest ICU bed adjustment; and all had oncology-HCT units with no additional adjustment per the NHSN. Removal of ICU data and the ICU bed adjustment decreased HO-CDI events (median, −25%; IQR, −20% to −29%) but increased the SIR at all hospitals (median, 104%; IQR, 90%–105%). Removal of oncology-HCT unit data decreased HO-CDI events (median, −15%; IQR, −14% to −21%) and decreased the SIR at all hospitals (median, −8%; IQR, −4% to −11%).
Conclusions:
For tertiary-care referral hospitals with specialized ICUs and a large number of ICU beds, the ICU bed adjustor functions as a global adjustment in the SIR calculation, accounting for the increased complexity of patients in ICUs and non-ICUs at these facilities. However, the SIR decrease with removal of oncology and HCT unit data, even with the ICU bed adjustment, suggests that an additional adjustment should be considered for oncology and HCT units within general hospitals, perhaps similar to what is done for ICU beds in the current SIR.
Although death by neurologic criteria (brain death) is legally recognized throughout the United States, state laws and clinical practice vary concerning three key issues: (1) the medical standards used to determine death by neurologic criteria, (2) management of family objections before determination of death by neurologic criteria, and (3) management of religious objections to declaration of death by neurologic criteria. The American Academy of Neurology and other medical stakeholder organizations involved in the determination of death by neurologic criteria have undertaken concerted action to address variation in clinical practice in order to ensure the integrity of brain death determination. To complement this effort, state policymakers must revise legislation on the use of neurologic criteria to declare death. We review the legal history and current laws regarding neurologic criteria to declare death and offer proposed revisions to the Uniform Determination of Death Act (UDDA) and the rationale for these recommendations.