During the COVID-19 pandemic, the United States Centers for Disease Control and Prevention provided strategies, such as extended use and reuse, to preserve N95 filtering facepiece respirators (FFR). We aimed to assess the prevalence of N95 FFR contamination with SARS-CoV-2 among healthcare personnel (HCP) in the Emergency Department (ED).
Design:
Real-world, prospective, multicenter cohort study. N95 FFR contamination (primary outcome) was measured by real-time quantitative polymerase chain reaction. Multiple logistic regression was used to assess factors associated with contamination.
Setting:
Six academic medical centers.
Participants:
ED HCP who practiced N95 FFR reuse and extended use during the COVID-19 pandemic between April 2021 and July 2022.
Primary exposure:
Total number of COVID-19-positive patients treated.
Results:
Two hundred forty-five N95 FFRs were tested. Forty-four N95 FFRs (18.0%, 95% CI 13.4, 23.3) were contaminated with SARS-CoV-2 RNA. The number of patients seen with COVID-19 was associated with N95 FFR contamination (adjusted odds ratio, 2.3 [95% CI 1.5, 3.6]). Wearing either surgical masks or face shields over FFRs was not associated with FFR contamination, and FFR contamination prevalence remained high when using these adjuncts [face shields: 25% (16/64), surgical masks: 22% (23/107)].
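The 95% confidence interval reported for the contamination prevalence (44 of 245 FFRs) can be reproduced almost exactly with a standard binomial interval. The sketch below uses the Wilson score interval (the paper's exact method is not stated; an exact Clopper-Pearson interval gives the reported 13.4–23.3):

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# 44 contaminated FFRs out of 245 tested
lo, hi = wilson_ci(44, 245)
print(f"{44/245:.1%} (95% CI {lo:.1%}, {hi:.1%})")  # → 18.0% (95% CI 13.7%, 23.2%)
```

The Wilson interval is slightly narrower than the exact interval at this sample size but agrees to within a few tenths of a percentage point.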
Conclusions:
Exposure to patients with known COVID-19 was independently associated with N95 FFR contamination. Face shields and overlying surgical masks were not associated with N95 FFR contamination. N95 FFR reuse and extended use should be avoided due to the increased risk of contact exposure from contaminated FFRs.
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
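At its core, a polygenic risk score is a weighted sum of an individual's risk-allele dosages, with per-variant weights (betas) taken from a discovery GWAS. A minimal sketch of the computation follows; the SNP identifiers and effect sizes are hypothetical illustrations, not values from this study:

```python
# Minimal polygenic risk score (PRS): weighted sum of risk-allele
# dosages (0, 1, or 2 copies) using per-SNP effect sizes (betas).
# SNP IDs and betas below are hypothetical, for illustration only.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

def prs(dosages, betas):
    """Score an individual: sum of beta_i * dosage_i over shared SNPs."""
    return sum(betas[snp] * d for snp, d in dosages.items() if snp in betas)

individual = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(prs(individual, effect_sizes))  # 0.12*2 - 0.05*1 + 0.08*0 = 0.19
```

Real pipelines add steps this sketch omits (LD clumping or shrinkage of betas, strand/allele harmonization, and normalization of scores within a cohort), but the core quantity being compared between BPD and MDD cases is this weighted sum.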
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients primarily lie on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
The COVID-19 pandemic has had major direct (e.g., deaths) and indirect (e.g., social inequities) effects in the United States. While the public health response to the pandemic featured some important successes (e.g., universal masking, and rapid development and approval of vaccines and therapeutics), there were systemic failures (e.g., inadequate public health infrastructure) that overshadowed these successes. Key deficiencies in the U.S. response were shortages of personal protective equipment (PPE) and supply chain failures. Recommendations are provided for mitigating supply shortages and supply chain failures in healthcare settings in future pandemics. Key recommendations for preventing shortages of essential components of infection prevention and control include increasing the stockpile of PPE in the U.S. National Strategic Stockpile, increased transparency of the Stockpile, invoking the Defense Production Act at an early stage, and rapid review and authorization by FDA/EPA/OSHA of non-U.S.-approved products. Recommendations are also provided for mitigating shortages of diagnostic testing, medications, and medical equipment.
Throughout the COVID-19 pandemic, many areas in the United States experienced healthcare personnel (HCP) shortages tied to a variety of factors. Infection prevention programs, in particular, faced increasing workload demands with little opportunity to delegate tasks to others without specific infectious diseases or infection control expertise. Shortages of clinicians providing inpatient care to critically ill patients during the early phase of the pandemic were multifactorial, largely attributed to increasing demands on hospitals to provide care to patients hospitalized with COVID-19 and to furloughs.1 HCP shortages and challenges during later surges, including the Omicron variant-associated surges, were largely attributed to HCP infections and associated work restrictions during isolation periods and the need to care for family members, particularly children, with COVID-19. Additionally, the detrimental physical and mental health impact of COVID-19 on HCP has led to attrition, which further exacerbates shortages.2 Demands increased in post-acute and long-term care (PALTC) settings, which already faced critical staffing challenges, difficulty with recruitment, and high rates of turnover. Although individual healthcare organizations and state and federal governments have taken actions to mitigate recurring shortages, additional work and innovation are needed to develop longer-term solutions to improve healthcare workforce resiliency. The critical role of those with specialized training in infection prevention, including healthcare epidemiologists, was well demonstrated in pandemic preparedness and response. The COVID-19 pandemic underscored the need to support growth in these fields.3 This commentary outlines the need to develop the US healthcare workforce in preparation for future pandemics.
Throughout history, pandemics and their aftereffects have spurred society to make substantial improvements in healthcare. After the Black Death in 14th century Europe, changes were made to elevate standards of care and nutrition that resulted in improved life expectancy.1 The 1918 influenza pandemic spurred a movement that emphasized public health surveillance and detection of future outbreaks and eventually led to the creation of the World Health Organization Global Influenza Surveillance Network.2 In the present, the COVID-19 pandemic exposed many of the pre-existing problems within the US healthcare system, which included (1) a lack of capacity to manage a large influx of contagious patients while simultaneously maintaining routine and emergency care to non-COVID patients; (2) a “just in time” supply network that led to shortages and competition among hospitals, nursing homes, and other care sites for essential supplies; and (3) longstanding inequities in the distribution of healthcare and the healthcare workforce. The decades-long shift from domestic manufacturing to a reliance on global supply chains has compounded ongoing gaps in preparedness for supplies such as personal protective equipment and ventilators. Inequities in racial and socioeconomic outcomes highlighted during the pandemic have accelerated the call to focus on diversity, equity, and inclusion (DEI) within our communities. The pandemic accelerated cooperation between government entities and the healthcare system, resulting in swift implementation of mitigation measures, new therapies and vaccinations at unprecedented speeds, despite our fragmented healthcare delivery system and political divisions. 
Still, widespread misinformation or disinformation and political divisions contributed to eroded trust in the public health system and prevented an even uptake of mitigation measures, vaccines and therapeutics, impeding our ability to contain the spread of the virus in this country.3 Ultimately, the lessons of COVID-19 illustrate the need to better prepare for the next pandemic. Rising microbial resistance, emerging and re-emerging pathogens, increased globalization, an aging population, and climate change are all factors that increase the likelihood of another pandemic.4
Two rapid methods for the decomposition and chemical analysis of clays were adapted for use with 20–40-mg size samples, typical amounts of ultrafine products (<0.5-μm diameter) obtained by modern separation methods for clay minerals. The results of these methods were compared with those of “classical” rock analyses. The two methods consisted of mixed lithium metaborate fusion and heated decomposition with HF in a closed vessel. The latter technique was modified to include subsequent evaporation with concentrated H2SO4 and re-solution in HCl, which reduced the interference of the fluoride ion in the determination of Al, Fe, Ca, Mg, Na, and K. Results from the two methods agree sufficiently well with those of the “classical” techniques to minimize error in the calculation of clay mineral structural formulae. Representative maximum variations, in atoms per unit formula of the smectite type based on 22 negative charges, are 0.09 for Si, 0.03 for Al, 0.015 for Fe, 0.07 for Mg, 0.03 for Na, and 0.01 for K.
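The structural-formula calculation referenced above normalizes cation abundances from an oxide analysis so that total cation charge balances the 22 negative charges of the smectite O10(OH)2 anion framework. A sketch of that arithmetic follows; the oxide wt% values are a hypothetical montmorillonite-like analysis, not data from the study:

```python
# Cations per smectite formula unit, normalized to 22 negative charges
# (the O10(OH)2 framework). For each oxide:
# (molecular weight, cations per oxide formula, cation valence)
OXIDES = {
    "SiO2":  (60.08, 1, 4), "Al2O3": (101.96, 2, 3),
    "Fe2O3": (159.69, 2, 3), "MgO":   (40.30, 1, 2),
    "CaO":   (56.08, 1, 2),  "Na2O":  (61.98, 2, 1),
    "K2O":   (94.20, 2, 1),
}

def formula_atoms(wt_pct):
    """Atoms per formula unit: scale so total cation charge equals 22."""
    cat_moles = {ox: wt / OXIDES[ox][0] * OXIDES[ox][1]
                 for ox, wt in wt_pct.items()}
    total_charge = sum(m * OXIDES[ox][2] for ox, m in cat_moles.items())
    scale = 22.0 / total_charge
    return {ox: m * scale for ox, m in cat_moles.items()}

# Hypothetical montmorillonite-like analysis (oxide wt%), illustration only
analysis = {"SiO2": 62.9, "Al2O3": 19.6, "Fe2O3": 3.35,
            "MgO": 3.05, "CaO": 1.68, "Na2O": 1.53, "K2O": 0.53}
atoms = formula_atoms(analysis)
print({ox: round(a, 3) for ox, a in atoms.items()})  # Si comes out near 4
```

The maximum variations quoted in the abstract (e.g., 0.09 atoms for Si) are uncertainties on exactly these normalized cation counts, so analytical error in any single oxide propagates through the charge-balance scale factor.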
Preoperatively, the patient will transition through different depths of anesthesia, from the levels of sedation to general anesthesia (GA). Sedation is a continuum that ranges from minimal anxiolysis to moderate and deep sedation. Moderate sedation is defined by the patient remaining asleep but easily arousable. Deep sedation is achieved when the patient is arousable only to painful stimulation. GA refers to a medically induced loss of consciousness with concurrent loss of protective reflexes and skeletal muscle relaxation. GA is most commonly achieved via induction with intravenous sedatives and analgesics, followed by maintenance with volatile anesthetics [1]. Table 9.1 lists the depths of anesthesia and their associated characteristics.
We present the Cosmological Double Radio Active Galactic Nuclei (CosmoDRAGoN) project: a large suite of simulated AGN jets in cosmological environments. These environments sample the intra-cluster media of galaxy clusters that form in cosmological smooth particle hydrodynamics (SPH) simulations, which we then use as inputs for grid-based hydrodynamic simulations of radio jets. Initially conical jets are injected with a range of jet powers, speeds (both relativistic and non-relativistic), and opening angles; we follow their collimation and propagation on scales of tens to hundreds of kiloparsecs, and calculate spatially resolved synthetic radio spectra in post-processing. In this paper, we present a technical overview of the project, and key early science results from six representative simulations which produce radio sources with both core- (Fanaroff-Riley Type I) and edge-brightened (Fanaroff-Riley Type II) radio morphologies. Our simulations highlight the importance of accurate representation of both jets and environments for radio morphology, radio spectra, and feedback the jets provide to their surroundings.
Studies on humans that exploit contemporary data-intensive, high-throughput ‘omic’ assay technologies, such as genomics, transcriptomics, proteomics and metabolomics, have unequivocally revealed that humans differ greatly at the molecular level. These differences, which are compounded by each individual’s distinct behavioral and environmental exposures, impact individual responses to health interventions such as diet and drugs. Questions about the best way to tailor health interventions to individuals based on their nuanced genomic, physiologic, behavioral and other profiles have motivated the current emphasis on ‘precision’ medicine. This review’s purpose is to describe how the design and execution of N-of-1 (or personalized) multivariate clinical trials can advance the field. Such trials focus on individual responses to health interventions from a whole-person perspective, leverage emerging health monitoring technologies, and can be used to address the most relevant questions in the precision medicine era, including how to validate biomarkers that may indicate appropriate activity of an intervention and how to identify interventions likely to benefit an individual. We also argue that multivariate N-of-1 and aggregated N-of-1 trials are ideal vehicles for advancing biomedical and translational science in the precision medicine era, since the insights gained from them can not only shed light on how to treat or prevent diseases generally, but also provide insight into how to provide real-time care to the very individuals who are seeking attention for their health concerns in the first place.
Childhood adversities (CAs) predict heightened risks of posttraumatic stress disorder (PTSD) and major depressive episode (MDE) among people exposed to adult traumatic events. Identifying which CAs put individuals at greatest risk for these adverse posttraumatic neuropsychiatric sequelae (APNS) is important for targeting prevention interventions.
Methods
Data came from n = 999 patients ages 18–75 presenting to 29 U.S. emergency departments after a motor vehicle collision (MVC) and followed for 3 months, the amount of time traditionally used to define chronic PTSD, in the Advancing Understanding of Recovery After Trauma (AURORA) study. Six CA types were self-reported at baseline: physical abuse, sexual abuse, emotional abuse, physical neglect, emotional neglect and bullying. Both dichotomous measures of ever experiencing each CA type and numeric measures of exposure frequency were included in the analysis. Risk ratios (RRs) of these CA measures as well as complex interactions among these measures were examined as predictors of APNS 3 months post-MVC. APNS was defined as meeting self-reported criteria for either PTSD based on the PTSD Checklist for DSM-5 and/or MDE based on the PROMIS Depression Short-Form 8b. We controlled for pre-MVC lifetime histories of PTSD and MDE. We also examined mediating effects through peritraumatic symptoms assessed in the emergency department and PTSD and MDE assessed in 2-week and 8-week follow-up surveys. Analyses were carried out with robust Poisson regression models.
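The study's risk ratios come from robust Poisson regression with covariate adjustment; the underlying quantity is easiest to see in the unadjusted two-group case. A sketch with hypothetical counts (not data from the study), using the standard Wald interval on the log scale:

```python
from math import exp, log, sqrt

def risk_ratio(a, n1, c, n2, z=1.96):
    """Risk ratio for a 2x2 table with a Wald 95% CI on the log scale.
    a/n1 = events/total among exposed; c/n2 = events/total among unexposed."""
    rr = (a / n1) / (c / n2)
    se = sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Hypothetical counts: 90 of 300 CA-exposed vs 45 of 300 unexposed develop APNS
rr, lo, hi = risk_ratio(90, 300, 45, 300)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

Robust (modified) Poisson regression generalizes this to multiple predictors, estimating log-RRs directly for a binary outcome while using a sandwich variance estimator to keep the standard errors valid.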
Results
Most participants (90.9%) reported at least rarely having experienced some CA. Ever experiencing each CA other than emotional neglect was univariably associated with 3-month APNS (RRs = 1.31–1.60). Each CA frequency was also univariably associated with 3-month APNS (RRs = 1.65–2.45). In multivariable models, joint associations of CAs with 3-month APNS were additive, with frequency of emotional abuse (RR = 2.03; 95% CI = 1.43–2.87) and bullying (RR = 1.44; 95% CI = 0.99–2.10) being the strongest predictors. Control variable analyses found that these associations were largely explained by pre-MVC histories of PTSD and MDE.
Conclusions
Although individuals who experience frequent emotional abuse and bullying in childhood have a heightened risk of experiencing APNS after an adult MVC, these associations are largely mediated by prior histories of PTSD and MDE.
Understanding the quality of seed dispersal effectiveness of frugivorous species can elucidate how endozoochory structures tropical forests. Large seeds, which contain more resources for growth, and gut passage by frugivores, which removes seed pulp, both typically enhance the speed and probability of germination of tropical seeds. However, the interaction of seed size and gut passage has not been well studied. We assessed the role of two species of toucans (Ramphastos spp.) in seed germination of the tropical tree Eugenia uniflora, which produces seeds that vary considerably in size (3.7–14.3 mm), using 151 control and 137 regurgitated seeds in germination trials. We found that toucan regurgitation did not significantly increase germination success (93.4% of regurgitated seeds germinated vs. 76.8% of control seeds); however, larger seeds germinated more often and at faster rates. Although only marginally significant, germination rates were 3.6× faster when seeds were both large and regurgitated by toucans, demonstrating that toucan regurgitation can disproportionately benefit larger E. uniflora seeds. As tropical forests are increasingly disturbed and fragmented by human activities, the ability of toucans to continue providing seed dispersal services to degraded habitats may be vital to the persistence of many tropical plants that have larger seeds and depend on larger dispersers.
As part of surveillance of snail-borne trematodiasis in Knowsley Safari (KS), Prescot, United Kingdom, a collection was made in July 2021 of various planorbid (n = 173) and lymnaeid (n = 218) snails. These were taken from 15 purposely selected freshwater habitats. In the laboratory emergent trematode cercariae, often from single snails, were identified by morphology with a sub-set, of those most accessible, later characterized by cytochrome oxidase subunit 1 (cox1) DNA barcoding. Two schistosomatid cercariae were of special note in the context of human cercarial dermatitis (HCD), Bilharziella polonica emergent from Planorbarius corneus and Trichobilharzia spp. emergent from Ampullacaena balthica. The former schistosomatid was last reported in the United Kingdom over 50 years ago. From cox1 analyses, the latter likely consisted of two taxa, Trichobilharzia anseri, a first report in the United Kingdom, and a hitherto unnamed genetic lineage having some affiliation with Trichobilharzia longicauda. The chronobiology of emergent cercariae from P. corneus was assessed, with the vertical swimming rate of B. polonica measured. We provide a brief risk appraisal of HCD for public activities typically undertaken within KS educational and recreational programmes.
Although pulmonary artery banding remains a useful palliation in bi-ventricular shunting lesions, single-stage repair holds several advantages. We investigate outcomes of the former approach in high-risk patients.
Methods:
Retrospective cohort study including all pulmonary artery banding procedures over 9 years, excluding single ventricle physiology and left ventricular training.
Results:
Banding was performed in 125 patients at a median age of 41 days (2–294) and weight of 3.4 kg (1.8–7.32). Staged repair was undertaken for significant co-morbidity in 81 (64.8%) and anatomical complexity in 44 (35.2%). The median hospital stay was 14 days (interquartile range 8–33.5), and 14 patients (11.2%) required anatomical repair before discharge. Nine patients died during the initial admission (hospital mortality 7.2%) and five following discharge (inter-stage mortality 4.8%). Of 105 banded patients who survived, 19 (18.1%) needed inter-stage re-admission and 18 (14.4%) required unplanned re-intervention. Full repair was performed in 93 (74.4%) at a median age of 13 months (3.1–49.9) and weight of 8.5 kg (3.08–16.8). Prior to banding, 54% were below the 0.4th weight centile, but only 28% remained so at repair. Post-repair, 5/93 (5.4%) developed heart block requiring a permanent pacemaker, and 11/93 (11.8%) required unplanned re-intervention. Post-repair mortality (including repairs during the initial admission) was 6/93 (6.5%), and overall mortality of the staged approach was 13.6% (17/125).
Conclusions:
In a cohort with a high incidence of co-morbidity, pulmonary artery banding is associated with a significant risk of re-intervention and mortality. Weight gain improves after banding, but heart block, re-intervention, and mortality remain frequent following repair.
Background: In patients with refractory epilepsy and possible autoimmune-associated epilepsy (AAE) but negative antibody testing, immunotherapy trials (IMT) may still be pursued. The value of IMT in such patients remains unclear; for this reason, we reviewed their immunotherapy responses. Methods: Retrospective review of antibody-negative epilepsy patients admitted to the Epilepsy Unit between 2018 and 2021 who received IMT (methylprednisolone [IVMP], intravenous immune globulin [IVIg], plasma exchange [PLEX], or rituximab). Patients were considered responders when their seizure reduction was ≥50%. Results: Fourteen patients were identified; 50% (n = 7) were female. Median age was 43.5 years (IQR 28.75–63.25). All were refractory to ≥2 anti-seizure medications (ASM). Median age at epilepsy onset was 39.5 years (IQR 23.75–60.25). Median time from diagnosis to immunotherapy was 15.5 months (IQR 12.75–21.75). Patients received IVIg + IVMP (35.7%, n = 5), IVIg alone (28.5%, n = 4), IVIg + IVMP + PLEX (21.4%, n = 3), IVMP alone (7.1%, n = 1), or IVIg + IVMP + rituximab (7.1%, n = 1). Median follow-up was 25 months. Although early responses to immunotherapy were common, a sustained response at last follow-up occurred in only 21.4% (n = 3). Factors confounding the determination of immunotherapy efficacy (e.g., concurrent changes in ASM) were present in all responders. Conclusions: Our findings suggest that IMT in patients with suspected AAE but negative antibody testing are largely unsuccessful. This suggests an insufficient therapeutic effect of IMT or, alternatively, non-immune-mediated mechanisms causing this type of epilepsy. Critical evaluation of IMT in such cases is needed.
We use the RIOTS4 sample of SMC field OB stars to determine the origin of massive runaways in this low-metallicity galaxy using Gaia proper motions, together with stellar masses obtained from RIOTS4 data. These data allow us to estimate the relative contributions of stars accelerated by the dynamical ejection vs. binary supernova mechanisms, since dynamical ejection favors faster, more massive runaways, while SN ejection favors the opposite trend. In addition, we use the frequencies of classical OBe stars, high-mass X-ray binaries, and non-compact binaries to discriminate between the mechanisms. Our results show that the dynamical mechanism dominates by a factor of 2–3. This also implies a significant contribution from two-step acceleration, which occurs when dynamically ejected binaries are followed by SN kicks. We update our published quantitative results from Gaia DR2 proper motions with new data from DR3.
Many short gamma-ray bursts (GRBs) originate from binary neutron star mergers, and there are several theories that predict the production of coherent, prompt radio signals either prior to, during, or shortly following the merger, as well as persistent pulsar-like emission from the spin-down of a magnetar remnant. Here we present a low frequency (170–200 MHz) search for coherent radio emission associated with nine short GRBs detected by the Swift and/or Fermi satellites using the Murchison Widefield Array (MWA) rapid-response observing mode. The MWA began observing these events within 30–60 s of their high-energy detection, enabling us to capture any dispersion-delayed signals emitted by short GRBs for a typical range of redshifts. We conducted transient searches at the GRB positions on timescales of 5 s, 30 s, and 2 min, resulting in the most constraining flux density limits on any associated transient of 0.42, 0.29, and 0.084 Jy, respectively. We also searched for dispersed signals at a temporal and spectral resolution of 0.5 s and 1.28 MHz, but none were detected. However, the fluence limit of 80–100 Jy ms derived for GRB 190627A is the most stringent to date for a short GRB. Assuming the formation of a stable magnetar for this GRB, we compared the fluence and persistent emission limits to short GRB coherent emission models, placing constraints on key parameters including the radio emission efficiency of the nearly merged neutron stars ($\epsilon_r\lesssim10^{-4}$), the fraction of magnetic energy in the GRB jet ($\epsilon_B\lesssim2\times10^{-4}$), and the radio emission efficiency of the magnetar remnant ($\epsilon_r\lesssim10^{-3}$).
Comparing the limits derived for our full GRB sample (along with those in the literature) to the same emission models, we demonstrate that our fluence limits only place weak constraints on the prompt emission predicted from the interaction between the relativistic GRB jet and the interstellar medium for a subset of magnetar parameters. However, the 30-min flux density limits were sensitive enough to theoretically detect the persistent radio emission from magnetar remnants up to a redshift of $z\sim0.6$. Our non-detection of this emission could imply that some GRBs in the sample were not genuinely short or did not result from a binary neutron star merger, the GRBs were at high redshifts, these mergers formed atypical magnetars, the radiation beams of the magnetar remnants were pointing away from Earth, or the majority did not form magnetars but rather collapse directly into black holes.
Racial and ethnic groups in the USA differ in the prevalence of posttraumatic stress disorder (PTSD). Recent research, however, has not observed consistent racial/ethnic differences in posttraumatic stress in the early aftermath of trauma, suggesting that such differences in chronic PTSD rates may be related to differences in recovery over time.
Methods
As part of the multisite, longitudinal AURORA study, we investigated racial/ethnic differences in PTSD and related outcomes within 3 months after trauma. Participants (n = 930) were recruited from emergency departments across the USA and provided periodic (2 weeks, 8 weeks, and 3 months after trauma) self-report assessments of PTSD, depression, dissociation, anxiety, and resilience. Linear models were completed to investigate racial/ethnic differences in posttraumatic dysfunction with subsequent follow-up models assessing potential effects of prior life stressors.
Results
Racial/ethnic groups did not differ in symptoms over time; however, Black participants showed reduced posttraumatic depression and anxiety symptoms overall compared to Hispanic participants and White participants. Racial/ethnic differences were not attenuated after accounting for differences in sociodemographic factors. However, racial/ethnic differences in depression and anxiety were no longer significant after accounting for greater prior trauma exposure and childhood emotional abuse in White participants.
Conclusions
The present findings suggest that differences in prior trauma exposure partially mediate the observed racial/ethnic differences in posttraumatic depression and anxiety symptoms following a recent trauma. Our findings further demonstrate that racial/ethnic groups show similar rates of symptom recovery over time. Future work utilizing longer time-scale data is needed to elucidate potential racial/ethnic differences in long-term symptom trajectories.