During the COVID-19 pandemic, the United States Centers for Disease Control and Prevention provided strategies, such as extended use and reuse, to preserve N95 filtering facepiece respirators (FFR). We aimed to assess the prevalence of N95 FFR contamination with SARS-CoV-2 among healthcare personnel (HCP) in the Emergency Department (ED).
Design:
Real-world, prospective, multicenter cohort study. N95 FFR contamination (primary outcome) was measured by real-time quantitative polymerase chain reaction. Multiple logistic regression was used to assess factors associated with contamination.
Setting:
Six academic medical centers.
Participants:
ED HCP who practiced N95 FFR reuse and extended use during the COVID-19 pandemic between April 2021 and July 2022.
Primary exposure:
Total number of COVID-19-positive patients treated.
Results:
Two hundred forty-five N95 FFRs were tested. Forty-four N95 FFRs (18.0%, 95% CI 13.4, 23.3) were contaminated with SARS-CoV-2 RNA. The number of patients seen with COVID-19 was associated with N95 FFR contamination (adjusted odds ratio, 2.3 [95% CI 1.5, 3.6]). Wearing either surgical masks or face shields over FFRs was not associated with FFR contamination, and FFR contamination prevalence remained high when these adjuncts were used [face shields: 25% (16/64), surgical masks: 22% (23/107)].
Conclusions:
Exposure to patients with known COVID-19 was independently associated with N95 FFR contamination. Face shields and overlying surgical masks were not associated with N95 FFR contamination. N95 FFR reuse and extended use should be avoided due to the increased risk of contact exposure from contaminated FFRs.
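As a rough illustration of the kind of model behind the adjusted odds ratio reported above, the sketch below fits a logistic regression of contamination on COVID-19 patient count using statsmodels; the variable names and simulated data are hypothetical and do not reproduce the study's dataset or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per N95 FFR, with the number of COVID-19-positive
# patients treated by the wearer and a binary contamination outcome.
rng = np.random.default_rng(1)
n = 245
covid_patients = rng.poisson(2, size=n)
p_contaminated = 1 / (1 + np.exp(-(-2.2 + 0.8 * covid_patients)))
df = pd.DataFrame({
    "contaminated": rng.binomial(1, p_contaminated),
    "covid_patients": covid_patients,
})

# Fit the logistic regression; exponentiated coefficients are odds ratios
fit = smf.logit("contaminated ~ covid_patients", data=df).fit(disp=0)
print(np.exp(fit.params))      # odds ratio per additional COVID-19 patient seen
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale
```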
Background: Our prior six-year review (n=2165) revealed 24% of patients undergoing posterior decompression surgeries (laminectomy or discectomy) sought emergency department (ED) care within three months post-surgery. We established an integrated Spine Assessment Clinic (SAC) to enhance patient outcomes and minimize unnecessary ED visits through pre-operative education, targeted QI interventions, and early post-operative follow-up. Methods: We reviewed 13 months of posterior decompression data (n=205) following SAC implementation. These patients received individualized, comprehensive pre-operative education and follow-up phone calls within 7 days post-surgery. ED visits within 90 days post-surgery were tracked using provincial databases and compared to our pre-SAC implementation data. Results: Out of 205 patients, 24 (11.6%) accounted for 34 ED visits within 90 days post-op, showing a significant reduction in ED visits from 24% to 11.6%, and decreased overall ED utilization from 42.1% to 16.6% (when accounting for multiple visits by the same patient). Early interventions including wound monitoring, outpatient bloodwork, and prescription adjustments for pain management, helped mitigate ED visits. Patient satisfaction surveys (n=62) indicated 92% were “highly satisfied” and 100% would recommend the SAC. Conclusions: The SAC reduced ED visits after posterior decompression surgery by over 50%, with pre-operative education, focused QI initiatives, and its individualized, proactive approach.
We investigate the evolution of active galactic nucleus jets on kiloparsec scales due to their interaction with the clumpy interstellar medium (ISM) of the host galaxy and, subsequently, the surrounding circumgalactic environment. Hydrodynamic simulations of this jet–environment interaction are presented for a range of jet kinetic powers, peak densities of the multiphase ISM, and scale radii of the larger-scale environment – characteristic of either a galaxy cluster or poor group. Synthetic radio images are generated by considering the combination of synchrotron radiation from the jet plasma and free-free absorption from the multiphase ISM. We find that jet propagation is slowed by interactions with a few very dense clouds in the host galaxy ISM, producing asymmetries in lobe length and brightness which persist to scales of tens of kpc for poor group environments. The classification of kiloparsec-scale jets is highly dependent on surface brightness sensitivity and resolution. Our simulations of young active sources can appear as restarted sources, showing double-double lobe morphology, high core prominence (CP $\gt 0.1$), and the expected radio spectra for both the inner- and outer-lobe components. We qualitatively reproduce the observed inverse correlation between peak frequency and source size and find that the peak frequency of the integrated radio spectrum depends on ISM density but not the jet power. Spectral turnover in resolved young radio sources therefore provides a new probe of the ISM.
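As a hedged aside on why the spectral turnover should trace ISM density: a standard textbook approximation (not taken from this paper) for a synchrotron spectrum seen through free-free absorbing thermal gas is

$$ S_\nu \propto \nu^{-\alpha}\, e^{-\tau_\nu}, \qquad \tau_\nu \approx 3.28\times10^{-7}\left(\frac{T_e}{10^4\,\mathrm{K}}\right)^{-1.35}\left(\frac{\nu}{\mathrm{GHz}}\right)^{-2.1}\frac{\mathrm{EM}}{\mathrm{pc\,cm^{-6}}}, $$

with emission measure $\mathrm{EM} = \int n_e^2\,\mathrm{d}l$. Because $\tau_\nu$ scales with the square of the electron density but not with the jet power, a denser ISM moves the frequency at which $\tau_\nu \approx 1$ (the spectral peak) upward, consistent with the dependence described above.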
Background: Transcranial Doppler ultrasound (TCD) in a pediatric neurocritical care setting can characterize cerebral hemodynamics by assessing blood flow velocity in the main cerebral arteries. In large vessel occlusions (LVO) that require endovascular thrombectomy (EVT), TCD can monitor recanalization and arterial re-occlusion. We describe one case in a previously healthy 13-year-old girl with a right M1 middle cerebral artery occlusion. Methods: Analysis was done via a retrospective case review. Results: Our patient underwent successful EVT six hours after symptom onset. Follow-up TCDs done at 4, 8, and 24 hours showed stable peak systolic velocities (PSV) across the narrowed right M1 segment, ranging from 245 to 270 cm/s, with stable pre-stenotic PSV around 110 cm/s, indicating focal and stable narrowing of M1 without re-occlusion. No high-intensity transient signals (HITS) were identified on sub-10-minute TCD recordings. An urgent echocardiogram revealed a bicuspid aortic valve with vegetations, with later confirmation of infective endocarditis. The patient made an impressive recovery with only mild deficits. Conclusions: TCD can be an effective tool in a pediatric neurocritical care setting for guiding initial recanalization after EVT and monitoring for arterial re-occlusion, HITS, and hyperperfusion. TCD monitoring also decreases radiation exposure by reducing the need for repeat CTA.
Multicenter clinical trials are essential for evaluating interventions but often face significant challenges in study design, site coordination, participant recruitment, and regulatory compliance. To address these issues, the National Institutes of Health’s National Center for Advancing Translational Sciences established the Trial Innovation Network (TIN). The TIN offers a scientific consultation process, providing access to clinical trial and disease experts who offer input and recommendations throughout the trial’s duration, at no cost to investigators. This approach aims to improve trial design, accelerate implementation, foster interdisciplinary teamwork, and spur innovations that enhance multicenter trial quality and efficiency. The TIN leverages resources of the Clinical and Translational Science Awards (CTSA) program, complementing local capabilities at the investigator’s institution. The Initial Consultation process focuses on the study’s scientific premise, design, site development, recruitment and retention strategies, funding feasibility, and other support areas. As of 6/1/2024, the TIN has provided 431 Initial Consultations to increase efficiency and accelerate trial implementation by delivering customized support and tailored recommendations. Across a range of clinical trials, the TIN has developed standardized, streamlined, and adaptable processes. We describe these processes, provide operational metrics, and include a set of lessons learned for consideration by other trial support and innovation networks.
Self-working card tricks have been a staple for budding magicians, and entire books have been written about them, e.g. Fulves [1]. Lately, self-working card tricks have become common fodder for social media. In this note, we generalise two such card tricks (from [2] and [3]) that are based on the Australian shuffle.
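For readers meeting the Australian shuffle for the first time, here is a minimal Python sketch of the usual 'down-under' convention (top card dealt to the table, next card moved to the bottom of the packet); the convention and the function are illustrative and are not taken from the note itself.

```python
from collections import deque

def australian_shuffle(cards):
    """Down-under deal: discard the top card, move the next card to the bottom,
    repeating until a single card remains; that surviving card is returned."""
    packet = deque(cards)
    while len(packet) > 1:
        packet.popleft()                  # "down": top card is dealt to the table
        packet.append(packet.popleft())   # "under": next card goes to the bottom
    return packet[0]

# With a packet of 8 cards numbered from the top, the bottom card (8) survives.
print(australian_shuffle(list(range(1, 9))))
```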
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
Despite the influence of key figures like Henry Sigerist and the Rockefeller Foundation, social medicine achieved a formal presence at only a handful of medical schools in the US, partly reflecting the political context in which “social medicine” was often heard as “socialized medicine.” Work that might otherwise have been called social medicine had to pass under other names. Does “social medicine” in the US only include those who self-identified with social medicine, or does it include people who worked in the spirit of social medicine? Beginning with the recognized work of Sigerist and the Rockefeller Foundation, we then examine several Black social theorists whose work can now be recognized as social medicine. The Cold War context challenged would-be proponents of social medicine, but different threads endured. The first, clinically oriented, focused on community health. The second, based in academic departments, applied the interpretive social sciences to explore the interspace between the clinical and the social. These threads converged in the 1990s and 2000s in new forms of social medicine, considered as healthcare committed to social justice and health equity.
Individuals with long-term physical health conditions (LTCs) experience higher rates of depression and anxiety. Conventional self-report measures do not distinguish distress related to LTCs from primary mental health disorders. This difference is important as treatment protocols differ. We developed a transdiagnostic self-report measure of illness-related distress, applicable across LTCs.
Methods
The new Illness-Related Distress (IRD) scale was developed through thematic coding of interviews, systematic literature search, think-aloud interviews with patients and healthcare providers, and expert-consensus meetings. An internet sample (n = 1,398) of UK-based individuals with LTCs completed the IRD scale for psychometric analysis. We randomly split the sample (1:1) to conduct: (1) an exploratory factor analysis (EFA; n = 698) for item reduction, and (2) iterative confirmatory factor analysis (CFA; n = 700) and exploratory structural equation modeling (ESEM). Here, further item reduction took place to generate a final version. Measurement invariance, internal consistency, convergent validity, test–retest reliability, and clinical cut-points were assessed.
Results
EFA suggested a 2-factor structure for the IRD scale, subsequently confirmed by iteratively comparing unidimensional, lower order, and bifactor CFAs and ESEMs. A lower-order correlated 2-factor CFA model (two 7-item subscales: intrapersonal distress and interpersonal distress) was favored and was structurally invariant for gender. Subscales demonstrated excellent internal consistency, very good test–retest reliability, and good convergent validity. Clinical cut points were identified (intrapersonal = 15, interpersonal = 12).
Conclusion
The IRD scale is the first measure to capture transdiagnostic illness-related distress across LTCs. It may aid assessment within clinical practice and research related to psychological adjustment and distress in LTCs.
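As a hedged illustration of the two-factor structure described above, the sketch below fits an exploratory two-factor model to simulated 14-item data with scikit-learn; the item structure, sample size, and all names are hypothetical, and this is not the study's analysis pipeline (which additionally used CFA and ESEM).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate 14 items: two blocks of 7 items, each driven by its own latent factor
rng = np.random.default_rng(0)
factor_1 = rng.normal(size=(500, 1))
factor_2 = rng.normal(size=(500, 1))
block_1 = factor_1 @ np.ones((1, 7)) + rng.normal(scale=0.5, size=(500, 7))
block_2 = factor_2 @ np.ones((1, 7)) + rng.normal(scale=0.5, size=(500, 7))
items = np.hstack([block_1, block_2])

# Exploratory two-factor solution with varimax rotation
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = fa.components_.T   # 14 x 2 matrix; each item should load on its own factor
print(np.round(loadings, 2))
```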
Across Europe there are numerous examples of recent linkages between universities and Islamic seminaries. In Germany, the federal 'top-down' experiment, now over ten years old, of establishing departments of Islamic theology at five universities has recruited over two thousand students, many of whom will go on to teach confessional Islamic religious education in schools. In the UK, local partnerships have been developed at undergraduate and postgraduate level between universities such as Warwick, Birmingham and Middlesex and Islamic seminaries representing a range of Islamic traditions. Similar initiatives are developing on a smaller scale in other countries. These developments, which have taken place against a backdrop of state pressure to 'integrate' Islam and address 'radicalisation', challenge university traditions of 'scientific' approaches to the study of Islam as well as the confessional expectations of faith-based Islamic theological training. By looking more closely at the developing experience in Germany and Britain and selected other countries, this volume explores how the two approaches are finding ways of creative cooperation.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients primarily lie on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
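As a minimal, hypothetical sketch of what a polygenic risk score is at the level of individual genotypes (a weighted sum of allele dosages using discovery-GWAS effect sizes), the example below uses toy arrays; it is not the consortium pipeline, which also involves quality control and SNP selection steps.

```python
import numpy as np

# Illustrative genotype dosages (individuals x SNPs), values in {0, 1, 2}
dosages = np.array([[0, 1, 2, 1],
                    [2, 0, 1, 1],
                    [1, 1, 0, 2]])

# Per-SNP effect sizes (log odds ratios) from a discovery GWAS
betas = np.array([0.05, -0.02, 0.11, 0.04])

# Polygenic risk score: weighted sum of dosages, one score per individual
prs = dosages @ betas
print(prs)
```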
Similarity judgments of three-dimensional stimuli were simulated, with the hypothetical subject attending to only some dimensions of stimulus variation (i.e., “subsampling”) on each trial. Recovery of the stimulus configuration by non-metric multidimensional scaling was investigated as a function of subsampling, the amount of random error in the judgments, and the number of stimuli being scaled.
It was found that: (1) dimensions to which the subject often attends were well recovered even when dimensions seldom attended to were not, and (2) measures of recovery based on interpoint distances were inadequate. Several previous Monte Carlo studies were evaluated in light of the results.
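A minimal sketch of the kind of simulation described above, assuming Euclidean distances over the attended dimensions plus Gaussian error and using scikit-learn's non-metric MDS; the dimension weights, noise level, and sample sizes are illustrative rather than those of the original study.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_stim, n_dims, n_trials = 12, 3, 50
config = rng.uniform(size=(n_stim, n_dims))    # "true" 3-D stimulus configuration
attend_p = np.array([0.9, 0.6, 0.3])           # probability of attending each dimension

# Average judged dissimilarities over trials with dimension subsampling and noise
judged = np.zeros((n_stim, n_stim))
for _ in range(n_trials):
    mask = rng.random(n_dims) < attend_p       # dimensions attended on this trial
    if not mask.any():
        mask[0] = True
    diff = config[:, None, mask] - config[None, :, mask]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    judged += dist + rng.normal(scale=0.05, size=dist.shape)
judged = np.clip((judged + judged.T) / (2 * n_trials), 0.0, None)  # symmetrize averages
np.fill_diagonal(judged, 0.0)

# Attempt to recover the configuration with non-metric MDS
recovered = MDS(n_components=3, metric=False, dissimilarity="precomputed",
                random_state=0).fit_transform(judged)
print(recovered.shape)
```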
This study explored the prospective use of the Ages and Stages Questionnaires-3 in follow-up after cardiac surgery.
Materials and Method:
For children undergoing cardiac surgery at 5 United Kingdom centres, the Ages and Stages Questionnaires-3 were administered 6 months and 2 years later, with an outcome based on pre-defined cut-points: Red = 1 or more domain scores >2 standard deviations below the normative mean; Amber = 1 or more domain scores 1–2 standard deviations below the normative mean; Green = all scores within the normal range based on the manual.
Results:
From a cohort of 554 children <60 months old at surgery, 306 participated in the postoperative assessment: 117 (38.3%) were scored as Green, 57 (18.6%) as Amber, and 132 (43.1%) as Red. Children aged 6 months at first assessment (neonatal surgery) were more likely to score Red (113/124, 85.6%) than older age groups (32/182, 17.6%). In a logistic regression model for the outcome of an Ages and Stages score of Red that considered the risk factors of congenital heart disease complexity, univentricular status, congenital comorbidity, and child age, only younger age was significant (p < 0.001). Eighty-seven children had surgery in infancy and were reassessed as toddlers; of these, 43 (49.2%) improved, 30 (34.5%) stayed the same, and 13 (16.1%) worsened. Improved scores were predominantly in those who had a first assessment at 6 months old.
Discussion:
The Ages and Stages Questionnaires-3 results are most challenging to interpret in young babies aged 6 months who are affected by complex congenital heart disease.
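As a small, hypothetical illustration of the traffic-light cut-points described above (the function and the example z-scores are ours, not the study's scoring code):

```python
def asq_rating(domain_z_scores):
    """Traffic-light rating from ASQ-3 domain z-scores expressed in standard
    deviations relative to the normative mean, per the cut-points above."""
    if any(z < -2 for z in domain_z_scores):
        return "Red"      # at least one domain >2 SD below the normative mean
    if any(-2 <= z < -1 for z in domain_z_scores):
        return "Amber"    # at least one domain 1-2 SD below the normative mean
    return "Green"        # all domains within the normal range

print(asq_rating([-0.4, -1.5, 0.2]))  # Amber
print(asq_rating([-2.3, 0.0, 0.1]))   # Red
```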
Cannabis use and familial vulnerability to psychosis have been associated with social cognition deficits. This study examined the potential relationship between cannabis use and cognitive biases underlying social cognition and functioning in patients with first episode psychosis (FEP), their siblings, and controls.
Methods
We analyzed a sample of 543 participants with FEP, 203 siblings, and 1168 controls from the EU-GEI study using a correlational design. We used logistic regression analyses to examine the influence of clinical group, lifetime cannabis use frequency, and potency of cannabis use on cognitive biases, accounting for demographic and cognitive variables.
Results
FEP patients showed increased odds of facial recognition processing (FRP) deficits (OR = 1.642, CI 1.123–2.402) relative to controls but not of speech illusions (SI) or jumping to conclusions (JTC) bias, with no statistically significant differences relative to siblings. Daily and occasional lifetime cannabis use were associated with decreased odds of SI (OR = 0.605, CI 0.368–0.997 and OR = 0.646, CI 0.457–0.913, respectively) and JTC bias (OR = 0.625, CI 0.422–0.925 and OR = 0.602, CI 0.460–0.787, respectively) compared with lifetime abstinence, but not with FRP deficits, in the whole sample. Within the cannabis user group, low-potency cannabis use was associated with increased odds of SI (OR = 1.829, CI 1.297–2.578), FRP deficits (OR = 1.393, CI 1.031–1.882), and JTC bias (OR = 1.661, CI 1.271–2.171) relative to high-potency cannabis use, with comparable effects in the three clinical groups.
Conclusions
Our findings suggest increased odds of cognitive biases in FEP patients who have never used cannabis and in low-potency users. Future studies should elucidate this association and its potential implications.
This editorial considers the value and nature of academic psychiatry by asking what defines the specialty and psychiatrists as academics. We frame academic psychiatry as a way of thinking that benefits clinical services and discuss how to inspire the next generation of academics.
Coastal wetlands are hotspots of carbon sequestration, and their conservation and restoration can help to mitigate climate change. However, there remains uncertainty about when and where coastal wetland restoration can most effectively act as natural climate solutions (NCS). Here, we synthesize current understanding to illustrate the requirements for coastal wetland restoration to benefit climate, and discuss potential paths forward that address key uncertainties impeding implementation. To be effective as NCS, coastal wetland restoration projects must accrue climate cooling benefits that would not occur without management action (additionality), must be implementable (feasibility), and must persist over management-relevant timeframes (permanence). Several issues add uncertainty to understanding whether these minimum requirements are met. First, coastal wetlands serve as both a landscape source and sink of carbon for other habitats, increasing uncertainty in additionality. Second, coastal wetlands can potentially migrate outside of project footprints as they respond to sea-level rise, increasing uncertainty in permanence. To address these first two issues, a system-wide approach may be necessary, rather than basing cooling benefits only on changes that occur within project boundaries. Third, the need for NCS to function over management-relevant decadal timescales means methane responses may need to be included in coastal wetland restoration planning and monitoring. Finally, there is uncertainty about how much data are required to justify restoration action. We summarize the minimum data required to make a binary decision on whether there is a net cooling benefit from a management action, noting that these data are more readily available than the data required to quantify the magnitude of cooling benefits for carbon crediting purposes. By reducing uncertainty, coastal wetland restoration can be implemented at the scale required to contribute significantly to addressing the current climate crisis.
Tidal flooding occurs when coastal water levels exceed impact-based flood thresholds due to tides alone, under average weather conditions. Transitions to tidal flood regimes are already underway for nuisance flood severities in harbours and bays and are expected for higher severities in coming decades. In the first such regional assessment, we show that the same transition to tidally forced floods can also be expected to occur in Australian estuaries with less than 0.1 m of further sea-level rise. Flood thresholds that historically were exceeded only under the combined effects of riverine (freshwater) and coastal (saltwater) influences will then be exceeded by high tides alone. Once this tidal flooding emerges, it is projected to become chronic within two decades. The locations most at risk of the emergence of tidal flooding and the subsequent establishment of chronic flood regimes are those just inside estuary entrances. These locations are characterized by low freeboard, the vertical distance between a flood threshold and a typical high tide level. We use a freeboard-based analysis to estimate the sea-level rise required for impacts associated with official flood thresholds to occur due to tides alone. The resultant tide-only flood frequency estimates provide a lower bound for future flood rates.
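In terms of a worked relation (our notation, not the paper's): with flood threshold elevation $z_{\mathrm{thr}}$ and typical high-tide level $z_{\mathrm{HT}}$, the freeboard is

$$ F = z_{\mathrm{thr}} - z_{\mathrm{HT}}, $$

and tide-only flooding emerges once sea-level rise reaches roughly $F$; for the most exposed estuarine locations described above, this corresponds to less than 0.1 m of further sea-level rise.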
Military Servicemembers and Veterans are at elevated risk for suicide, but rarely disclose their experience of suicidal thoughts to their leaders or clinicians. We developed an algorithm to identify posts containing suicide-related content on a military-specific social media platform.
Methods
Publicly shared social media posts (n = 8449) from a military-specific social media platform were reviewed and labeled by our team for the presence/absence of suicidal thoughts and behaviors, and were used to train several machine learning models to identify such posts.
Results
The best performing model was a deep learning (RoBERTa) model that incorporated post text and metadata and detected the presence of suicidal posts with relatively high sensitivity (0.85), specificity (0.96), precision (0.64), F1 score (0.73), and an area under the precision-recall curve of 0.84. Compared to non-suicidal posts, suicidal posts were more likely to contain explicit mentions of suicide, descriptions of risk factors (e.g. depression, PTSD) and help-seeking, and first-person singular pronouns.
Conclusions
Our results demonstrate the feasibility and potential promise of using social media posts to identify at-risk Servicemembers and Veterans. Future work will use this approach to deliver targeted interventions to social media users at risk for suicide.
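As a hedged sketch of how the reported classification metrics relate to model outputs, the example below computes sensitivity, specificity, precision, F1, and the area under the precision-recall curve with scikit-learn on hypothetical labels and scores; it is not the authors' RoBERTa pipeline or data.

```python
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             average_precision_score, confusion_matrix)

# Hypothetical labels (1 = suicidal post) and classifier scores; illustrative only
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.12, 0.40, 0.77, 0.05, 0.66, 0.30, 0.08, 0.85, 0.55])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity (recall):", recall_score(y_true, y_pred))
print("specificity:", tn / (tn + fp))
print("precision:", precision_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
print("area under precision-recall curve:", average_precision_score(y_true, y_score))
```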