It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise psychological intervention benefits in adults with subthreshold depression up to 2 years, and explore participant-level effect-modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
Results
IPD of 10 671 individuals (50 studies) could be included. We found significant effects on depressive symptom severity up to 12 months (standardised mean-difference [s.m.d.] = −0.48 to −0.27). Effects could not be ascertained up to 24 months (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
Positive, negative and disorganised psychotic symptom dimensions are associated with clinical and developmental variables, but differing definitions complicate interpretation. Additionally, some variables have had little investigation.
Aims
To investigate associations of psychotic symptom dimensions with clinical and developmental variables, and familial aggregation of symptom dimensions, in multiple samples employing the same definitions.
Method
We investigated associations between lifetime symptom dimensions and clinical and developmental variables in two twin and two general psychosis samples. Dimension symptom scores and most other variables were from the Operational Criteria Checklist. We used logistic regression in generalised linear mixed models for combined sample analysis (n = 875 probands). We also investigated correlations of dimensions within monozygotic (MZ) twin pairs concordant for psychosis (n = 96 pairs).
Results
Higher symptom scores on all three dimensions were associated with poor premorbid social adjustment, never marrying/cohabiting and earlier age at onset, and with a chronic course, most strongly for the negative dimension. The positive dimension was also associated with Black and minority ethnicity and lifetime cannabis use; the negative dimension with male gender; and the disorganised dimension with gradual onset, lower premorbid IQ and substantial within twin-pair correlation. In secondary analysis, disorganised symptoms in MZ twin probands were associated with lower premorbid IQ in their co-twins.
Conclusions
These results confirm associations that dimensions share in common and strengthen the evidence for distinct associations of co-occurring positive symptoms with ethnic minority status, negative symptoms with male gender and disorganised symptoms with substantial familial influences, which may overlap with influences on premorbid IQ.
Background: Optimizing antimicrobial use (AU) among post-acute and long-term care (PALTC) residents is fundamental to reducing the morbidity and mortality associated with multidrug-resistant organisms (MDROs), as well as unintended social consequences related to infection prevention. Data on AU in PALTC settings remain limited. The U.S. Department of Veterans Affairs (VA) provides PALTC to over 23,000 residents at 134 community living centers (CLCs) across the United States annually. Here, we describe AU in VA CLCs, assessing both class and length of therapy. Methods: Monthly AU between January 1, 2015 and December 31, 2019 was extracted from the VA Corporate Data Warehouse across 134 VA CLCs. Antimicrobials and administration routes were based on the National Healthcare Safety Network AU Option protocol for hospitals. Rates of AU were measured as days of therapy (DOT) per 1,000 resident-days. An antimicrobial course was defined as the same drug and route administered to the same resident with a gap of ≤3 days between administrations. Course duration was measured in days. Results: The most common class of antimicrobial course administered during the study period was beta-lactam/beta-lactamase inhibitor combinations (15%), followed by fluoroquinolones (14%), extended-spectrum cephalosporins (12%) and glycopeptides (11%; Figure 1). Neuraminidase inhibitors had the longest median course duration (10 (IQR 8) days), followed by tetracyclines (8 (IQR 8) days), and then folate pathway inhibitors, nitrofurans and first/second-generation cephalosporins (7 (IQR 7) days). Overall, 60% of antimicrobial courses were administered orally, with fluoroquinolones the most frequently administered oral class (20%). From 2015 to 2019, the annual rate of total antimicrobial use across VA CLCs decreased slightly, from 213.6 to 202.5 DOT/1,000 resident-days.
During the 5-year study period, fluoroquinolone use decreased from 27.47 to 13.36 DOT/1,000 resident-days. First- and second-generation cephalosporin use remained relatively stable, but third- or greater-generation cephalosporin use increased from 14.70 to 19.21 DOT/1,000 resident-days (Figure 2). Conclusion: The marked decrease in the use of fluoroquinolones at VA CLCs from 2015 to 2019 is similar to patterns observed for VA hospitals and for non-VA PALTC facilities. The overall use of antibacterial agents at VA CLCs decreased slightly during the study period, but use of other broad-spectrum agents, such as third- or greater-generation cephalosporins, increased over the same period. The strategies used to decrease fluoroquinolone use may have application for other antibiotic classes, both in VA and non-VA PALTC settings.
Disclosure: Robin Jump: Research support to my institution from Merck and Pfizer; Advisory boards for Pfizer
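The course and rate definitions in the Methods above can be sketched in code. This is only an illustration of the definitions; the record layout, field names and values below are hypothetical, not the VA Corporate Data Warehouse schema.

```python
from datetime import date

# Hypothetical administration records: (resident_id, drug, route, date).
# Illustrative values only, not VA CLC data.
admins = [
    ("r1", "levofloxacin", "oral", date(2019, 1, 1)),
    ("r1", "levofloxacin", "oral", date(2019, 1, 2)),
    ("r1", "levofloxacin", "oral", date(2019, 1, 6)),  # 4-day gap -> new course
    ("r2", "ceftriaxone", "IV", date(2019, 1, 3)),
]

def courses(records, max_gap_days=3):
    """Group same-drug/same-route administrations for the same resident into
    courses, starting a new course when the gap exceeds max_gap_days."""
    by_key = {}
    for resident, drug, route, day in sorted(records):
        by_key.setdefault((resident, drug, route), []).append(day)
    result = []
    for key, days in by_key.items():
        start = prev = days[0]
        for day in days[1:]:
            if (day - prev).days > max_gap_days:
                result.append((key, start, prev))
                start = day
            prev = day
        result.append((key, start, prev))
    return result

def dot_rate(records, resident_days):
    """Days of therapy per 1,000 resident-days: each calendar day a resident
    receives a given drug counts as one DOT."""
    dot = len({(resident, drug, day) for resident, drug, route, day in records})
    return 1000 * dot / resident_days

print(len(courses(admins)))                   # 3 courses
print(dot_rate(admins, resident_days=2000))   # 2.0 DOT/1,000 resident-days
```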
The COVID-19 pandemic has had major direct (e.g., deaths) and indirect (e.g., social inequities) effects in the United States. While the public health response to the epidemic featured some important successes (e.g., universal masking, and rapid development and approval of vaccines and therapeutics), there were systemic failures (e.g., inadequate public health infrastructure) that overshadowed these successes. Key deficiencies in the U.S. response were shortages of personal protective equipment (PPE) and supply chain failures. Recommendations are provided for mitigating supply shortages and supply chain failures in healthcare settings in future pandemics. Key recommendations for preventing shortages of essential components of infection prevention and control include increasing the stockpile of PPE in the U.S. National Strategic Stockpile, increasing transparency of the Stockpile, invoking the Defense Production Act at an early stage, and rapid review and authorization by FDA/EPA/OSHA of products not yet approved in the United States. Recommendations are also provided for mitigating shortages of diagnostic testing, medications and medical equipment.
Throughout the COVID-19 pandemic, many areas in the United States experienced healthcare personnel (HCP) shortages tied to a variety of factors. Infection prevention programs, in particular, faced increasing workload demands with little opportunity to delegate tasks to others without specific infectious diseases or infection control expertise. Shortages of clinicians providing inpatient care to critically ill patients during the early phase of the pandemic were multifactorial, largely attributed to increasing demands on hospitals to provide care to patients hospitalized with COVID-19 and to furloughs.1 HCP shortages and challenges during later surges, including the Omicron variant-associated surges, were largely attributed to HCP infections and associated work restrictions during isolation periods and the need to care for family members, particularly children, with COVID-19. Additionally, the detrimental physical and mental health impact of COVID-19 on HCP has led to attrition, which further exacerbates shortages.2 Demands increased in post-acute and long-term care (PALTC) settings, which already faced critical staffing challenges, difficulty with recruitment, and high rates of turnover. Although individual healthcare organizations and state and federal governments have taken actions to mitigate recurring shortages, additional work and innovation are needed to develop longer-term solutions to improve healthcare workforce resiliency. The critical role of those with specialized training in infection prevention, including healthcare epidemiologists, was well demonstrated in pandemic preparedness and response. The COVID-19 pandemic underscored the need to support growth in these fields.3 This commentary outlines the need to develop the US healthcare workforce in preparation for future pandemics.
Throughout history, pandemics and their aftereffects have spurred society to make substantial improvements in healthcare. After the Black Death in 14th century Europe, changes were made to elevate standards of care and nutrition that resulted in improved life expectancy.1 The 1918 influenza pandemic spurred a movement that emphasized public health surveillance and detection of future outbreaks and eventually led to the creation of the World Health Organization Global Influenza Surveillance Network.2 Most recently, the COVID-19 pandemic exposed many of the pre-existing problems within the US healthcare system, which included (1) a lack of capacity to manage a large influx of contagious patients while simultaneously maintaining routine and emergency care for non-COVID patients; (2) a “just in time” supply network that led to shortages and competition among hospitals, nursing homes, and other care sites for essential supplies; and (3) longstanding inequities in the distribution of healthcare and the healthcare workforce. The decades-long shift from domestic manufacturing to a reliance on global supply chains has compounded ongoing gaps in preparedness for supplies such as personal protective equipment and ventilators. Inequities in racial and socioeconomic outcomes highlighted during the pandemic have accelerated the call to focus on diversity, equity, and inclusion (DEI) within our communities. The pandemic also accelerated cooperation between government entities and the healthcare system, resulting in implementation of mitigation measures, new therapies and vaccinations at unprecedented speed, despite our fragmented healthcare delivery system and political divisions.
Still, widespread misinformation or disinformation and political divisions contributed to eroded trust in the public health system and prevented an even uptake of mitigation measures, vaccines and therapeutics, impeding our ability to contain the spread of the virus in this country.3 Ultimately, the lessons of COVID-19 illustrate the need to better prepare for the next pandemic. Rising microbial resistance, emerging and re-emerging pathogens, increased globalization, an aging population, and climate change are all factors that increase the likelihood of another pandemic.4
The Society for Healthcare Epidemiology of America (SHEA) strongly supports modernization of data collection processes and the creation of publicly available data repositories that include a wide variety of data elements and mechanisms for securely storing both cleaned and uncleaned data sets that can be curated as clinical and research needs arise. These elements can be used for clinical research and quality monitoring, and to evaluate the impacts of different policies on different outcomes. Achieving these goals will require dedicated, sustained, long-term funding to support data science teams and the creation of central data repositories that include data sets that can be “linked” via a variety of mechanisms, as well as data sets capturing institutional, state and local policies and procedures. A team-based approach to data science is strongly encouraged and supported to achieve the goal of a sustainable, adaptable national shared data resource.
The learning hospital is distinguished by the continuous generation of knowledge and the ongoing refinement and implementation of clinical best practices. We describe a model for the learning hospital within the framework of a hospital infection prevention program and argue that a critical assessment of safety practices is possible without significant grant funding. We reviewed 121 peer-reviewed manuscripts published by the VCU Hospital Infection Prevention Program over 16 years. Publications included quasi-experimental studies, observational studies, surveys, interrupted time series analyses, and editorials. We grouped the articles by their infection prevention focus and provide a brief summary of the findings. We also summarized the involvement of nonfaculty learners in these manuscripts, as well as the contributions of grant funding. Despite the absence of significant grant funding, infection prevention programs can critically assess safety strategies under the learning hospital framework by leveraging a diverse collaboration of motivated nonfaculty learners. This model is a valuable adjunct to traditional grant-funded efforts in infection prevention science and is part of a successful horizontal infection control program.
We assessed the impact of an embedded electronic medical record decision-support matrix (Cerner software system) for the reduction of hospital-onset Clostridioides difficile. A critical review of 3,124 patients highlighted excessive testing frequency in an academic medical center and demonstrated the impact of decision support following a testing fidelity algorithm.
The Hamilton Depression Rating Scale (HAMD) and the Beck Depression Inventory (BDI) are the most frequently used observer-rated and self-report scales of depression, respectively. It is important to know what a given total score or a change score from baseline on one scale means in relation to the other scale.
Methods
We obtained individual participant data from the randomised controlled trials of psychological and pharmacological treatments for major depressive disorders. We then identified corresponding scores of the HAMD and the BDI (369 patients from seven trials) or the BDI-II (683 patients from another seven trials) using the equipercentile linking method.
Results
The HAMD total scores of 10, 20 and 30 corresponded approximately with BDI scores of 10, 27 and 42, or with BDI-II scores of 13, 32 and 50. The HAMD change scores of −20 and −10 corresponded with BDI change scores of −29 and −15, and with BDI-II change scores of −35 and −16.
Conclusions
The results can help clinicians interpret the HAMD or BDI scores of their patients in a more versatile manner and also help clinicians and researchers evaluate such scores reported in the literature or the database, when scores on only one of these scales are provided. We present a conversion table for future research.
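The equipercentile linking used above maps a score on one scale to the score on the other scale that has the same percentile rank in the linking sample. A minimal sketch of the idea follows; it omits the smoothing typically applied in practice, and the illustrative score distributions are made up, not the trial data.

```python
import numpy as np

def equipercentile_link(scores_a, scores_b, value_a):
    """Map value_a on scale A to the scale-B score with the same percentile rank."""
    scores_a = np.sort(np.asarray(scores_a, dtype=float))
    scores_b = np.sort(np.asarray(scores_b, dtype=float))
    # Percentile rank of value_a within scale A (mid-rank convention for ties).
    rank = (np.searchsorted(scores_a, value_a, side="left")
            + np.searchsorted(scores_a, value_a, side="right")) / 2
    pct = rank / len(scores_a)
    # Score on scale B at the same percentile (linear interpolation).
    return float(np.quantile(scores_b, min(max(pct, 0.0), 1.0)))

# Toy illustration: scale B runs twice as high as scale A,
# so a score of 50 on A should link to roughly 100 on B.
linked = equipercentile_link(range(100), [2 * x for x in range(100)], 50)
print(linked)  # close to 100
```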
Common mental health problems affect a quarter of the population. Online cognitive–behavioural therapy (CBT) is increasingly used, but the factors modulating response to this treatment modality remain unclear.
Aims
This study aims to explore the demographic and clinical predictors of response to one-to-one CBT delivered via the internet.
Method
Real-world clinical outcomes data were collected from 2211 NHS England patients completing a course of CBT delivered by a trained clinician via the internet. Logistic regression analyses were performed using patient and service variables to identify significant predictors of response to treatment.
Results
Multiple patient variables were significantly associated with positive response to treatment including older age, absence of long-term physical comorbidities and lower symptom severity at start of treatment. Service variables associated with positive response to treatment included shorter waiting times for initial assessment and longer treatment durations in terms of the number of sessions.
Conclusions
Knowledge of which patient and service variables are associated with good clinical outcomes can be used to develop personalised treatment programmes, as part of a quality improvement cycle aiming to drive up standards in mental healthcare. This study exemplifies translational research put into practice and deployed at scale in the National Health Service, demonstrating the value of technology-enabled treatment delivery not only in facilitating access to care, but in enabling accelerated data capture for clinical research purposes.
Declaration of interest
A.C., S.B., V.T., K.I., S.F., A.R., A.H. and A.D.B. are employees or board members of the sponsor. S.R.C. consults for Cambridge Cognition and Shire.
Keywords: anxiety disorders; cognitive behavioural therapies; depressive disorders; individual psychotherapy
The entry point for my response to Bryan Cheyette’s thought-provoking essay on the difficulties of bringing together Jewish studies and postcolonial studies is a discussion of a recent national controversy in South Africa that, at first glance, seems to endorse Cheyette’s cautionary tale about how “actionism” tends to negate nuance and critical engagement. The response draws on this controversy to make some tentative observations about why Cheyette’s argument does not adequately acknowledge the consequences of the profound political, ideological, and economic transformations of post–World War II Jewry.
The influence of baseline severity has been examined for antidepressant medications but has not been studied properly for cognitive–behavioural therapy (CBT) in comparison with pill placebo.
Aims
To synthesise evidence regarding the influence of initial severity on efficacy of CBT from all randomised controlled trials (RCTs) in which CBT, in face-to-face individual or group format, was compared with pill-placebo control in adults with major depression.
Method
A systematic review and an individual-participant data meta-analysis using mixed models that included trial effects as random effects. We used multiple imputation to handle missing data.
Results
We identified five RCTs, and we were given access to individual-level data (n = 509) for all five. The analyses revealed that the difference in changes in Hamilton Rating Scale for Depression between CBT and pill placebo was not influenced by baseline severity (interaction P = 0.43). Removing the non-significant interaction term from the model, the difference between CBT and pill placebo was a standardised mean difference of –0.22 (95% CI –0.42 to –0.02, P = 0.03, I2 = 0%).
Conclusions
Patients with major depression can expect comparable benefit from CBT across the wide range of baseline severity. This finding can help inform individualised treatment decisions by patients and their clinicians.