Older adults with treatment-resistant depression (TRD) benefit more from treatment augmentation than switching. It is useful to identify moderators that influence these treatment strategies for personalised medicine.
Aims
Our objective was to test whether age, executive dysfunction, comorbid medical burden, comorbid anxiety or the number of previous adequate antidepressant trials could moderate the superiority of augmentation over switching. A significant moderator would influence the differential effect of augmentation versus switching on treatment outcomes.
Method
We performed a preplanned moderation analysis of data from the Optimizing Outcomes of Treatment-Resistant Depression in Older Adults (OPTIMUM) randomised controlled trial (N = 742). Participants were 60 years old or older with TRD. Participants were randomised either to (a) augmentation of their current antidepressant with aripiprazole (2.5–15 mg), bupropion (150–450 mg) or lithium (target serum drug level 0.6 mmol/L), or to (b) a switch to bupropion (150–450 mg) or nortriptyline (target serum drug level 80–120 ng/mL). Treatment duration was 10 weeks. The two main outcomes of this analysis were (a) symptom improvement, defined as change in Montgomery–Åsberg Depression Rating Scale (MADRS) scores from baseline to week 10 and (b) remission, defined as MADRS score of 10 or less at week 10.
Results
Of the 742 participants, 480 were randomised to augmentation and 262 to switching. The number of adequate previous antidepressant trials was a significant moderator of depression symptom improvement (b = −1.6, t = −2.1, P = 0.033, 95% CI [−3.0, −0.1], where b is the interaction coefficient (i.e. effect size) and t is the t-statistic for that coefficient, associated with the P-value). The effect was similar across all augmentation strategies. No other putative moderators were significant.
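In code, a moderator test of this kind reduces to a treatment-by-covariate interaction term in the outcome model. A minimal sketch, assuming hypothetical column names rather than the actual OPTIMUM analysis code:

```python
# Moderation-analysis sketch (illustrative; not the OPTIMUM study code).
# A significant 'augment:n_prior_trials' interaction means the
# augmentation-vs-switch effect on MADRS change depends on the number
# of previous adequate antidepressant trials.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("optimum_like_data.csv")  # hypothetical file
# Assumed columns:
#   madrs_change    - week-10 minus baseline MADRS score
#   augment         - 1 = augmentation arm, 0 = switch arm
#   n_prior_trials  - previous adequate antidepressant trials
#   age, baseline_madrs - covariates

model = smf.ols(
    "madrs_change ~ augment * n_prior_trials + age + baseline_madrs",
    data=df,
).fit()
print(model.summary())  # the interaction row gives b, t and P
```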
Conclusions
Augmenting was superior to switching antidepressants only in older patients with fewer than three previous antidepressant trials. This suggests that other intervention strategies should be considered following three or more trials.
In federal states, immigration regulation is frequently shared with subnational levels of governance. In Canada, provinces even have immigration selection powers. This is significant, as an increasing proportion of new permanent residents (1) are now selected by a province, and (2) previously held a temporary residence permit. However, the ways in which the interaction between the federalization of immigration and two-step migration impacts migrants’ experiences is still not well understood. This article contributes to the literature by providing deeper insights into the effects of the federalization of immigration on migrants. Based on the case of Quebec, it analyzes how the federalization of two-step migration affects migrants’ transitions from temporary to permanent status, whereby immigrants become “included.” The article contends that rather than functioning as an administrative process of linear inclusion, the federalization of two-step migration produces an ambiguous process of inclusion which reflects contradictory federal-provincial political agendas and tensions.
It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise psychological intervention benefits in adults with subthreshold depression up to 2 years, and explore participant-level effect-modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
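As a rough frequentist stand-in for the Bayesian one-stage model described, an IPD meta-analysis can be sketched as a mixed model with study-level random effects and a treatment-by-baseline interaction (hypothetical variable names):

```python
# One-stage IPD meta-analysis sketch (frequentist analogue of the
# Bayesian model in the abstract; column names are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

ipd = pd.read_csv("ipd_subthreshold_depression.csv")
# Assumed columns:
#   outcome       - follow-up symptom severity (e.g. PHQ-9)
#   treat         - 1 = psychological intervention, 0 = control
#   phq9_baseline - baseline PHQ-9 (candidate effect modifier)
#   study         - trial identifier

m = smf.mixedlm(
    "outcome ~ treat * phq9_baseline",  # interaction = moderation
    data=ipd,
    groups=ipd["study"],                # random intercept per study
    re_formula="~treat",                # random treatment effect
).fit()
print(m.summary())
```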
Results
IPD from 10 671 individuals (50 studies) were included. We found significant effects on depressive symptom severity up to 12 months (standardised mean difference [s.m.d.] = −0.48 to −0.27). Effects could not be ascertained up to 24 months (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (posterior probability P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
Objectives/Goals: We hypothesized that bulk transcriptomic profiling of blood collected from within the ischemic vasculature during an acute ischemic stroke with large vessel occlusion (LVO) would contain unique biomarkers, distinct from the peripheral circulation, that may provide much-needed insight into the underlying pathogenesis of LVO in humans. Methods/Study Population: The Transcriptomic Biomarkers of Inflammation in Large Vessel Ischemic Stroke pilot study prospectively enrolled patients ≥18 years of age with an anterior circulation LVO treated with endovascular thrombectomy (EVT). Two periprocedural arterial blood samples were obtained (DNA/RNA Shield™ tubes, Zymo Research): (1) proximal to the thrombus, from the internal carotid artery, and (2) immediately downstream from the thrombus, by puncturing through the thrombus with the microcatheter. Bulk RNA sequencing was performed, and differential gene expression was identified using the Wilcoxon signed-rank test for paired data, adjusting for age, sex, use of thrombolytics, time from last known well to EVT, and thrombolysis in cerebral infarction score. Bioinformatic pathway analyses were computed using MCODE and Reactome. Results/Anticipated Results: From May to October 2022, 20 patients were screened and 13 were enrolled (median age 68 [SD 10.1], 47% male, 100% white). A total of 608 differentially expressed genes reached statistical significance. Discussion/Significance of Impact: These results provide evidence of significant gene expression changes occurring within the ischemic vasculature of the brain during LVO, which may correlate with larger ischemic infarct volumes and worse functional outcomes at 90 days. Future studies with larger sample sizes are supported by this work.
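The paired differential-expression test described can be sketched as follows, using synthetic data and omitting the covariate adjustment reported in the abstract:

```python
# Per-gene paired Wilcoxon signed-rank test with Benjamini-Hochberg FDR
# (synthetic stand-in data; covariate adjustment omitted for brevity).
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes, n_pairs = 1000, 13  # 13 enrolled patients
proximal = rng.normal(size=(n_genes, n_pairs))  # normalised expression,
distal = rng.normal(size=(n_genes, n_pairs))    # proximal vs downstream

pvals = np.array(
    [wilcoxon(proximal[g], distal[g]).pvalue for g in range(n_genes)]
)
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} genes significant at FDR 0.05")
```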
This study is the first to attempt to isolate a relationship between cognitive activity and equilibration to a Nash equilibrium. Subjects, while undergoing fMRI scans of brain activity, participated in second price auctions against a single competitor following a predetermined strategy that was unknown to the subject. For this auction there is a unique strategy that maximizes the subject's earnings, which is also a Nash equilibrium of the associated game-theoretic model of the auction. As with all games, the bidding strategies of subjects participating in second price auctions most often do not reflect the equilibrium bidding strategy at first but, with experience, typically exhibit a process of equilibration, or convergence toward the equilibrium. This research is focused on that process of convergence.
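The optimality of truthful bidding can be checked numerically: because the winner of a second price auction pays the opponent's bid, bidding one's true value weakly dominates both shading and overbidding. A small simulation under an assumed uniform opponent strategy (not the experimental parameters):

```python
# Second-price auction sketch: expected payoff of different bids for a
# bidder with private value 0.7 against uniform-random opponent bids.
import numpy as np

rng = np.random.default_rng(1)
value = 0.7
opponent_bids = rng.uniform(0, 1, 100_000)

def expected_payoff(bid):
    wins = bid > opponent_bids
    # The winner pays the second-highest bid (the opponent's bid here).
    return np.mean(np.where(wins, value - opponent_bids, 0.0))

for b in [0.4, 0.7, 1.0]:
    print(f"bid={b:.1f}  expected payoff={expected_payoff(b):.4f}")
# Truthful bidding (0.7) is maximal: shading forgoes profitable wins,
# while overbidding wins some auctions at prices above value.
```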
In the data reported here, subjects participated in sixteen auctions, after which all subjects were told the strategy that would maximize their revenues, the theoretical equilibrium. Following that announcement, sixteen more auctions were performed. The question posed by the research concerns the mental activity that might accompany equilibration as it is observed in bidding behavior. Does brain activation differ between equilibrated and non-equilibrated bidding? If so, are there differences in the location of activation during and after equilibration? We found significant activation in the frontal pole, especially in Brodmann's area 10, the anterior cingulate cortex, the amygdala and the basal forebrain. There was significantly more activation in the basal forebrain and the anterior cingulate cortex during the first sixteen auctions than in the second sixteen. The activity in the amygdala shifted from the right side to the left after the solution was given.
To aid in the prediction of turbulent boundary layer flows over rough surfaces, a new model is proposed to estimate hydrodynamic roughness based solely on geometric surface information. The model is built on a fluid-mechanics-motivated geometric parameter called the wind-shade factor. Sheltering is included using a rapid algorithm adapted from the landscape-shadow literature, while local pressure drag is estimated using a piecewise potential-flow approximation. As with traditional surface parameters such as skewness or average slope magnitude, the wind-shade factor is purely geometric and can be evaluated efficiently from the surface elevation map and the mean flow direction. The wind-shade roughness model is applied to over 100 different surfaces available in a public roughness database, as well as several others, and the predicted sand-grain roughness heights are compared with measured values. The effects of various model ingredients are analysed, and transitionally rough surfaces are treated by adding a term representing the viscous stress component.
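A minimal sketch of a sweep-based sheltering computation in this spirit (an illustrative one-dimensional version with an assumed shading slope, not the authors' implementation):

```python
# Wind-shade sheltering sketch: sweep downstream carrying a "shade line"
# that decays at a fixed slope; points below the line are sheltered.
import numpy as np

def sheltered_fraction(h, dx=1.0, slope=0.3):
    """Fraction of points in the wind shadow of upstream topography.

    h: 1-D surface elevation along the mean flow direction.
    slope: assumed shading slope (a model parameter).
    """
    shade = -np.inf
    n_sheltered = 0
    for height in h:
        shade -= slope * dx           # shade line decays downstream
        if height < shade:
            n_sheltered += 1          # point lies inside the shadow
        shade = max(shade, height)    # taller points reset the line
    return n_sheltered / len(h)

x = np.linspace(0, 8 * np.pi, 400)
h = np.sin(x) + 0.2 * np.random.default_rng(2).normal(size=400)
print(f"sheltered fraction: {sheltered_fraction(h):.2f}")
```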
This study investigates practicing clinician and staff perspectives on potential protocol modifications for the “Nasal Irrigation, Oral Antibiotics, and Subgroup Targeting for Effective Management of Acute Sinusitis” (NOSES) study, a pragmatic randomized controlled trial aimed at improving acute rhinosinusitis management. Focus groups with clinicians and staff at the pretrial stage recommended expanding the participant age inclusion criteria, incorporating patients with COVID-19, and shortening the supportive care phase. Participants also discussed patient engagement and recruitment strategies. These practical insights contribute to optimizing the NOSES trial design and underscore the value of qualitative inquiry and healthcare stakeholder engagement in informing clinical trial design.
The posterior pharyngeal wall is an anatomical subsite of both the oropharynx and hypopharynx. The treatment outcomes of squamous cell carcinoma (SCC) of these sites are generally published together, which makes the interpretation of data challenging. The aim of this analysis was to determine if there is any difference in the treatment outcomes of these two rare disease entities.
Materials and Methods
Retrospective analysis showed that the posterior pharyngeal wall was the primary subsite in 17 patients (1.65 per cent) out of 1031 patients with oropharyngeal SCC, and in 23 patients (11.73 per cent) out of 196 patients with hypopharyngeal SCC.
Results
The five-year overall survival was 45 per cent for oropharyngeal origin and 53 per cent for hypopharyngeal origin patients. There was no significant difference in survival and locoregional control between these two groups of patients.
Conclusion
Squamous cell carcinoma of the posterior pharyngeal wall is a rare entity, which in our series represents 1.65 per cent of oropharyngeal cases and 11.73 per cent of hypopharyngeal tumours. There was no difference in treatment outcomes between the two groups.
Coastal wetlands are hotspots of carbon sequestration, and their conservation and restoration can help to mitigate climate change. However, there remains uncertainty on when and where coastal wetland restoration can most effectively act as natural climate solutions (NCS). Here, we synthesize current understanding to illustrate the requirements for coastal wetland restoration to benefit climate, and discuss potential paths forward that address key uncertainties impeding implementation. To be effective as NCS, coastal wetland restoration projects will accrue climate cooling benefits that would not occur without management action (additionality), will be implementable (feasibility) and will persist over management-relevant timeframes (permanence). Several issues add uncertainty to understanding if these minimum requirements are met. First, coastal wetlands serve as both a landscape source and sink of carbon for other habitats, increasing uncertainty in additionality. Second, coastal wetlands can potentially migrate outside of project footprints as they respond to sea-level rise, increasing uncertainty in permanence. To address these first two issues, a system-wide approach may be necessary, rather than basing cooling benefits only on changes that occur within project boundaries. Third, the need for NCS to function over management-relevant decadal timescales means methane responses may be necessary to include in coastal wetland restoration planning and monitoring. Finally, there is uncertainty on how much data are required to justify restoration action. We summarize the minimum data required to make a binary decision on whether there is a net cooling benefit from a management action, noting that these data are more readily available than the data required to quantify the magnitude of cooling benefits for carbon crediting purposes. By reducing uncertainty, coastal wetland restoration can be implemented at the scale required to significantly contribute to addressing the current climate crisis.
To describe the epidemiology of healthcare-associated Clostridioides difficile infection (HA-CDI) in two Québec hospitals in Canada following the 2003 epidemic and to evaluate the impact of antibiotic stewardship on the incidence of HA-CDI and the NAP1/027 strain.
Design:
Time-series analysis.
Setting:
Two Canadian tertiary care hospitals based in Montréal, Québec.
Patients:
Patients with a positive assay for toxigenic C. difficile were identified through infection control surveillance. All cases of HA-CDI, defined as symptoms occurring more than 72 hours after hospital admission or within 4 weeks of hospitalization, were included.
Methods:
The incidence of HA-CDI and antibiotic utilization from 2003 to 2020 were analyzed with available C. difficile isolates. The impact of antibiotic utilization on HA-CDI incidence was estimated by a dynamic regression time-series model. Antibiotic utilization and the proportion of NAP1/027 strains were compared biannually for available isolates from 2010 to 2020.
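A dynamic regression of this kind can be sketched as an ARIMA model with antibiotic utilization entering as an exogenous regressor (hypothetical file and column names, assumed model order):

```python
# Dynamic regression sketch: HA-CDI incidence as an ARIMA process with
# fluoroquinolone utilization as an exogenous regressor (illustrative).
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

ts = pd.read_csv("hacdi_series.csv", parse_dates=["period"],
                 index_col="period")
# Assumed columns:
#   hacdi  - cases per 10,000 patient-days
#   fq_use - fluoroquinolone utilization (e.g. DDD per 1,000 patient-days)

model = SARIMAX(
    ts["hacdi"],
    exog=ts[["fq_use"]],   # lagged exposure could also be considered
    order=(1, 1, 1),       # assumed ARIMA order for illustration
).fit(disp=False)
print(model.summary())     # the exog coefficient estimates the FQ effect
```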
Results:
The incidence of HA-CDI decreased at both hospitals between 2003 and 2020, from 26.5 cases per 10,000 patient-days in 2003 to 4.9 cases per 10,000 patient-days in 2020. Over the study period, there was an increase in the utilization of third-generation cephalosporins and a decrease in usage of fluoroquinolones and clindamycin. A decrease in fluoroquinolone utilization was associated with a significant decrease in HA-CDI incidence, as well as a decrease of approximately 80% in the NAP1/027 strain at both hospitals.
Conclusions:
Decreased utilization of fluoroquinolones in two Québec hospitals was associated with a decrease in the incidence of HA-CDI and a genotype shift from NAP1/027 to non-NAP1/027 strains.
Paediatric cardiac electrophysiologists are essential in CHD inpatient care, but their involvement is typically limited to consultation with individual patients. In our integrated heart centre, an electrophysiologist reviews all cardiac inpatient telemetry over the preceding 24 hours and participates in daily multidisciplinary morning report. This study investigates the impact of the strategy of consistent, formalised electrophysiologist presence at multidisciplinary morning report.
Methods:
This is a single-centre, prospective, observational study of electrophysiologist participation in patient encounters during heart centre multidisciplinary morning report from 10/20/2021 to 10/31/2022. Multidisciplinary morning report includes discussion of all intensive care and non-intensive care cardiac patients. An encounter was defined as reporting on one patient for one day. Electrophysiologists were initially blinded to observations.
Results:
Two electrophysiologists were observed over 215 days encompassing 6413 patient encounters. Electrophysiologists commented on 581 (9.1%) encounters in 234 unique patients with diverse diagnoses, equating to a median of 3 [interquartile range: 1–4] encounters per day. These comments included identification of arrhythmias and description of electrocardiographic findings. A recommendation to change management occurred in 282 (48.5%) encounters, most commonly regarding medications (n = 142, 24.4%) or pacemaker management (n = 48, 8.3%). Of the 581 encounters, there were 61 (10.5%) in which the electrophysiologist corrected another physician’s interpretation of the rhythm or electrocardiogram.
Conclusion:
Routine electrophysiologist involvement in multidisciplinary morning report provides significant, frequent, and timely input in patient management by identifying precise rhythm-related diagnoses and allowing nuanced, patient-specific medication and pacemaker management of all cardiac patients, not just those consulted. Electrophysiologist presence at multidisciplinary morning report is a vital resource and this practice should be considered at integrated paediatric cardiac centres.
The personalised oncology paradigm remains challenging to deliver despite technological advances in genomics-based identification of actionable variants combined with the increasing focus of drug development on these specific targets. To ensure we continue to build concerted momentum to improve outcomes across all cancer types, financial, technological and operational barriers need to be addressed. For example, complete integration and certification of the ‘molecular tumour board’ into ‘standard of care’ ensures a unified clinical decision pathway that both counteracts fragmentation and is the cornerstone of evidence-based delivery inside and outside of a research setting. Generally, integrated delivery has been restricted to specific (common) cancer types either within major cancer centres or small regional networks. Here, we focus on solutions in real-world integration of genomics, pathology, surgery, oncological treatments, data from clinical source systems and analysis of whole-body imaging as digital data that can facilitate cost-effectiveness analysis, clinical trial recruitment, and outcome assessment. This urgent imperative for cancer also extends across the early diagnosis and adjuvant treatment interventions, individualised cancer vaccines, immune cell therapies, personalised synthetic lethal therapeutics and cancer screening and prevention. Oncology care systems worldwide require proactive step-changes in solutions that include interoperable digital working that can solve patient-centred challenges to ensure inclusive, quality, sustainable, fair and cost-effective adoption and efficient delivery. Here we highlight workforce, technical, clinical, regulatory and economic challenges that prevent the implementation of precision oncology at scale, and offer a systematic roadmap of integrated solutions for standard of care based on minimal essential digital tools. These include unified decision support tools, quality control, data flows within an ethical and legal data framework, training and certification, monitoring and feedback. Bridging the technical, operational, regulatory and economic gaps demands joint action from public and industry stakeholders across national and global boundaries.
Most countries in Africa deployed digital solutions to monitor progress in rolling out COVID-19 vaccines. A rapid assessment of existing data systems for COVID-19 vaccines in the African region was conducted between May and July 2022, in 23 countries. Data were collected through interviews with key informants, identified among senior staff within Ministries of Health, using a semi-structured electronic questionnaire. At vaccination sites, individual data were collected in paper-based registers in five countries (21.7%), in an electronic registry in two countries (8.7%), and in the remaining 16 countries (69.6%) using a combination of paper-based and electronic registries. Of the 18 countries using client-based digital registries, 11 (61%) deployed the District Health Information System 2 Tracker, and seven (39%), a locally developed platform. The mean percentage of individual data transcribed in the electronic registries was 61% ± 36% standard deviation. Unreliable Internet coverage (100% of countries), non-payment of data clerks’ incentives (89%), and lack of electronic devices (89%) were the main reasons for the suboptimal functioning of digital systems quoted by key informants. It is critical for investments made and experience acquired in deploying electronic platforms for COVID-19 vaccines to be leveraged to strengthen routine immunization data management.
Packed red blood cell transfusions occur frequently after congenital heart surgery to augment haemodynamics, with limited understanding of efficacy. The goal of this study was to analyse the hemodynamic response to packed red blood cell transfusions in a single cohort, as “proof-of-concept” utilising high-frequency data capture of real-time telemetry monitoring.
Methods:
Retrospective review of patients after the arterial switch operation receiving packed red blood cell transfusions from 15 July 2020 to 15 July 2021. Hemodynamic parameters were collected from a high-frequency data capture system (SickbayTM) continuously recording vital signs from bedside monitors and analysed in 5-minute intervals up to 6 hours before, 4 hours during, and 6 hours after packed red blood cell transfusions—up to 57,600 vital signs per packed red blood cell transfusions. Variables related to oxygen balance included blood gas co-oximetry, lactate levels, near-infrared spectroscopy, and ventilator settings. Analgesic, sedative, and vasoactive infusions were also collected.
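The event-aligned 5-minute binning described can be sketched as follows (hypothetical file and column names; an actual SickbayTM export will differ):

```python
# Align each transfusion at time zero and summarise vital signs in
# 5-minute bins over the surrounding window (illustrative sketch).
import pandas as pd

vitals = pd.read_csv("vitals_stream.csv", parse_dates=["timestamp"])
events = pd.read_csv("prbc_transfusions.csv", parse_dates=["start_time"])
# Assumed vitals columns: patient_id, timestamp, map_mmHg, nirs_renal

windows = []
for _, ev in events.iterrows():
    v = vitals[vitals.patient_id == ev.patient_id].copy()
    v["t_rel_s"] = (v.timestamp - ev.start_time).dt.total_seconds()
    v = v[v.t_rel_s.between(-6 * 3600, 10 * 3600)]   # -6 h to +10 h
    v["bin_5min"] = (v.t_rel_s // 300).astype(int)
    windows.append(v.groupby("bin_5min")[["map_mmHg", "nirs_renal"]].median())

# Median trajectory across transfusions, relative to transfusion start.
trajectory = pd.concat(windows).groupby(level=0).median()
print(trajectory.head())
```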
Results:
Six patients, aged 8.5 [IQR: 5–22] days and weighing 3.1 [IQR: 2.8–3.2] kg, received transfusions following the arterial switch operation. Ten packed red blood cell transfusions were administered, with a median dose of 10 [IQR: 10–15] mL/kg over 169 [IQR: 110–190] min, at a median post-operative hour of 36 [IQR: 10–40]. Significant increases in systolic and mean arterial blood pressures of 5–12.5% at 3 hours after packed red blood cell transfusion were observed, while renal near-infrared spectroscopy increased by 6.2% post-transfusion. No significant changes in ventilation, vasoactive support, or laboratory values related to oxygen balance were observed.
Conclusions:
Packed red blood cell transfusions given after the arterial switch operation increased arterial blood pressure by 5–12.5% for 3 hours and renal near-infrared spectroscopy by 6.2%. High-frequency data capture systems can be leveraged to provide novel insights into the hemodynamic response to commonly used therapies such as packed red blood cell transfusions after paediatric cardiac surgery.
Translation is the process of turning observations in the research laboratory, clinic, and community into interventions that improve people’s health. The Clinical and Translational Science Awards (CTSA) program is a National Center for Advancing Translational Sciences (NCATS) initiative to advance translational science and research. Currently, 64 “CTSA hubs” exist across the nation. Since 2006, the Houston-based Center for Clinical Translational Sciences (CCTS) has assembled a well-integrated, high-impact hub in Texas that includes six partner institutions within the state, encompassing ∼23,000 sq. miles and over 16 million residents. To achieve the NCATS goal of “more treatments for all people more quickly,” the CCTS promotes diversity and inclusion by integrating underrepresented populations into clinical studies, workforce training, and career development. In May 2023, we submitted the UM1 application and six “companion” proposals: K12, R25, T32-Predoctoral, T32-Postdoctoral, and RC2 (two applications). In October 2023, we received priority scores for the UM1 (22), K12 (25), T32-Predoctoral (20), and T32-Postdoctoral (23), which historically fall within the NCATS funding range. This report describes the grant preparation and submission approach, coupled with data from an internal survey designed to assimilate feedback from principal investigators, writers, reviewers, and administrative specialists. Herein, we share the challenges faced, the approaches developed, and the lessons learned.
Background: Carbapenem-resistant Acinetobacter (CRA) bacteria are an urgent public health threat. Accurate and timely testing of CRA is important for proper infection control practices to minimize spread. In 2017, the CDC estimated 8,500 CRA cases among hospitalized patients, 700 deaths, and $281 million in attributable healthcare costs. Treatment options are extremely limited for carbapenem-resistant Acinetobacter baumannii (CRAB) infections, making CRAB a unique concern. Colonization screening is a valuable tool for containment but requires sampling of 4 body sites. Identifying a reliable specimen collection site for CRAB is important to inform public health recommendations, as screening can cost healthcare facilities valuable time and resources. Methods: Results of all screening specimens from patients with at least 1 site positive for CRAB on a unique collection date were extracted from Southeast Regional Antimicrobial Resistance Lab Network (SEARLN) data. Non-CRAB screenings and screenings that did not yield at least 1 positive result on a single collection date were excluded. We also limited our data to include only the following screening sites, which have been validated by the Tennessee Department of Health’s State Public Health Laboratory: axilla and groin, rectal, sputum, and wound. For each specimen source, we calculated the percentage of positive specimens among CRAB-colonized patients. Data were extracted and analyzed using SAS version 9.4 software. Results: The SEARLN data contained 594 CRAB screening specimens collected over 4 years, 2018 through 2021, and 486 of those specimens yielded CRAB. For CRAB-colonized patients screened in this study, wound specimens had the highest positivity rate, with 93.4% (95% CI, 89.9%–96.9%) of samples culturing CRAB. Sputum followed at 87.7%, then axilla and groin at 77.6% and rectal at 59.7%. Conclusions: Wound specimens produced the highest proportion of positive cultures among CRAB-positive patients, making them the sample type with the highest prevalence in our study. For healthcare facilities with limited time and resources seeking to optimize their CRAB screening process, wound specimens may be the most reliable single site for detecting CRAB colonization in patients with an open wound. When a wound is not present, sputum may be a good alternative single-source collection site. More research should be conducted before CRAB screening recommendations are updated.
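As an illustration of the per-site positivity calculation (the study used SAS 9.4; the counts below are hypothetical, since per-site denominators are not given in the abstract):

```python
# Positivity rate per specimen source with a 95% CI (sketch only;
# counts are illustrative, not the SEARLN data).
from statsmodels.stats.proportion import proportion_confint

def positivity(n_pos, n_total):
    rate = n_pos / n_total
    lo, hi = proportion_confint(n_pos, n_total, alpha=0.05,
                                method="normal")
    return rate, lo, hi

rate, lo, hi = positivity(180, 193)  # hypothetical wound-site counts
print(f"wound: {rate:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```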
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
Misdiagnosis of bacterial pneumonia increases the risk of exposure to inappropriate antibiotics and adverse events. We developed a diagnosis calculator (https://calculator.testingwisely.com) to inform the clinical diagnosis of community-acquired bacterial pneumonia using objective indicators identified through literature review, including incidence of disease, risk factors, and the sensitivity and specificity of diagnostic tests.
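The calculator's underlying logic combines a pre-test probability with a test's likelihood ratio. A minimal sketch with illustrative numbers, not the calculator's internal values:

```python
# Post-test probability via likelihood ratios (illustrative numbers).
def post_test_probability(pretest, sensitivity, specificity,
                          positive=True):
    odds = pretest / (1 - pretest)
    if positive:
        lr = sensitivity / (1 - specificity)   # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity   # negative likelihood ratio
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Example: 10% pre-test probability of bacterial pneumonia and a test
# with 80% sensitivity / 90% specificity returning a positive result.
p = post_test_probability(0.10, 0.80, 0.90, positive=True)
print(f"post-test probability: {p:.1%}")   # about 47%
```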
The Dynamic Appraisal of Situational Aggression (DASA) is one of only a few instruments designed to predict violence specifically in inpatient populations. It is important that risk assessment tools demonstrate clinical utility and that barriers to successful implementation are addressed. If successful, the tool should not only predict risk but lead to the use of interventions intended to manage and reduce it. The aim of this study is to learn more about the acceptability of the tool (adherence), its outputs (nursing interventions) and its outcomes (inpatient aggression and violence). Understanding more about the relationship and processes between an intervention and its outcomes is a key step in intervention evaluation.
Methods
Data were collected over a three-month period within a medium secure forensic hospital. A total of 43 patients were included for analysis.
Categories of nursing intervention were coded, and a content analysis of electronic health records was conducted. Incidents of aggression/violence to others were recorded as aggression to patient and aggression to staff. Data were gathered on the completion of the DASA score for all patients for each 24-hour period. A DASA score of 2–3 was used for moderate risk and ≥4 for high risk. The change in DASA score (before and after intervention) and the frequency of incidents were calculated for each intervention implemented.
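For clarity, the risk-banding rule applied to the DASA scores can be written out directly (a small sketch of the thresholds used in this study):

```python
# DASA risk banding as used in this study: 2-3 moderate, >=4 high.
def dasa_band(score: int) -> str:
    if score >= 4:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

for s in [0, 2, 4, 7]:
    print(s, dasa_band(s))
```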
Results
The average adherence to the DASA tool was 58.82% (range 1.09%–90.02%). The most frequent outcome following a moderate or high DASA score was that no intervention was provided. The next most frequent outcomes following a high score were a focused discussion with the patient, the use of increased monitoring and the use of seclusion. For patients who recorded a high score on the DASA tool, eight of those scores were followed by an incident of aggression (n = 8; 50%). There was no statistically significant difference in the change in DASA scores between the interventions implemented, for both high and moderate scores.
Conclusion
The ultimate goal of risk assessment is the management and prevention of risk. Thus, if a high score does not result in strategies for intervention, it renders the assessment process worthless. A recommendation for future clinical practice would be the systematic recording of interventions and risk management strategies when in receipt of a high score on the DASA. Greater operationalisation of risk management strategies and their ability to reduce aggression is needed to enhance risk assessment research and clinical practice.
To assess the degree of compliance with guidance on the timing of clozapine serum levels post dose, using Synnovis (previously Viapath) and Maudsley (14th edition) guidelines, in a medium secure hospital.
Methods
Electronic prescribing systems were reviewed on each ward to identify patients established on clozapine. Viapath's electronic database was reviewed from 01 May 2021 (18 months), and recorded timings were compared with the Maudsley (14th edition) and Synnovis guidelines. Twelve hours post dose was used as the guideline for OD (once daily) regimes and for BD (twice daily)/TDS (three times daily) regimes following the night-time or evening dose.
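The audit logic, computing hours from last dose to blood draw and flagging the guideline thresholds, can be sketched as follows (hypothetical file and column names):

```python
# Clozapine sampling-interval audit sketch (illustrative column names).
import pandas as pd

audit = pd.read_csv("clozapine_levels.csv",
                    parse_dates=["last_dose_time", "sample_time"])
# Assumed columns: regime, last_dose_time, sample_time

audit["hours_post_dose"] = (
    (audit.sample_time - audit.last_dose_time).dt.total_seconds() / 3600
)
audit["over_12h"] = audit.hours_post_dose > 12
audit["over_15h"] = audit.hours_post_dose > 15

# Percentage of samples exceeding each threshold, per regime.
summary = audit.groupby("regime")[["over_12h", "over_15h"]].mean() * 100
print(summary.round(1))
```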
Results
Four types of clozapine prescribing regime were identified across 45 patients in total: OD, BD am/nocte or evening, BD pm/night or evening, and TDS.
OD (n = 12): 8/12 patients' most recent bloods were taken >15 hours post dose. Of all samples over 18 months, 63.6% (38/55) were >12 hours and 56.3% (31/55) were >15 hours. In the OD group, 26 samples came from 2 patients, both of whom had samples taken later than 14 hours.
BD am/nocte or evening (n = 26): 2/5 of the most recent samples were >15 hours. Of all samples over 18 months, 51.6% (32/62) were >12 hours and 12.9% (9/62) were >15 hours. Note that an evening dose at 18:00, compared with 22:00, adds further time before a morning sample.
BD pm/evening or night (n = 5): 2/5 patients were >15 hours, and 4/5 patients were not sampled at 13–14 hours. Of all samples over 18 months, 75% (9/12) were >12 hours and 25% (3/12) were >15 hours.
TDS (n = 2): no patient was >15 hours; 1 patient was at 14 hours. Of all samples over 18 months, 41.7% (5/12) were >12 hours and 16.7% (2/12) were >15 hours.
Conclusion
Higher than expected inaccuracy in clozapine serum level timing was demonstrated, most markedly in bespoke regimes (OD, 56.3%; BD pm evening/night, 25%) compared with traditional regimes (TDS, 16.7%; BD am nocte/evening, 12.9%). Contributing factors include a knowledge gap across services and the fact that the Maudsley guidelines do not consider bespoke dose timings when advising on trough levels. These findings suggest that bespoke regimes need greater consideration when assessing clozapine serum levels.
Actions from this initial audit include informing teams about recent samples taken >15 hours post dose, service education highlighting the safety concern of potential underestimation of clozapine serum levels, guideline change with support from pharmacy, and re-audit in 12 months.