Bronze Age–Early Iron Age tin ingots recovered from four Mediterranean shipwrecks off the coasts of Israel and southern France can now be provenanced to tin ores in south-west Britain. These exceptionally rich and accessible ores played a fundamental role in the transition from copper to full tin-bronze metallurgy across Europe and the Mediterranean during the second millennium BC. The authors’ application of a novel combination of three independent analyses (trace element, lead and tin isotopes) to tin ores and artefacts from Western and Central Europe also provides the foundation for future analyses of the pan-continental tin trade in later periods.
The Cox duration model serves as the basis for more complex duration models such as competing risks, repeated events, and multistate models. These models make a number of assumptions, many of which can be assessed empirically, sometimes for substantive ends. We use Monte Carlo simulations to show that the order in which practitioners assess these assumptions can affect the model’s final specification and, ultimately, can produce misleading inferences. We focus on three assumptions regarding model specification decisions: proportional hazards (PH), stratified baseline hazards, and stratum-specific covariate effects. Our results suggest that checking the PH assumption before checking for stratum-specific covariate effects tends to produce the correct final specification most frequently. We reexamine a recent study of the timing of GATT/WTO applications to illustrate our points.
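As a hedged illustration of the workflow these simulations speak to, the sketch below (not the authors' code; the file and column names are hypothetical) fits a Cox model with the Python lifelines library, runs a Schoenfeld-residual test of the PH assumption, and then refits with a stratified baseline hazard for a covariate that violates it.

```python
# Hypothetical sketch: check the PH assumption first, then decide on stratification.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

# Assumed data layout: a duration column, an event indicator, and covariates,
# including a grouping variable "group" that might need a stratified baseline hazard.
df = pd.read_csv("duration_data.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")

# Schoenfeld-residual-based test of proportional hazards for each covariate.
ph_test = proportional_hazard_test(cph, df, time_transform="rank")
ph_test.print_summary()

# If "group" violates PH, one remedy is to stratify the baseline hazard on it.
cph_strat = CoxPHFitter()
cph_strat.fit(df, duration_col="duration", event_col="event", strata=["group"])
cph_strat.print_summary()
```

Checking PH first, as the simulation results recommend, means the stratification and stratum-specific-effect decisions are made only after the evidence for non-proportionality is in.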
Coastal landforms and associated archaeological records are at risk of erosion from a combination of rising sea levels and increasingly frequent high-intensity storms. Improved understanding of this risk can be gained by braiding archaeological and geomorphological methodologies with Indigenous knowledge. In this article, archaeological, geomorphological and mātauranga (a form of Indigenous knowledge) approaches are used to analyse a prograded Holocene foredune barrier in northern Aotearoa/New Zealand. Anthropogenic deposits within dune stratigraphy are radiocarbon-dated and used as chronological markers to constrain coastal evolution, alongside geomorphological analyses of topographic data, historical aerial photographs and satellite imagery. These investigations revealed that the barrier is eroding at a rate of 0.45 m/year. A midden in the foredune, radiocarbon dated to 224–270 B.P. (95% confidence), has been exposed by coastal erosion, confirming that the barrier is in its most eroded state of the past ~300 years. Vertical stratigraphy reveals midden and palaeosol deposits capped by dune sand in the foredune, indicating that vertical accretion of the foredune continued over the last ~200 years, despite the barrier now being in an eroding state. Mātauranga played a vital role in this project, as it was the coastal taiao (environmental) monitoring unit of Patuharakeke (a Māori sub-tribe) that discovered the midden, and the ecological mātauranga shared added experiential evidence to empirical observations. The work of local Indigenous groups such as Patuharakeke demonstrates the active use of mātauranga, woven with Western science methods, to preserve or capture the knowledge contained within archaeological sites at risk of being lost to coastal erosion. In this study, we present a method for weaving mātauranga, geomorphological and archaeological approaches to gain a deeper understanding of coastal landscape development.
Cannabis use severely affects the outcomes of people with psychotic disorders, yet treatments are lacking. To address this, in 2019 the National Health Service (NHS) Cannabis Clinic for Psychosis (CCP) was developed to support adults with psychosis in reducing and/or stopping their cannabis use.
Aims
To examine outcome data from the first 46 individuals to complete the CCP's intervention.
Method
The sample (N = 46) consisted of adults (aged ≥ 18) with psychosis under the care of the South London and Maudsley NHS Foundation Trust, referred to the CCP between January 2020 and February 2023, who completed their intervention by September 2023. Clinical and functional measures were collected before (T0) and after (T1) the CCP intervention (one-to-one sessions and peer group attendance). Primary outcomes were changes in the Cannabis Use Disorders Identification Test-Revised (CUDIT-R) score and pattern of cannabis use. Secondary outcomes included T0–T1 changes in measures of delusions, paranoia, depression, anxiety and functioning.
Results
A reduction in the mean CUDIT-R score was observed between T0 and T1 (mean difference = 17.10, 95% CI = 15.54–18.67), with 73.91% of participants achieving abstinence and 26.09% reducing the frequency and potency of their use. Significant improvements in all clinical and functional outcomes were observed, with 90.70% of participants in work or education at T1 compared with 8.70% at T0. Variation in CUDIT-R scores explained between 34% and 64% of the variance in the secondary measures.
Conclusions
The CCP intervention is a feasible strategy to support cannabis use cessation/reduction and improve clinical and functional outcomes of people with psychotic disorders.
It remains unknown whether severe mental disorders contribute to fatally harmful effects of physical illness.
Aims
To investigate the risk of all-cause death and loss of life-years following the onset of a wide range of physical health conditions in people with severe mental disorders compared with matched counterparts who had only these physical health conditions, and to assess whether these associations can be fully explained by this patient group having more clinically recorded physical illness.
Method
Using Czech national in-patient register data, we identified individuals with 28 physical health conditions recorded between 1999 and 2017, separately for each condition. In these people, we identified individuals who had severe mental disorders recorded before the physical health condition and exactly matched them with up to five counterparts who had no recorded prior severe mental disorders. We estimated the risk of all-cause death and lost life-years following each of the physical health conditions in people with pre-existing severe mental disorders compared with matched counterparts without severe mental disorders.
Results
People with severe mental disorders had an elevated risk of all-cause death following the onset of 7 out of 9 broadly defined and 14 out of 19 specific physical health conditions. People with severe mental disorders lost additional life-years following the onset of 8 out of 9 broadly defined and 13 out of 19 specific physical health conditions. The vast majority of results remained robust after considering the potentially confounding role of somatic multimorbidity and other clinical and sociodemographic factors.
Conclusions
A wide range of physical illnesses are more likely to result in all-cause death in people with pre-existing severe mental disorders. This premature mortality cannot be fully explained by having more clinically recorded physical illness, suggesting that physical disorders are more likely to be fatally harmful in this patient group.
To what extent can statistical language knowledge account for the effects of world knowledge in language comprehension? We address this question by focusing on a core aspect of language understanding: pronoun resolution. While existing studies suggest that comprehenders use world knowledge to resolve pronouns, the distributional hypothesis and its operationalization in large language models (LLMs) provide an alternative account of how purely linguistic information could drive apparent world knowledge effects. We addressed these confounds in two experiments. In Experiment 1, we found a strong effect of world knowledge plausibility (measured using a norming study) on responses to comprehension questions that probed pronoun interpretation. In Experiment 2, participants were slower to read continuations that contradicted world knowledge-consistent interpretations of a pronoun, implying that comprehenders deploy world knowledge spontaneously. Both effects persisted when controlling for the predictions of GPT-3, an LLM, suggesting that pronoun interpretation is at least partly driven by knowledge about the world and not the word. We propose two potential mechanisms by which knowledge-driven pronoun resolution occurs, based on validation- and expectation-driven discourse processes. The results suggest that while distributional information may capture some aspects of world knowledge, human comprehenders likely draw on other sources unavailable to LLMs.
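A minimal sketch of the kind of statistical control described above, assuming trial-level data and an LLM-derived probability for each interpretation; it uses statsmodels and hypothetical variable names, not the authors' actual models.

```python
import pandas as pd
import statsmodels.formula.api as smf

# trials.csv is a hypothetical file with one row per comprehension-question trial:
#   chose_consistent : 1 if the response matched the world-knowledge-consistent referent, else 0
#   plausibility     : normed plausibility of that interpretation (from the norming study)
#   llm_logprob      : log probability an LLM assigns to the same interpretation
trials = pd.read_csv("trials.csv")

# Logistic regression: does plausibility predict responses over and above the LLM covariate?
model = smf.logit("chose_consistent ~ plausibility + llm_logprob", data=trials).fit()
print(model.summary())
# A reliable plausibility coefficient with llm_logprob in the model would mirror the
# claim that world knowledge effects persist beyond distributional information.
```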
Improving the quality and conduct of multi-center clinical trials is essential to the generation of generalizable knowledge about the safety and efficacy of healthcare treatments. Despite significant effort and expense, many clinical trials are unsuccessful. The National Center for Advancing Translational Sciences launched the Trial Innovation Network to address critical roadblocks in multi-center trials by leveraging existing infrastructure and developing operational innovations. We provide an overview of the roadblocks that led to opportunities for operational innovation, our work to develop, define, and map innovations across the network, and how we implemented and disseminated mature innovations.
Background: It has long been known that having a severe mental health condition is a risk factor for cardiovascular disease. To facilitate early intervention, the NHS has implemented annual physical health reviews. Within Sussex Partnership Foundation Trust (SPFT), compliance is outlined in local guidance: an assessment, called the ALL-Physical Health Assessment, is mandatory on admission and six-monthly thereafter. Historically, completion has been poor. This audit therefore reviewed the quality of completion and whether the ALL was up to date, with the aim of implementing changes to improve care. Completion was categorised as green, amber or red, as errors are linked to potential harm to patient care, and actions taken in response to areas highlighted as abnormal were also reviewed.
Methods
This study was done within the setting of Pine Ward, a 17-bed male, inpatient, low-secure forensic psychiatric ward.
Data were collected in November 2022 by reviewing ALL-Physical Health Assessments (the six-monthly physical health check) on Carenotes (an electronic record system) and evaluating the quality of completion by categorising it as green (no errors), amber (minor errors, potential for risk to patient care) or red (major error/missing documentation, which can lead to serious harm). The ALL has fourteen categories: smoking, diabetes, cholesterol/HDL ratio, blood pressure, pulse, body mass index, diet, exercise, alcohol, substance misuse, national screening programme, sexual functioning, oral health and QRISK. This was compared with the results from the February 2022 ALL assessments.
Results
Of the 17 patients, 15 (88%) had an ALL completed in the last 6 months. When completion of the ALL was broken down by category, 89.9% of completions were green, 4.6% amber and 5.5% red.
In February 2022, 76.4% of patients had an ALL completed; 67.2% of completions were green, 15.5% amber and 17.2% red.
Improvement was seen in documentation of QRISK, alcohol, diet and exercise status, each 100% documented in November compared with 70%, 58%, 82% and 70%, respectively, in February. Diabetic and smoking status documentation is now 82% and 88%, compared with 58% and 76% in February.
Conclusion
This audit has highlighted that certain areas of the ALL are not completed to the expected standard. The importance of the assessment needs to be emphasised to trainees to allow for the best patient care. There is potential for harm to patients if the assessment is completed inaccurately or incorrectly.
Our aim was for 80% of new referrals for behaviours that challenge within Tower Hamlets Community Learning Disability Service to have an MDT coordinated approach by July 2022. This followed concerns that disjointed care and long waits for therapeutic support, when patients were referred between different MDT branches within the service, were having a negative impact on patient care.
Methods
An MDT project team was formed and weekly meetings were arranged. A driver diagram was created. Our primary outcome measure was determined: the percentage of referred patients per week who had MDT coordinated assessments, with data being collected manually from electronic progress notes and MDT meeting minutes. The number of referrals per week was recorded as a process measure. Baseline data were added to the Life QI web platform upon collection, allowing generation of run charts for outcome and process measures. The time-frame over which referrals were recorded was changed from weekly to fortnightly, to help differentiate graphically between zero values resulting from the absence of MDT coordination and those resulting from no referrals being received in a given week. Attempts were made to obtain service user input via easy-read questionnaires and subsequent discussion in a service user participation group. A weekly Positive Behavioural Support meeting was set up and a Positive Behavioural Support database was established; the combination of these changes simplified data collection and gave a focus to MDT working and collaboration for these service users. Data were recorded from 28/06/2021 to 03/07/2022 initially and subsequently extended to 06/11/2022 as part of a further PDSA cycle.
Results
A shift in the proportion of service users referred with behaviour that challenges who had MDT involvement at the point of allocation was observed, to above the mean value of 0.5, commencing 07/02/2022; this shift was sustained until the project's endpoint. In terms of our process measure, the median number of new behaviour that challenges referrals per fortnightly period to psychiatry and psychology was one. This ranged from 0–4 referrals per fortnightly period, but no sustained change in this value was observed over the course of the project.
Conclusion
Implementing a new behaviour that challenges database and a weekly meeting focused on MDT coordinated working for those newly referred with behaviour that challenges has led to a measurable and sustained improvement in the proportion of these service users receiving timely MDT coordinated care.
Individuals living with severe mental illness can have significant emotional, physical and social challenges. Collaborative care combines clinical and organisational components.
Aims
We tested whether a primary care-based collaborative care model (PARTNERS) would improve quality of life for people with diagnoses of schizophrenia, bipolar disorder or other psychoses, compared with usual care.
Method
We conducted a general practice-based, cluster randomised controlled superiority trial. Practices were recruited from four English regions and allocated (1:1) to intervention or control. Individuals receiving limited input in secondary care or who were under primary care only were eligible. The 12-month PARTNERS intervention incorporated person-centred coaching support and liaison work. The primary outcome was quality of life as measured by the Manchester Short Assessment of Quality of Life (MANSA).
Results
We allocated 39 general practices, with 198 participants, to the PARTNERS intervention (20 practices, 116 participants) or control (19 practices, 82 participants). Primary outcome data were available for 99 (85.3%) intervention and 71 (86.6%) control participants. Mean change in overall MANSA score did not differ between the groups (intervention: 0.25, s.d. 0.73; control: 0.21, s.d. 0.86; estimated fully adjusted between-group difference 0.03, 95% CI −0.25 to 0.31; P = 0.819). Acute mental health episodes (safety outcome) included three crises in the intervention group and four in the control group.
Conclusions
There was no evidence of a difference in quality of life, as measured with the MANSA, between those receiving the PARTNERS intervention and usual care. Shifting care to primary care was not associated with increased adverse outcomes.
The COVID-19 pandemic has presented a unique opportunity to understand how real-time pathogen genomics can be used for large-scale outbreak investigations. On 12 August 2021, the Australian Capital Territory (ACT) detected an incursion of the SARS-CoV-2 Delta (B.1.617.2) variant. Prior to this date, SARS-CoV-2 had been eliminated locally since 7 July 2020. Several public health interventions were rapidly implemented in response to the incursion, including a territory-wide lockdown and comprehensive contact tracing. The ACT has not previously used pathogen genomics at a population level in an outbreak response; therefore, this incursion also presented an opportunity to investigate the utility of genomic sequencing to support contact tracing efforts in the ACT. Sequencing of >75% of the 1793 laboratory-confirmed cases during the 3 months following the initial notification identified at least 13 independent incursions with onwards spread in the community. Stratification of cases by genomic cluster revealed that distinct cohorts were affected by the different incursions. Two incursions resulted in most of the community transmission during the study period, with persistent transmission in vulnerable sections of the community. Ultimately, both major incursions were successfully mitigated through public health interventions, including COVID-19 vaccines. The high rates of SARS-CoV-2 sequencing in the ACT and the relatively small population size facilitated detailed investigations of the patterns of virus transmission, revealing insights beyond those gathered from traditional contact tracing alone. Genomic sequencing was critical to disentangling complex transmission chains to target interventions appropriately.
COVID-19 has created many challenges for women in the perinatal phase. This stems from prolonged periods of lockdowns, restricted support networks and media panic, alongside altered healthcare provision.
Aims
We aimed to review the evidence regarding the psychological impact on new and expecting mothers following changes to antenatal and postnatal service provision within the UK throughout the pandemic.
Method
We conducted a narrative literature search of major databases (PubMed, Medline, Google Scholar). The literature was critically reviewed by experts within the field of antenatal and perinatal mental health.
Results
Changes to service provision, including the introduction of telemedicine services, attendance of antenatal appointments without partners or loved ones, and lack of support during the intrapartum period, are associated with increased stress, depression and anxiety. Encouraging women and their partners to engage with aspects of positive psychology through newly introduced digital platforms and virtual service provision has the potential to improve access to holistic care and increase mental well-being. An online course, designed by Imperial College Healthcare NHS Trust in response to changes to service provision, focuses on postnatal recovery inspiration and support for motherhood (PRISM) through a 5-week programme. So far, the course has received positive feedback.
Conclusions
The pandemic has contributed to increased rates of mental illness among pregnant and new mothers in the UK. Although the long-term implications are largely unpredictable, it is important to anticipate increased prevalence and complexity of symptoms, which could be hugely detrimental to an already overburdened National Health Service.
Lewy Body Dementia (LBD) is predicted to be under-diagnosed in the general population. Rapid eye movement sleep behaviour disorder (RBD) is one of the four core clinical criteria for the diagnosis of LBD. Longitudinal studies of RBD show a strong association with LBD, so there is potential for early identification of LBD and subsequent management. We aimed to screen 100% of patients referred to Trafford MATS for RBD.
Methods
We performed three Plan-Do-Study-Act (PDSA) cycles; in the first cycle we introduced a validated RBD screening question, from the DIAMOND-Lewy study, to the initial memory assessment proforma. This asked ‘Have you ever been told that you “act out your dreams” while sleeping (punched or flailed arms in the air, shouted or screamed)?’
In the second PDSA cycle, we delivered an RBD and LBD educational package to the specialist memory nurses who undertake the initial assessments. In the third PDSA cycle, reminders were sent to the team to use the new assessment proforma.
We collated data from patients who had undergone an initial memory assessment between 06/04/21 and 22/06/21 from the trust's electronic database.
Results
Initial baseline data showed that 0% of initial assessments screened for RBD; this rose to 100% at the end of the first PDSA cycle and was 75% at the end of the second, before returning to 100% at the end of the final cycle. The main reason for non-completion of the screening question was use of the old proforma.
4/152 patients screened positive; these patients were diagnosed with Alzheimer's disease, delirium, vascular dementia, and mixed Alzheimer's disease and vascular dementia, respectively.
Conclusion
The introduction of an RBD screening question into the MATS initial assessment proforma improved screening for RBD. We think the variation in screening compliance rates was likely due to practitioners using old assessment proformas, hence we sent reminders about the new proforma.
A limitation of the project was that some patients did not have a bed partner, which makes identification of the disorder more difficult.
Since the completion of the project, we have circulated a news bulletin through the Dementia United charity to raise awareness of our QI project nationally and have also discussed the project with the Lewy Body Society. Whilst our project has not yet identified a patient with LBD, we feel that introducing this screening question is a very easy and reproducible change to implement, and that RBD should be screened for in all memory service patients.
Non-physician performed point-of-care ultrasound (POCUS) is emerging as a diagnostic adjunct with the potential to enhance current practice. The scope of POCUS utility is broad and well-established in-hospital, yet limited research has occurred in the out-of-hospital environment. Many physician-based studies expound the value of POCUS in the acute setting as a therapeutic and diagnostic tool. This study utilized a scoping review methodology to map the literature pertaining to non-physician use of POCUS to improve success of peripheral intravenous access (PIVA), especially in patients predicted to be difficult to cannulate.
Methods:
Ovid MEDLINE, CINAHL Plus, EMBASE, and PubMed were searched from January 1, 1990 through April 15, 2021. A thorough search of the grey literature and reference lists of relevant articles was also performed to identify additional studies. Articles were included if they examined non-physician utilization of ultrasound-guided PIVA (USGPIVA) for patients anticipated to be difficult to cannulate.
Results:
A total of 158 articles were identified, of which 16 met the inclusion criteria. The majority of participants had varied experience with ultrasound, making accurate comparison difficult. Training and education were non-standardized, as was the approach to determining difficult intravenous access (DIVA). Despite this, the majority of the studies demonstrated high first-attempt and overall success rates for PIVA performed by non-physicians.
Conclusion:
Non-physician USGPIVA appears to be a superior method for PIVA when difficulty is anticipated. Additional benefits include reduced requirement for central venous catheter (CVC) or intraosseous (IO) needle placement. Paramedics, nurses, and emergency department (ED) technicians are able to achieve competence in this skill with relatively little training. Further research is required to explore the utility of this practice in the out-of-hospital environment.
The use of ultrasound in the out-of-hospital environment is increasingly feasible. The potential uses for point-of-care ultrasound (POCUS) by paramedics are many, but have historically been limited to traumatic indications. This study utilized a scoping review methodology to map the evidence for the use of POCUS by paramedics to assess respiratory distress and to gain a broader understanding of the topic.
Methods:
Databases Ovid MEDLINE, EMBASE, CINAHL Plus, and PubMed were searched from January 1, 1990 through April 14, 2021. Google Scholar was searched, and reference lists of relevant papers were examined to identify additional studies. Articles were included if they reported on out-of-hospital POCUS performed by non-physicians for non-traumatic respiratory distress.
Results:
A total of 591 unique articles were identified, of which seven met the inclusion criteria. The articles reported a variety of scan protocols and, with one exception, suffered from low enrolments and low participation. Most articles reported that non-physician-performed ultrasound was feasible, and most reported moderate to high levels of agreement between paramedics and expert reviewers for scan interpretation.
Conclusion:
Paramedics and emergency medical technicians (EMTs) have demonstrated the feasibility of lung ultrasound in the out-of-hospital environment. Further research should investigate the utility of standardized education and scanning protocols in paramedic-performed lung ultrasound for the differentiation of respiratory distress and the implications for patient outcomes.
Global 21-cm experiments require exquisitely precise calibration of the measurement systems in order to separate the weak 21-cm signal from Galactic and extragalactic foregrounds as well as instrumental systematics. Hitherto, experiments aiming to make this measurement have concentrated on measuring this signal using the single-element approach. However, an alternative approach based on interferometers with short baselines is expected to alleviate some of the difficulties associated with a single-element approach, such as precision modelling of the receiver noise spectrum. The Short spacing Interferometer Telescope probing cosmic dAwn and epoch of ReionisAtion (SITARA) is a short spacing interferometer deployed at the Murchison Radio-astronomy Observatory (MRO). It is intended to be a prototype or test-bed to gain a better understanding of interferometry at short baselines and to develop tools to perform observations and data calibration. In this paper, we provide a description of the SITARA system and its deployment at the MRO, and discuss strategies developed to calibrate SITARA. We touch upon certain systematics seen in SITARA data and their modelling. We find that SITARA has sensitivity to all-sky signals as well as non-negligible noise coupling between the antennas. The coupled receiver noise has a spectral shape that broadly matches the theoretical calculations reported in prior works. We also find that when antenna radiation patterns appropriately modified to take into account the effects of mutual coupling are used, the measured data are well modelled by the standard visibility equation.
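For reference, a generic form of the standard visibility equation invoked above, written to allow distinct (mutually coupled) antenna responses; the notation here is ours rather than SITARA's.

```latex
V(u, v, \nu) = \iint A_{1}(l, m, \nu)\, A_{2}^{*}(l, m, \nu)\, I(l, m, \nu)\,
               e^{-2\pi i (u l + v m)}\, \frac{dl\, dm}{\sqrt{1 - l^{2} - m^{2}}}
```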
Objective:
To determine the impact of an inpatient stewardship intervention targeting fluoroquinolone use on inpatient and postdischarge Clostridioides difficile infection (CDI).
Design:
We used an interrupted time series study design to evaluate the rate of hospital-onset CDI (HO-CDI), postdischarge CDI (PD-CDI) within 12 weeks, and inpatient fluoroquinolone use from 2 years prior to 1 year after a stewardship intervention.
Setting:
An academic healthcare system with 4 hospitals.
Patients:
All inpatients hospitalized between January 2017 and September 2020, excluding those discharged from locations caring for oncology, bone marrow transplant, or solid-organ transplant patients.
Intervention:
Introduction of electronic order sets designed to reduce inpatient fluoroquinolone prescribing.
Results:
Among 163,117 admissions, there were 683 cases of HO-CDI and 1,104 cases of PD-CDI. In the context of a 2% month-to-month decline starting in the preintervention period (P < .01), we observed a reduction in fluoroquinolone days of therapy per 1,000 patient days of 21% after the intervention (level change, P < .05). HO-CDI rates were stable throughout the study period. In contrast, we also detected a change in the trend of PD-CDI rates from a stable monthly rate in the preintervention period to a monthly decrease of 2.5% in the postintervention period (P < .01).
Conclusions:
Our systemwide intervention reduced inpatient fluoroquinolone use immediately, but not HO-CDI. However, a downward trend in PD-CDI occurred. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts.
Background: Effective inpatient stewardship initiatives can improve antibiotic prescribing, but their impact on outcomes such as Clostridioides difficile infection (CDI) is less apparent. However, the effect of inpatient stewardship efforts may extend to the postdischarge setting. We evaluated whether an intervention targeting inpatient fluoroquinolone (FQ) use in a large healthcare system reduced the incidence of postdischarge CDI.
Methods: In August 2019, 4 acute-care hospitals in a large healthcare system replaced standalone FQ orders with order sets containing decision support. Order sets redirected prescribers to syndrome order sets that prioritize alternative antibiotics. Monthly patient days (PDs) and antibiotic days of therapy (DOT) administered for FQs and NHSN-defined broad-spectrum hospital-onset (BS-HO) antibiotics were calculated using patient encounter data for the 23 months before and 13 months after the intervention (the final 7 months of which included COVID-19 admissions). We evaluated hospital-onset CDI (HO-CDI) per 1,000 PD (defined as any positive test after hospital day 3) and 12-week postdischarge CDI (PDC-CDI) per 100 discharges (any positive test within the healthcare system <12 weeks after discharge). Interrupted time-series analysis using generalized estimating equation models with negative binomial link function was conducted; a sensitivity analysis with Medicare case-mix index (CMI) adjustment was also performed to control for differences after the start of the COVID-19 pandemic.
Results: Among 163,117 admissions, there were 683 HO-CDIs and 1,009 PDC-CDIs. Overall, FQ DOT per 1,000 PD decreased by 21% immediately after the intervention (level change; P < .05) and decreased at a consistent rate throughout the entire study period (−2% per month; P < .01) (Fig. 1). There was a nonsignificant 5% increase in BS-HO antibiotic use immediately after the intervention and a continued increase in use thereafter (0.3% per month; P = .37). HO-CDI rates were stable throughout the study period, with a nonsignificant level-change decrease of 10% after the intervention. In contrast, there was a reversal in the trend in PDC-CDI rates, from a 0.4% per month increase in the preintervention period to a 3% per month decrease in the postintervention period (P < .01). Sensitivity analysis with adjustment for facility-specific CMI produced similar results but with wider confidence intervals, as did an analysis with a distinct COVID-19 time point.
Conclusion: Our systemwide intervention using order sets with decision support reduced inpatient FQ use by 21%. The intervention did not significantly reduce HO-CDI but significantly decreased the incidence of CDI within 12 weeks after discharge. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts, and incorporating postdischarge outcomes, such as CDI, should increasingly be considered.
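A minimal segmented-regression sketch in the spirit of the interrupted time-series analysis described above, assuming monthly aggregated counts; it substitutes a negative binomial GLM for the GEE specification the authors report, and all file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# monthly.csv (hypothetical), one row per month:
#   pdc_cdi     : count of post-discharge CDI cases within 12 weeks
#   discharges  : number of discharges that month (exposure)
#   month       : months elapsed since the start of the study
#   post        : 0 before the fluoroquinolone order-set intervention, 1 after
#   months_post : months elapsed since the intervention (0 beforehand)
monthly = pd.read_csv("monthly.csv")

# Segmented negative binomial regression with a log-discharges offset.
its = smf.glm(
    "pdc_cdi ~ month + post + months_post",
    data=monthly,
    family=sm.families.NegativeBinomial(),
    offset=np.log(monthly["discharges"]),
).fit()
print(its.summary())
# exp(coef) on `post` gives the immediate level change; exp(coef) on `months_post`
# gives the change in monthly trend after the intervention (cf. the ~3% per month decline).
```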
Logit and probit (L/P) models are a mainstay of binary time-series cross-sectional (BTSCS) analyses. Researchers include cubic splines or time polynomials to acknowledge the temporal element inherent in these data. However, L/P models cannot easily accommodate three other aspects of the data’s temporality: whether covariate effects are conditional on time, whether the process of interest is causally complex, and whether our functional form assumption regarding time’s effect is correct. Failing to account for any of these issues amounts to misspecification bias, threatening our inferences’ validity. We argue that scholars should consider using Cox duration models when analyzing BTSCS data, as they create fewer opportunities for such misspecification bias, while also being able to assess the same hypotheses as L/P models. We use Monte Carlo simulations to bring new evidence to light showing that Cox models perform just as well as, and sometimes better than, logit models in a basic BTSCS setting, and perform considerably better in more complex BTSCS situations. In addition, we highlight a new interpretation technique for Cox models—transition probabilities—to make Cox model results more readily interpretable. We use an application from interstate conflict to demonstrate our points.
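To make the transition-probability interpretation concrete, here is a hedged sketch (ours, not the authors' replication code) that converts a fitted Cox model's survival curve into discrete-period transition probabilities using the Python lifelines library; the data file and column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# btscs.csv (hypothetical): BTSCS data recast as durations, with a duration column,
# an event indicator, and covariates.
df = pd.read_csv("btscs.csv")

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")

# Survival curve for one covariate profile, evaluated at integer time points.
profile = df.drop(columns=["duration", "event"]).iloc[[0]]
surv = cph.predict_survival_function(profile, times=list(range(0, 21))).iloc[:, 0]

# Transition probability: the chance the event occurs in (t, t+1] given survival to t,
# i.e. 1 - S(t+1 | x) / S(t | x), computed along the estimated survival curve.
transition_prob = 1 - surv.shift(-1) / surv
print(transition_prob.dropna())
```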