Prospective and early-career paleontologists deserve an accurate assessment of employment opportunities in their chosen field of study. Drawing on a wide range of sources, we have produced an admittedly incomplete analysis of the current status and recent trends of permanent academic employment in the discipline. Obtaining more complete longitudinal data on employment trends remains a major challenge that needs to be addressed. The number of job seekers far exceeds the number of available positions. There has been a clear erosion in the number of academic paleontologists in the United States, a trend that has been exacerbated in recent years. The decline, in constant dollars, of federal funding for paleontological research has potentially strong negative implications for future hiring. The loss of paleontology positions has also had a deleterious effect on our professional societies, which have seen a loss of regular (professional) membership, although student membership remains strong. These trends also potentially undermine efforts to diversify the field. Professional societies need to better coordinate their efforts to address these serious issues. Individual paleontologists must also become more effective advocates for the importance and relevance of our science.
Herbaceous perennials must annually rebuild their aboveground photosynthetic architecture from carbohydrates stored in crowns, rhizomes, and roots. Knowledge of carbohydrate utilization and storage can inform management decisions and improve control outcomes for invasive perennials. We monitored the nonstructural carbohydrates in a population of the hybrid Bohemian knotweed [Polygonum ×bohemicum (J. Chrtek & Chrtková) Zika & Jacobson [cuspidatum × sachalinense]; syn.: Fallopia ×bohemica (Chrtek & Chrtková) J.P. Bailey] and in Japanese knotweed [Polygonum cuspidatum Siebold & Zucc.; syn.: Fallopia japonica (Houtt.) Ronse Decr.]. Carbohydrate storage in crowns followed seasonal patterns typical of perennial herbaceous dicots, corresponding to key phenological events. Starch was consistently the most abundant nonstructural carbohydrate present. Sucrose levels did not show a consistent inverse relationship with starch levels. Lateral distribution of starch in rhizomes and, more broadly, total nonstructural carbohydrates sampled before dormancy break showed higher levels in rhizomes than in crowns. Total nonstructural carbohydrate levels in crowns reached a seasonal low of an estimated 22.6% of crown dry weight after 1,453.8 growing degree days (GDD) had accumulated by the end of June, mainly because of depleted stored starch, which reached its estimated minimum of 12.3% at 1,220.3 GDD by mid-June. This depletion corresponded to rapid development of the vegetative canopy before the plants entered the reproductive phase in August. Maximum starch accumulation in crowns followed complete senescence of aboveground tissues by mid- to late October. Removing aboveground shoot biomass in late June to early July, with removal of regrowth in early September before senescence, would make the most efficient use of time and labor to deplete carbohydrate reserves. Additionally, applying foliar systemic herbicides from late August through early fall should maximize translocation to belowground tissue, as assimilates move downward to rebuild underground storage reserves. Fall applications should be made before the loss of healthy leaf tissue, with the window for control typically ending by late September in Minnesota.
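As a point of reference for the thermal-time values reported above, the sketch below shows how growing degree days are typically accumulated with the simple averaging method. The base temperature of 10 °C is an illustrative assumption; the abstract does not state the threshold the authors used.

```python
# Minimal sketch of growing degree day (GDD) accumulation.
# The 10 deg C base temperature is an assumption for illustration only.

def daily_gdd(t_max: float, t_min: float, t_base: float = 10.0) -> float:
    """Return GDD for one day using the simple average method."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

def accumulate_gdd(daily_temps: list[tuple[float, float]], t_base: float = 10.0) -> float:
    """Sum daily GDD over a season; daily_temps holds (t_max, t_min) pairs in deg C."""
    return sum(daily_gdd(t_max, t_min, t_base) for t_max, t_min in daily_temps)

# Example: a week of 25/15 deg C days contributes (25 + 15)/2 - 10 = 10 GDD per day.
week = [(25.0, 15.0)] * 7
print(accumulate_gdd(week))  # 70.0
```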
Carbapenem-resistant Enterobacterales (CRE) are an urgent threat to healthcare, but the epidemiology of these antimicrobial-resistant organisms may have been evolving in some settings since the COVID-19 pandemic. An updated analysis of hospital-acquired CRE (HA-CRE) incidence in community hospitals is needed.
Methods:
We retrospectively analyzed data on HA-CRE cases and antimicrobial utilization (AU) from two community hospital networks, the Duke Infection Control Outreach Network (DICON) and the Duke Antimicrobial Stewardship Outreach Network (DASON), from January 2013 to June 2023. A zero-inflated negative binomial regression model was used owing to the excess of zero counts.
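For readers unfamiliar with the model named above, here is a minimal sketch of a zero-inflated negative binomial fit using statsmodels' ZeroInflatedNegativeBinomialP. The synthetic counts, time index, and exposure variable are illustrative assumptions, not the study's data.

```python
# Hedged sketch of a zero-inflated negative binomial regression.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
months = np.arange(n) % 120                     # hypothetical monthly time index
X = sm.add_constant(months.astype(float))       # design matrix for the count model
counts = rng.negative_binomial(1, 0.7, size=n)  # overdispersed case counts
counts[rng.random(n) < 0.5] = 0                 # inject excess zeros

# An exposure term lets the model estimate a rate per patient-day rather than a raw count.
patient_days = rng.integers(5_000, 20_000, size=n)

model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X,
                                      exposure=patient_days, inflation='logit')
result = model.fit(method='bfgs', maxiter=1000, disp=False)
print(result.summary())
# A rate ratio per unit time is exp(coefficient on the time term):
print(np.exp(result.params))
```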
Results:
In total, 126 HA-CRE cases from 36 hospitals were included in the longitudinal analysis. The pooled incidence of HA-CRE was 0.69 per 100,000 patient-days (95% confidence interval [95% CI], 0.57–0.82). The HA-CRE rate significantly decreased over time before COVID-19 (rate ratio [RR], 0.94 [95% CI, 0.89–0.99]; p = 0.02), but there was a significant slope change indicating an increasing trend in HA-CRE after COVID-19 (RR, 1.32 [95% CI, 1.06–1.66]; p = 0.01). In 21 hospitals participating in both DICON and DASON from January 2018 to June 2023, there was a correlation between HA-CRE rates and AU for CRE treatment (Spearman’s coefficient = 0.176; p < 0.01). Anti-CRE AU did not change over time, and there was no level or slope change after COVID-19.
Conclusions:
The incidence of HA-CRE decreased before COVID-19 in a network of community hospitals in the southeastern United States, but this trend was disrupted by the COVID-19 pandemic.
Objectives: People with dementia live with unmet needs due to dementia and other conditions. The EMBED-Care Framework is a co-designed, app-delivered intervention involving holistic assessment, evidence-based decision-support tools and resources to support its use. It is intended to empower people with dementia, family and practitioners to assess, monitor and manage needs. We aimed to explore the feasibility and acceptability of the EMBED-Care Framework and develop its underpinning programme theory.
Methods: A six-month single-arm mixed-methods feasibility and process evaluation, underpinned by an initial programme theory that was iteratively developed from previous studies. The settings were two community teams and two long-term care facilities (LTCFs). People with dementia and family were recruited to receive the intervention for 12 weeks. Practitioners were recruited to deliver the intervention for six months. Quantitative data included candidate process and outcome measures. Qualitative data comprised interviews, focus groups and observations with people with dementia, family and practitioners. Qualitative and quantitative data were analysed separately and triangulated at the interpretation phase.
Results: Twenty-six people with dementia, 25 family members and 40 practitioners were recruited. Practitioners in both settings recognized the potential benefit of the intervention for improving care and outcomes for people with dementia, and for themselves in supporting care provision. Family in both settings perceived a role in informing assessment and decisions about care. Family was integral to the intervention in community teams but had limited involvement in LTCFs. In both settings, embedding the intervention into routine care processes was essential to support its use. In community teams, this required aligning app functionality with care processes, establishing processes to monitor alerts, and clarifying team responsibilities. In LTCFs, duplication of care processes and limited time to integrate the intervention into routine care affected its acceptability.
Conclusions: A theoretically informed, co-designed digital intervention has the potential to improve care processes and outcomes for people with dementia and family, and is acceptable to practitioners in community teams. Further work is required to strengthen the intervention in LTCFs to support its integration into care processes and to support family involvement. The programme theory detailing key mechanisms and likely outcomes of the EMBED-Care Framework is presented.
Efficient evidence generation to assess the clinical and economic impact of medical therapies is critical amid rising healthcare costs and aging populations. However, drug development and clinical trials remain far too expensive and inefficient for all stakeholders. On October 25–26, 2023, the Duke Clinical Research Institute brought together leaders from academia, industry, government agencies, patient advocacy, and nonprofit organizations to explore how different entities and influencers in drug development and healthcare can realign incentive structures to efficiently accelerate evidence generation that addresses the highest public health needs. Prominent themes surfaced, including competing research priorities and incentives, inadequate representation of patient populations in clinical trials, opportunities to better leverage existing technology and infrastructure in trial design, and a need for heightened transparency and accountability in research practices. The group determined that together these elements contribute to an inefficient and costly clinical research enterprise, amplifying disparities in population health and sustaining gaps in evidence that impede advancements in equitable healthcare delivery and outcomes. The goal of addressing the identified challenges is to ultimately make clinical trials faster, more inclusive, and more efficient across diverse communities and settings.
High rates of psychiatric comorbidities have been found in people with problem gambling (PBG), including substance use, anxiety, and mood disorders. Psychotic disorders have received less attention, although this comorbidity is expected to have a significant impact on the course, consequences, and treatment of PBG. This review aimed to estimate the prevalence of psychotic disorders in PBG.
Methods
Medline (Ovid), EMBASE, PsycINFO (Ovid), CINAHL, CENTRAL, Web of Science, and ProQuest were searched on November 1, 2023, without language restrictions. Studies involving people with PBG and reporting the prevalence of schizophrenia spectrum and other psychotic disorders were included. Risk of bias was assessed using the Joanna Briggs Institute critical appraisal checklist for systematic reviews of prevalence data. The pooled prevalence of psychotic disorders was calculated using a random effects generalized linear mixed model and presented with forest plots.
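As a rough illustration of pooling prevalences under a random-effects model, the sketch below works on the logit scale with a DerSimonian-Laird heterogeneity estimate. The review itself used a random-effects generalized linear mixed model, which differs in estimation details, and the study counts here are invented for demonstration.

```python
# Illustrative random-effects pooling of prevalences on the logit scale.
import numpy as np

events = np.array([12, 40, 8, 95])        # hypothetical cases per study
totals = np.array([300, 700, 150, 2000])  # hypothetical sample sizes

p = events / totals
logit = np.log(p / (1 - p))
var = 1 / events + 1 / (totals - events)  # variance of each logit proportion

w = 1 / var                               # fixed-effect weights
q = np.sum(w * (logit - np.sum(w * logit) / w.sum()) ** 2)
df = len(p) - 1
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - df) / c)             # DerSimonian-Laird between-study variance
i2 = max(0.0, (q - df) / q) * 100         # I^2 heterogeneity statistic

w_re = 1 / (var + tau2)                   # random-effects weights
pooled_logit = np.sum(w_re * logit) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
lo, hi = pooled_logit - 1.96 * se, pooled_logit + 1.96 * se
inv = lambda x: 1 / (1 + np.exp(-x))      # back-transform to a proportion
print(f"pooled prevalence {inv(pooled_logit):.3f} "
      f"(95% CI {inv(lo):.3f}-{inv(hi):.3f}), I2 = {i2:.0f}%")
```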
Results
Of 1,271 records screened, 22 studies (n = 19,131) were included. The overall prevalence of psychotic disorders was 4.9% (95% CI, 3.6%–6.5%; I² = 88%). A lower prevalence was found in surveyed/recruited populations, compared with treatment-seeking individuals and register-based studies. No differences were found for factors such as treatment setting (inpatient/outpatient), diagnoses of psychotic disorders (schizophrenia only/other psychotic disorders), and assessment time frame (current/lifetime). The majority of included studies had a moderate risk of bias.
Conclusions
These findings highlight the relevance of screening problem gamblers for schizophrenia spectrum and other psychotic disorders, as well as any other comorbid mental health conditions, given the significant impact such comorbidities can have on the recovery process.
Background: External comparisons of antimicrobial use (AU) may be more informative if adjusted for encounter characteristics. Optimal methods to define input variables for encounter-level risk-adjustment models of AU are not established. Methods: This retrospective analysis of electronic health record data included 50 US hospitals in 2020–2021. We used NHSN definitions for days of therapy (DOT) with all antibacterial agents, including adult and pediatric encounters with at least 1 day present in inpatient locations. We assessed 4 methods to define input variables: 1) diagnosis-related group (DRG) categories by Yu et al.; 2) adjudicated Elixhauser comorbidity categories by Goodman et al.; 3) all Clinical Classification Software Refined (CCSR) diagnosis and procedure categories; and 4) adjudicated CCSR categories in which codes not appropriate for AU risk-adjustment were excluded by expert consensus, which required review of 867 codes over 4 months. Data were split randomly, stratified by bed size, as follows: 1) a training dataset including two-thirds of encounters among two-thirds of hospitals; 2) an internal testing set including one-third of encounters within training hospitals; and 3) an external testing set including the remaining one-third of hospitals. We used a gradient-boosted machine (GBM) tree-based model and a two-stage approach: first identifying encounters with zero DOT, then estimating DOT among those with >0.5 probability of receiving antibiotics. Accuracy was assessed using mean absolute error (MAE) in the testing datasets. Correlation plots compared model estimates and observed DOT among testing datasets. The top 20 most influential variables were defined using modeled variable importance. Results: Our datasets included 629,445 training, 314,971 internal testing, and 419,109 external testing encounters. Demographic data included 41% male, 59% non-Hispanic White, 25% non-Hispanic Black, 9% Hispanic, and 5% pediatric encounters. DRG was missing in 29% of encounters. MAE was lower in pediatric encounters than in adult encounters and lowest for models incorporating CCSR inputs (Figure 1). Performance in internal and external testing was similar, although the Goodman/Elixhauser variable strategies were less accurate in external testing and underestimated long-DOT outliers (Figure 2). Agnostic and adjudicated CCSR model estimates were highly correlated, and their lists of influential variables were similar (Figure 3). Conclusion: Larger numbers of CCSR diagnosis and procedure inputs improved risk-adjustment model accuracy compared with prior strategies. Variable importance and accuracy were similar for the agnostic and adjudicated approaches. However, maintaining expert adjudications would require significant time and could introduce personal bias. If these findings are confirmed, the need for expert adjudication of input variables should be reconsidered.
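The two-stage approach described above can be sketched as follows: a gradient-boosted classifier first screens for encounters likely to receive any antibiotics, and a gradient-boosted regressor then estimates DOT for encounters whose predicted probability exceeds 0.5. This is a minimal sketch with simulated placeholder features, not the study's CCSR/DRG inputs or its exact GBM implementation.

```python
# Hedged sketch of a two-stage (hurdle-style) gradient-boosted model for DOT.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier, HistGradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))                  # stand-in encounter features
any_abx = (X[:, 0] + rng.normal(size=5000)) > 0  # stage 1 target: any antibiotic DOT?
dot = np.where(any_abx, np.rint(np.exp(1 + 0.5 * X[:, 1])), 0)  # stage 2 target

X_tr, X_te, abx_tr, abx_te, dot_tr, dot_te = train_test_split(
    X, any_abx, dot, test_size=0.33, random_state=0)

clf = HistGradientBoostingClassifier().fit(X_tr, abx_tr)
reg = HistGradientBoostingRegressor().fit(X_tr[abx_tr], dot_tr[abx_tr])

# Combine stages: encounters below the 0.5 probability cutoff get 0 DOT.
proba = clf.predict_proba(X_te)[:, 1]
pred = np.where(proba > 0.5, reg.predict(X_te), 0.0)
print("MAE:", mean_absolute_error(dot_te, pred))
```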
Disclosure: Elizabeth Dodds Ashley: Advisor- HealthTrackRx. David J Weber: Consultant on vaccines: Pfizer; DSMB chair: GSK; Consultant on disinfection: BD, GAMA, PDI, Germitec
Background: Interventions targeting urine culture stewardship can improve diagnostic accuracy for urinary tract infections (UTI) and decrease inappropriate antibiotic treatment of asymptomatic bacteriuria. We aimed to determine whether a clinical decision support (CDS) tool that provided guidance on, and required documentation of, the indication for testing would decrease inappropriately ordered urine cultures in an academic healthcare network that already uses conditional (e.g., reflex) urine testing. Methods: In October 2022, four hospitals within one academic healthcare network transitioned to a new electronic health record (EHR). We developed an embedded CDS tool that provided guidance on ordering either a urinalysis (UA) with reflex to urine culture or a non-reflex urine culture (e.g., for pregnant patients) based on the indication for testing (Figure 1). We compared median monthly UA with reflex culture and non-reflex urine culture order rates pre-intervention (8/2017–9/2022) and post-intervention (10/2022–9/2023) using the Wilcoxon rank-sum test. We used interrupted time-series analyses, allowing a one-month window for the intervention effect, to assess changes in monthly UA with reflex culture, non-reflex urine culture, and total urine culture order rates associated with the intervention. Using SAS 9.4, we generated Durbin-Watson statistics to assess for autocorrelation and adjusted for it using a stepwise autoregressive model. Results: The median monthly UA with reflex culture order rates per 1,000 patient-days were similar pre- and post-intervention at 36.7 (interquartile range [IQR], 31.0–39.7) and 35.4 (IQR, 32.8–37.0), respectively (Figure 2). Non-reflex and total urine culture rates per 1,000 patient-days decreased from 8.5 (IQR, 8.1–9.1) to 4.9 (IQR, 4.7–5.1) and from 20.0 (IQR, 18.9–20.7) to 14.4 (IQR, 14.0–14.6) post-intervention, respectively. Interrupted time-series analyses revealed that the intervention was associated with a decrease in the monthly non-reflex urine culture order rate of 4.8 cultures/1,000 patient-days (p < 0.001) and in the total urine culture monthly order rate of 5.0 cultures/1,000 patient-days (p < 0.001) (Figures 3a and 3b). The UA with reflex order rate did not significantly change with the intervention (not pictured). Conclusion: In an academic healthcare network that already employed conditional urine testing, implementation of an EHR-based diagnostic stewardship tool led to additional decreases in both non-reflex and total urine cultures ordered.
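A segmented regression of the kind described above, with a Durbin-Watson check for autocorrelation, can be sketched as follows. The monthly series, intervention month, and fallback autoregressive fit are illustrative assumptions; the study itself used a stepwise autoregressive model in SAS 9.4.

```python
# Sketch of an interrupted time-series (segmented) regression with a DW check.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
n_months, t0 = 72, 60                      # hypothetical: intervention at month 60
t = np.arange(n_months, dtype=float)
post = (t >= t0).astype(float)             # level-change indicator
t_post = np.where(post == 1, t - t0, 0.0)  # slope-change term

# Simulated monthly order rate with a post-intervention level drop.
rate = 8.5 - 0.01 * t - 3.5 * post + rng.normal(0, 0.4, n_months)

X = sm.add_constant(np.column_stack([t, post, t_post]))
ols = sm.OLS(rate, X).fit()
print(ols.params)                          # [intercept, trend, level change, slope change]

dw = durbin_watson(ols.resid)
print("Durbin-Watson:", dw)                # values near 2 suggest little autocorrelation
if abs(dw - 2) > 0.5:                      # crude threshold; adjust with an AR(1) fit
    gls = sm.GLSAR(rate, X, rho=1).iterative_fit(maxiter=10)
    print(gls.params)
```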
OBJECTIVES/GOALS: (1) Identify causes of clinical research professional turnover; (2) define data collection methods for exit interviews; (3) provide institutions with resources to collect and analyze exit interviews; and (4) employ strategies to maximize the impact of exit interviews on retention. METHODS/STUDY POPULATION: The Clinical Research Professional Taskforce (CRPT) Exit Interview Subgroup has met monthly since January 2023. Action items were agreed upon, and minutes were kept and reviewed at subsequent virtual working meetings. All members were given the opportunity to speak and contribute. After a landscape analysis, conducted via survey, five institutions agreed to provide examples of their exit interview questions. Members spoke at length about goals, methods, collection techniques, institutional involvement, lessons learned, and practical applications that could become best practices. RESULTS/ANTICIPATED RESULTS: The Subgroup aggregated all questions into categories and developed sample questions incorporating all the data without reproducing any of it word for word. To allow for quantitative assessment and standardized reporting, the Subgroup formulated questions to be answered on a Likert scale, with free-text fields for select questions where further information is needed. The Subgroup developed best practices describing decision-making metrics, understanding reasons for turnover, and reporting data back to leadership. Practical aspects such as the method and timing of survey collection, anonymity, and staff training are also included. DISCUSSION/SIGNIFICANCE: We are hopeful that the sample questions and best practices will be widely utilized. Understanding the causes and impacts of CRP turnover is critical to meeting the current needs of clinical research. Further work is being done to calculate the cost of turnover to make the business case.
The aim of this study was to compare past New Zealand immunization strategies with the New Zealand coronavirus disease 2019 (COVID-19) immunization roll-out.
Methods:
Using the READ document analysis method, we analyzed 2 New Zealand immunization strategies (for influenza and measles) for how the disease, context, vaccine supply and demand, ethical principles (equity, individual autonomy, and maximizing benefits), and the Treaty of Waitangi affected the immunization programs. The findings were compared with the ongoing COVID-19 mass immunization program in New Zealand, as of October 15, 2021.
Results:
Several themes common to the case studies and the COVID-19 pandemic were identified, including the importance of equity, obligations under the Treaty of Waitangi, ethical mandates, and preparedness.
Conclusions:
Future emergency planning should integrate lessons from other infectious disease responses and immunization programs to avoid repeating mistakes and to create better health outcomes. This study has provided a basis for ongoing research into how an appropriate immunization plan can be developed that incorporates ethical values, the Treaty of Waitangi (in the NZ context), and evidence-based research to increase trust, equity, health, and preparedness for future outbreaks.
The optimal duration of antipsychotic treatment following remission of first-episode psychosis (FEP) is uncertain, considering potential adverse effects and individual variability in relapse rates. This study aimed to investigate the effect of antipsychotic discontinuation compared to continuation on recovery in remitted FEP patients.
Methods
CENTRAL, MEDLINE (Ovid), Embase, and PsycINFO databases were searched on November 2, 2023, with no language restrictions. Randomized controlled trials (RCTs) evaluating antipsychotic discontinuation in remitted FEP patients were selected. The primary outcome was personal recovery, and secondary outcomes included functional recovery, global functioning, hospital admission, symptom severity, quality of life, side effects, and employment. Risk of bias was assessed using the Cochrane risk-of-bias tool 2, and the certainty of evidence was evaluated with GRADE. Meta-analysis used a random-effects model with an inverse-variance approach.
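For illustration, inverse-variance pooling of risk ratios under the stated approach proceeds on the log scale, as sketched below with invented 2×2 counts; the review pooled published study effects rather than raw counts.

```python
# Illustrative inverse-variance pooling of risk ratios on the log scale.
import numpy as np

# events/total in discontinuation vs continuation arms (hypothetical counts)
e1, n1 = np.array([20, 15]), np.array([60, 68])
e2, n2 = np.array([9, 7]), np.array([62, 70])

log_rr = np.log((e1 / n1) / (e2 / n2))
se = np.sqrt(1/e1 - 1/n1 + 1/e2 - 1/n2)  # standard error of a log risk ratio

w = 1 / se**2                             # inverse-variance weights
pooled = np.sum(w * log_rr) / w.sum()
se_pooled = np.sqrt(1 / w.sum())
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"RR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
# With I^2 = 0%, as reported, random- and fixed-effect weights coincide,
# because the between-study variance estimate is zero.
```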
Results
Among 2,185 screened studies, 8 RCTs (560 participants) were included. No RCTs reported personal recovery as an outcome. Two studies measured functional recovery; patients in the discontinuation group were more likely to achieve functional recovery (risk ratio [RR], 2.19; 95% confidence interval [CI], 1.13–4.22; I² = 0%; n = 128), although the certainty of evidence was very low. No significant differences were found in hospital admission, symptom severity, quality of life, global functioning, or employment between the discontinuation and continuation groups.
Conclusions
Personal recovery was not reported in any antipsychotic discontinuation trial in remitted FEP. The observed positive effect of discontinuation on functional recovery came from a trial that was terminated early and from an RCT followed by an uncontrolled period. These findings should be interpreted cautiously owing to the very low certainty of evidence.
To determine whether residing in a hospital bed that previously held an occupant with Clostridioides difficile increases the risk of hospital-onset C. difficile infection (HO-CDI).
Methods:
In this retrospective cohort study, we used a real-time location system to track the movement of hospital beds in 2 academic hospitals from April 2018 to August 2019. We abstracted patient demographics, clinical characteristics, and C. difficile polymerase chain reaction (PCR) results from the medical record. We defined patients as exposed to a potentially “contaminated” bed or room if, within the 7 days preceding their HO-CDI diagnosis, they resided in a bed or room, respectively, that had held an occupant with C. difficile in the previous 90 days. We used multivariable logistic regression to determine whether residing in a contaminated bed was associated with HO-CDI after controlling for time at risk and the need for intensive care. We assessed mediation by and interaction with a contaminated hospital room.
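A minimal sketch of the adjusted analysis, assuming illustrative variable names and simulated data, might look like the following; note that a formal mediation analysis involves additional decomposition steps beyond the interaction term shown here.

```python
# Hedged sketch: logistic regression of HO-CDI on contaminated-bed exposure,
# with contaminated room and a bed x room interaction term, plus covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 25_000
df = pd.DataFrame({
    "bed": rng.integers(0, 2, n),        # prior C. difficile occupant in the bed
    "room": rng.integers(0, 2, n),       # prior C. difficile occupant in the room
    "icu": rng.integers(0, 2, n),        # required intensive care
    "days_at_risk": rng.integers(1, 30, n),
})
# Simulated outcome with small exposure effects (coefficients are invented).
logit = (-5 + 0.4 * df.bed + 0.3 * df.room + 0.2 * df.bed * df.room
         + 0.5 * df.icu + 0.05 * df.days_at_risk)
df["hocdi"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Adjusted model with interaction; bed * room expands to bed + room + bed:room.
fit = smf.logit("hocdi ~ bed * room + icu + days_at_risk", data=df).fit(disp=False)
print(np.exp(fit.params))                # exponentiated coefficients = odds ratios
```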
Results:
Of 25,032 hospital encounters with 18,860 unique patients, we identified 237 cases of HO-CDI. Exposure to a contaminated bed was associated with HO-CDI in unadjusted analyses (odds ratio [OR], 1.8; 95% confidence interval [CI], 1.4–2.31) and adjusted analyses (OR, 1.5; 95% CI, 1.2–2.0). Most of this effect was due to both mediation from and interaction with a contaminated hospital room.
Conclusions:
Residing in a hospital bed or room that previously had a patient with C. difficile increases the risk of HO-CDI. Increased attention to cleaning and disinfecting the healthcare environment may reduce hospital transmission of C. difficile.
To protect the public, State Medical Boards (SMBs) can take severe disciplinary actions (e.g., license revocation or suspension) against physicians who commit egregious wrongdoing. However, there is noteworthy variability in the extent to which SMBs impose severe disciplinary action. In this manuscript, we present and synthesize a subset of 11 recommendations based on findings from our team’s larger consensus-building project, which identified a list of 56 policies and legal provisions SMBs can use to better protect patients from egregious wrongdoing by physicians.
What is the work ethic? Does it justify policies that promote the wealth and power of the One Percent at workers' expense? Or does it advance policies that promote workers' dignity and standing? Hijacked explores how the history of political economy has been a contest between these two ideas about whom the work ethic is supposed to serve. Today's neoliberal ideology deploys the work ethic on behalf of the One Percent. However, workers and their advocates have long used the work ethic on behalf of ordinary people. By exposing the ideological roots of contemporary neoliberalism as a perversion of the seventeenth-century Protestant work ethic, Elizabeth Anderson shows how we can reclaim the original goals of the work ethic, and uplift ourselves again. Hijacked persuasively and powerfully demonstrates how ideas inspired by the work ethic informed debates among leading political economists of the past, and how these ideas can help us today.
Much of this book is a history of classical economic thought. Since the classical era, economists have developed more sophisticated analytical and empirical tools than anything deployed by the classical economists. Might contemporary economics therefore rightly claim to have left behind the assumptions of the work ethic, which were so deeply embedded in classical economic thought?
Marx’s conception of human flourishing is broadly Aristotelian: it consists in the exercise of essential human powers. The good life for humans thus includes the following components. It consists in activities that exercise a wide range of human capabilities. Such activity should be free, both voluntary and autonomous – self-directed, according to one’s own ideas. It is social, in the sense of promoting others’ welfare, and being motivated by that end. This activity is recognized by the agent and others as having this social character, such that it is common knowledge between the agent and beneficiaries that the agent acted for the beneficiaries’ sakes, and that the beneficiaries appreciate that fact.
Would you quit working if you won a lottery big enough to enable you to live comfortably off the annual payout? Numerous surveys of Americans since 1980 find that a majority say they would keep working. Of those Americans who have won huge lotteries, 85–90 percent do continue working. While the numbers are lower for people in low-paying unskilled jobs, these results reflect the continuing power of the Protestant work ethic in American life. Most Americans view work as something more than just a meal ticket. They view it as fulfilling a duty to contribute to society, as a source of pride, and as a locus of meaning.