The high cost of antimicrobials presents critical challenges for healthcare providers managing infections amidst the growing threat of antimicrobial resistance (AMR). High costs hinder access to necessary treatments, disproportionately affecting disadvantaged populations and exacerbating health disparities. High drug prices necessitate the use of less effective or more toxic alternatives, leading to suboptimal outcomes and prolonged hospitalizations. This, in turn, increases healthcare costs and undermines efforts to combat AMR. Equitable policies, national formularies, and cost caps for essential antimicrobials can ensure universal access to life-saving treatments and enable antimicrobial stewardship programs to deliver the best possible outcomes.
In RISE, TV-46000 once monthly (q1m) or once every 2 months (q2m) significantly extended time to impending schizophrenia relapse. The current study (SHINE, NCT03893825) evaluated the long-term safety, tolerability, and effect of TV-46000.
Methods
Patients completing RISE without relapse (rollover) or newly recruited (de novo) were eligible. The de novo and placebo rollover cohorts were randomized 1:1 to q1m or q2m for ≤56 weeks; the TV-46000 rollover cohort continued its assigned regimen. Exploratory efficacy endpoints included time to impending relapse and patient-centered outcomes (PCOs), including the Schizophrenia Quality of Life Scale (SQLS).
Results
334 patients were randomized and received TV-46000 q1m (n=172) or q2m (n=162), for 202.3 patient-years [PY] of TV-46000 treatment. Treatment-emergent adverse events (AEs) reported for ≥5% of patients were: overall–injection site pain (event rate/100 PY, n [%]; 23.23, 16 [5%]); de novo (n=109)–injection site pain (56.10, 11 [10%]), injection site nodule (16.03, 6 [6%]), blood creatine phosphokinase increased (16.03, 8 [7%]), urinary tract infection (10.69, 7 [6%]); placebo rollover (n=53)–tremor (18.50, 5 [9%]); TV-46000 rollover (n=172)–headache (7.97, 8 [5%]). Serious AEs reported for ≥2 patients were worsening schizophrenia and hyperglycemia. Kaplan–Meier estimates for remaining relapse-free at week 56 were 0.98 (2% risk; q1m) and 0.88 (12%; q2m). SQLS improved for q1m (least-squares mean change [SE], −2.16 [0.98]) and q2m (−0.43 [0.98]); other PCOs (5-Level EuroQoL 5-Dimensions Questionnaire, Personal and Social Performance Scale, Drug Attitudes Inventory 10-item version) remained stable.
Conclusions
TV-46000 had a favorable long-term benefit–risk profile in patients with schizophrenia.
Due to different timing of drug launches across countries, published health technology assessment (HTA) findings from one country may impact HTA outcomes in other countries. The aim of our work was to identify the most influential HTA bodies by analyzing to what extent HTA bodies cross-reference each other in their HTA reports.
Methods
We analyzed the HTA reports on single drug assessments (SDAs) published by 46 HTA bodies from 28 countries (and cross-country collaborations) with decision dates between January 2011 and November 2023. We searched the identified HTA reports using natural language processing and a predefined set of keywords to identify whether, and to what extent, HTA bodies reference each other in their HTA reports. Additionally, we assessed whether there was a trend over time in the cross-referencing and whether any clusters could be identified.
Results
Based on the analysis of 24,793 SDAs, the National Institute for Health and Care Excellence (NICE) was referenced the most (in 4,198 HTA reports across 39 HTA bodies), followed by the Canadian Agency for Drugs and Technologies in Health (in 2,034 reports across 35 HTA bodies), and the Scottish Medicines Consortium (SMC) (in 1,960 reports across 31 HTA bodies). The HTA bodies that most often referenced other HTAs were the Agency for Health Technology Assessment and Tariff System, the Haute Autorité de santé, and NICE. Seven HTA bodies were not referenced in any HTA report, while four did not reference any other HTA body.
Conclusions
Our research shows that most of the analyzed HTA agencies not only referenced other HTA bodies in their HTA reports but were also referenced by other HTA bodies. The most often referenced HTA agencies were mostly from English-speaking countries, were well recognized, and had well-defined methodologies.
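The keyword-based cross-referencing detection described in the Methods can be sketched as a simple text search: for each report, check whether any of an HTA body's name variants appears. This is an illustrative reconstruction, not the study's actual pipeline; the keyword list and sample texts below are invented for demonstration, and the real analysis used a much larger predefined keyword set over 24,793 reports.

```python
import re
from collections import Counter

# Illustrative keyword variants for three HTA bodies; the study's
# predefined set was larger and covered 46 bodies.
HTA_KEYWORDS = {
    "NICE": ["National Institute for Health and Care Excellence", "NICE"],
    "SMC": ["Scottish Medicines Consortium", "SMC"],
    "CADTH": ["Canadian Agency for Drugs and Technologies in Health", "CADTH"],
}

def count_references(report_texts):
    """Count how many reports mention each HTA body at least once."""
    counts = Counter()
    for text in report_texts:
        for body, keywords in HTA_KEYWORDS.items():
            # Word-boundary match avoids counting e.g. "NICE" inside "NICEST".
            if any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in keywords):
                counts[body] += 1
    return counts

reports = [
    "We considered the appraisal published by NICE in 2019.",
    "The Scottish Medicines Consortium accepted the drug for restricted use.",
    "No external assessments were referenced.",
]
print(count_references(reports))
```

Counting each body at most once per report, as here, matches the "referenced in N HTA reports" framing of the Results rather than total mention counts.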
Background: Interest in artificial intelligence (AI) and machine learning (ML) has been growing in neuroradiology, but there is limited knowledge of how this interest has manifested in research, or of the field’s trends, challenges, and future directions. Methods: The American Journal of Neuroradiology was queried for original research articles published from inception (Jan. 1, 1980) to Sept. 19, 2022 that contained any of the following key terms: “machine learning”, “artificial intelligence”, or “radiomics”. Articles were screened, categorized into Statistical Modelling (Type 1), AI/ML Development (Type 2), or End-user Application (Type 3), and then bibliometrically analyzed. Results: A total of 124 articles were identified, with 85% being non-integration focused (Type 1 n = 41, Type 2 n = 65) and the remainder (n = 18) being Type 3. The total number of articles published grew two-fold in the last five years, with Type 2 articles mainly driving this growth. While most (66%) Type 2 articles were led by a radiologist, of whom 55% possessed a postgraduate degree, only a minority of Type 2 articles addressed bias (15%) and explainability (20%). Conclusions: The results of this study highlight areas for improvement but also strengths that stakeholders can consider when promoting the shift towards integrating practical AI/ML solutions in neuroradiology.
Glauconite from the oxidized and reduced zones of soil-geologic columns at two Coastal Plain sites, one in Maryland and one in New Jersey, was examined by Mössbauer spectroscopy. The data indicate that glauconite in the reduced zones had a higher proportion of its structural iron in the ferrous, as opposed to the ferric state. The Fe2+/Fe3+ ratio ranged from 0 to 0.2 for the glauconite from the oxidized zone and was about 0.35 for the glauconite in the reduced zones. Despite the presence of pyrite in the reduced zone, which might be expected to make ferric iron unstable because of the presence of sulfide S, about 75% of the Fe in the glauconite in the reduced zone was in the ferric state. Thin section analysis showed some glauconite in the reduced zones to be intimately associated with pyrite and some aggregates of fine pyrite crystals were locally present in cracks in glauconite pellets. In the oxidized zones, pyrite was absent and the glauconite was more yellow under plane-polarized light, as opposed to more green for the glauconite in the reduced zones. These data indicate that reports of studies of glauconite should stipulate whether samples are from the oxidized or reduced zone of soil-geologic columns.
In 2023, prospection of a dried-out lake near Papowo Biskupie in north-central Poland identified substantial deposits of bronze artefacts. Excavation revealed further deposits and dozens of human skeletons that date from 1000–400 BC, suggesting that the site held particular significance as a place for sacrificial offerings in the Lusatian culture.
The role of lay health workers in data collection for clinical and translational research studies is not well described. We explored lay health workers as data collectors in clinical and translational research studies. We also present several methods for examining their work, i.e., qualitative interviews, fidelity checklists, and rates of unusable/missing data.
Methods:
We conducted 2 randomized, controlled trials that employed lay health research personnel (LHRs) who were employed by community-based organizations. In one study, n = 3 Latina LHRs worked with n = 107 Latino diabetic participants. In another study, n = 6 LHRs worked with n = 188 Cambodian American refugees with depression. We investigated proficiency in biological, behavioral, and psychosocial home-based data collection conducted by LHRs. We also conducted in-depth interviews with LHRs to explore their experience in this research role. Finally, we described the training, supervision, and collaboration needed for LHRs to be successful in their research role.
Results:
Independent observers reported a very high degree of fidelity to technical data collection protocols (>95%) and low rates of missing/unusable data (1.5%–11%). Qualitative results show that trust, training, communication, and supervision are key and that LHRs report feeling empowered by their role. LHR training included various content areas over several weeks, with special attention to LHR and participant safety. Training and supervision from both the academic researchers and the staff at the community-based organizations were necessary and had to be well coordinated.
Conclusions:
Carefully selected, trained, and supervised LHRs can collect sophisticated data for community-based clinical and translational research.
Routine patient care data are increasingly used for biomedical research, but such “secondary use” data have known limitations, including their quality. When leveraging routine care data for observational research, developing audit protocols that can maximize informational return and minimize costs is paramount.
Methods:
For more than a decade, the Latin America and East Africa regions of the International epidemiology Databases to Evaluate AIDS (IeDEA) consortium have been auditing the observational data drawn from participating human immunodeficiency virus clinics. Since our earliest audits, where external auditors used paper forms to record audit findings from paper medical records, we have streamlined our protocols to obtain more efficient and informative audits that keep up with advancing technology while reducing travel obligations and associated costs.
Results:
We present five key lessons learned from conducting data audits of secondary-use data from resource-limited settings for more than 10 years and share eight recommendations for other consortia looking to implement data quality initiatives.
Conclusion:
After completing multiple audit cycles in both the Latin America and East Africa regions of the IeDEA consortium, we have established a rich reference for data quality in our cohorts, as well as large, audited analytical datasets that can be used to answer important clinical questions with confidence. By sharing our audit processes and how they have been adapted over time, we hope that others can develop protocols informed by our lessons learned from more than a decade of experience in these large, diverse cohorts.
To measure the impact of an automated hand hygiene monitoring system (AHHMS) and an intervention program of complementary strategies on hand hygiene (HH) performance in both acute-care and long-term care (LTC) units.
Single Veterans Affairs Medical Center (VAMC), with 2 acute-care units and 6 LTC units.
Methods:
An AHHMS that provides group HH performance rates was implemented on 8 units at a VAMC from March 2021 through April 2022. After a 4-week baseline period and 2.5-week washout period, the 52-week intervention period included multiple evidence-based components designed to improve HH compliance. Unit HH performance rates were expressed as the number of dispenses (events) divided by the number of patient room entries and exits (opportunities) × 100. Statistical analysis was performed with a Poisson general additive mixed model.
Results:
During the 4-week baseline period, the median HH performance rate was 18.6 (95% CI, 16.5–21.0) for all 8 units. During the intervention period, the median HH rate increased to 21.6 (95% CI, 19.1–24.4; P < .0001), and during the last 4 weeks of the intervention period (exactly 1 year after baseline), the 8 units exhibited a median HH rate of 25.1 (95% CI, 22.2–28.4; P < .0001). The median HH rate increased from 17.5 to 20.0 (P < .0001) in LTC units and from 22.9 to 27.2 (P < .0001) in acute-care units.
Conclusions:
The intervention was associated with increased HH performance rates for all units. The performance of acute-care units was consistently higher than LTC units, which have more visitors and more mobile veterans.
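The unit HH performance rate defined in the Methods (dispenses divided by room entries and exits, × 100) is straightforward to compute from the AHHMS counts. The function below is a minimal sketch of that formula; the weekly counts in the example are invented for illustration and are not study data.

```python
def hh_rate(dispenses, opportunities):
    """Hand hygiene performance rate: dispensing events per 100
    room entries/exits (opportunities), as defined in the Methods."""
    if opportunities == 0:
        raise ValueError("no recorded room entries/exits")
    return 100 * dispenses / opportunities

# Hypothetical weekly counts for one unit.
print(round(hh_rate(930, 5000), 1))  # → 18.6
```

Note this is a group-level rate: the system counts dispenses and door events per unit, so it cannot attribute individual compliance, which is why the study models rates (here, via a Poisson model) rather than per-person adherence.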
It has been previously identified that levels of peripheral inflammatory proteins, such as cytokines, are altered in people with schizophrenia spectrum disorders (SSD).
Objectives
As there is considerable inconsistency in the literature with respect to how inflammatory profiles differ between acute and chronic stages of SSD, a systematic review and network meta-analysis was performed.
Methods
Records from CINAHL, the Cochrane Central Register of Controlled Trials, EMBASE, PubMed, and PsycINFO were systematically searched from inception until 31 March 2022 for published studies that had measured levels of inflammatory proteins in cases of SSD and healthy controls. Pairwise and network meta-analyses were performed to determine whether there were significant differences in mean peripheral protein concentrations between acute SSD, chronic SSD, and healthy controls.
Results
After application of the screening process, 215 articles were included for data analysis. One group of markers was consistently elevated (p<0·05) in both acute and chronic SSD relative to healthy controls; this group comprised interleukin (IL)-1β, IL-1 receptor antagonist (IL-1RA), soluble interleukin-2 receptor (sIL-2R), IL-6, IL-8, IL-10, tumor necrosis factor (TNF)-α, and high-sensitivity C-reactive protein (hsCRP). A second group of markers was inconsistently altered between illness stages: IL-2 and interferon (IFN)-γ were significantly elevated (p<0·05) in acute SSD, whilst IL-4, IL-12, and IFN-γ were significantly decreased (p<0·05) in chronic SSD.
Conclusions
These results indicate that a baseline level of inflammatory protein alteration occurs in SSD throughout the course of illness. This was evident from the group of markers that were consistently elevated in acute and chronic SSD (e.g., IL-6), representing possible trait markers. Moreover, superimposed immune activity may occur in acute SSD, given the group of possible state markers that were increased only in acute illness (e.g., IFN-γ). Further research is required to elucidate whether these peripheral changes are reflected within the central nervous system.
In the few weight loss studies assessing diet quality, improvements have been minimal and recommended calculation methods have not been used. This secondary analysis of a parallel group randomised trial (registered: https://clinicaltrials.gov/ct2/show/NCT03367936) assessed whether self-monitoring with feedback (SM + FB) v. self-monitoring alone (SM) improved diet quality. Adults with overweight/obesity (randomised: SM n 251, SM + FB n 251; analysed SM n 170, SM + FB n 186) self-monitored diet, physical activity and weight. Real-time, personalised feedback, delivered via a study-specific app up to three times daily, was based on reported energy, fat and added sugar intake. Healthy Eating Index 2015 (HEI-2015) scores were calculated from 24-hour recalls. Higher scores represent better diet quality. Data were collected August 2018 to March 2021 and analysed spring 2022. The sample was mostly female (78·9 %) and white (85·4 %). At baseline, HEI-2015 total scores and bootstrapped 95 % CI were similar by treatment group (SM + FB: 63·11 (60·41, 65·24); SM: 61·02 (58·72, 62·81)) with similar minimal improvement observed at 6 months (SM + FB: 65·42 (63·30, 67·20); SM: 63·19 (61·22, 64·97)) and 12 months (SM + FB: 63·94 (61·40, 66·29); SM: 63·56 (60·81, 65·42)). Among those who lost ≥ 5 % of baseline weight, HEI-2015 scores improved (baseline: 62·00 (58·94, 64·12); 6 months: 68·02 (65·41, 71·23); 12 months: 65·93 (63·40, 68·61)). There was no effect of the intervention on diet quality change. Clinically meaningful weight loss was related to diet quality improvement. Feedback may need to incorporate more targeted nutritional content.
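The bootstrapped 95 % CIs reported for the HEI-2015 scores can be produced with a percentile bootstrap: resample the scores with replacement, take the mean of each resample, and read off the 2.5th and 97.5th percentiles. This is a generic sketch of that technique, not the trial's analysis code, and the scores below are made up for illustration.

```python
import random
import statistics

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    n = len(values)
    # Means of n_boot resamples (with replacement), sorted for percentiles.
    means = sorted(
        statistics.fmean(rng.choices(values, k=n)) for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical HEI-2015 total scores (0-100 scale) for one group.
scores = [58.2, 61.5, 63.0, 64.8, 59.9, 66.1, 62.4, 60.7, 65.3, 63.8]
lo, hi = bootstrap_ci(scores)
print(f"mean {statistics.fmean(scores):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The percentile method makes no normality assumption, which suits bounded index scores like the HEI-2015; a real analysis would typically use far more resamples and a bias-corrected variant.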
Background: We aim to assess the effect of simultaneous acute code stroke activation (ACSA) in patients undergoing reperfusion therapies in the emergency department on home time at 90 days. Methods: We assessed ACSA over 20 months from the QuICR (Quality Improvement and Clinical Research Alberta Stroke Program) Registry. We defined simultaneous reperfusion therapy as ACSA within 60 min of the arrival of any patient receiving intravenous thrombolysis, or ACSA within 150 min of the arrival of any patient receiving endovascular thrombectomy (based on the Canadian Triage and Acuity Scale and average local door-to-needle and door-to-puncture times). Results: A total of 2607 ACSA occurred at a mean±SD of 130.8±17.1 per month during the study period. Of these, 545 (20.9%) underwent acute reperfusion therapy, with a mean age of 70.6±14.2 years; 45.9% (n=254) were female, and the median (IQR) NIHSS was 13 (8–18). Simultaneous reperfusion therapies occurred in 189 (34.6%). There was no difference in the median door-to-CT time between the simultaneous (16 min, IQR 11–23) and non-simultaneous (15 min, IQR 11–21; p=0.3) activations. There was no difference in the median home time at 90 days between the two groups. Conclusions: Simultaneous ACSA occurs in one-third of patients receiving acute reperfusion therapies. An optimal workflow may help mitigate the clinical and system burden associated with simultaneity.
Precision Medicine is an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle. Autoimmune diseases are those in which the body’s natural defense system loses discriminating power between its own cells and foreign cells, causing the body to mistakenly attack healthy tissues. These conditions are very heterogeneous in their presentation and therefore difficult to diagnose and treat. Achieving precision medicine in autoimmune diseases has been challenging due to the complex etiologies of these conditions, involving an interplay between genetic, epigenetic, and environmental factors. However, recent technological and computational advances in molecular profiling have helped identify patient subtypes and molecular pathways which can be used to improve diagnostics and therapeutics. This review discusses the current understanding of the disease mechanisms, heterogeneity, and pathogenic autoantigens in autoimmune diseases gained from genomic and transcriptomic studies and highlights how these findings can be applied to better understand disease heterogeneity in the context of disease diagnostics and therapeutics.
Although clozapine is the most efficacious medication for treatment-refractory schizophrenia, not all patients will have an adequate response. Optimising clozapine dose using therapeutic drug monitoring could therefore maximise response.
Aims
Using individual patient data, we undertook a receiver operating characteristic (ROC) curve analysis to determine an optimal therapeutic range for clozapine levels to guide clinical practice.
Method
We conducted a systematic review of PubMed, PsycINFO and Embase for studies that provided individual participant level data on clozapine levels and response. These data were analysed using ROC curves to determine the prediction performance of plasma clozapine levels for treatment response.
Results
We included data on 294 individual participants from nine studies. ROC analysis yielded an area under the curve of 0.612. The clozapine level at the point of optimal diagnostic benefit was 372 ng/mL; at this level, response sensitivity was 57.3% and specificity was 65.7%. The interquartile range for treatment response was 223–558 ng/mL. There was no improvement in ROC performance with mixed models including patient gender, age, or length of trial. Clozapine dose and the clozapine concentration-to-dose ratio did not meaningfully predict response to clozapine.
Conclusions
Clozapine dose should be optimised based on therapeutic clozapine levels. We found that a range between 250 and 550 ng/mL could be recommended, while noting that a level >350 ng/mL is optimal for response. Although some patients may not respond without clozapine levels >550 ng/mL, the benefits should be weighed against the increased risk of adverse drug reactions.
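The ROC-derived "point of optimal diagnostic benefit" in the Results is typically the cutoff maximizing Youden's J statistic (sensitivity + specificity − 1) over candidate thresholds. The sketch below illustrates that selection on synthetic data; the levels and responses are invented, not the pooled patient data from the nine studies.

```python
def youden_threshold(levels, responded):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1.
    levels: plasma clozapine levels (ng/mL); responded: 1 if response, else 0."""
    best_j, best_cut = -1.0, None
    pos = sum(responded)                  # number of responders
    neg = len(responded) - pos            # number of non-responders
    for cut in sorted(set(levels)):
        # Classify "predicted responder" as level >= cut.
        tp = sum(1 for x, y in zip(levels, responded) if x >= cut and y == 1)
        tn = sum(1 for x, y in zip(levels, responded) if x < cut and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Synthetic example: responders tend to have higher plasma levels.
levels = [150, 200, 250, 300, 360, 400, 450, 500, 550, 600]
responded = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
cut, j = youden_threshold(levels, responded)
print(cut, round(j, 2))  # → 400 0.83
```

The modest AUC reported (0.612) is a reminder that even the optimal cutoff leaves substantial overlap between responders and non-responders, consistent with the reported 57.3% sensitivity and 65.7% specificity at 372 ng/mL.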
The US craft brewery industry has grown steadily in recent years before the 2020 COVID-19 pandemic. The majority of small, independently owned craft breweries rely on tasting rooms for revenues and profits. Using data collected from a survey of tasting room visitors from 21 craft breweries in New York, this research investigates factors influencing visitors’ customer satisfaction (CS) and the link between brewery tasting room CS and sales performance. The results show that brewery interior ambience, beer tasting execution, and friendliness and knowledge of servers are the main factors influencing CS in tasting rooms. Furthermore, results suggest that higher CS levels increase visitors’ purchase likelihood and beer purchase amounts (by volume and value). These findings indicate that breweries should focus on such factors as strengthening staff training, enhancing tasting room ambience, and improving beer tasting execution that have the highest positive influence on CS to increase sales. This study has implications for the rapidly growing craft brewery industry in the USA.
Political debates are structured by underlying conflict dimensions, such as left-right and economic and cultural ideology, which form the basis for voter choice and party competition. However, we know little about how voters arrive at perceptions of parties' positions on these dimensions. We examine how the emphasis parties place on the different issues that make up a higher-level ideological dimension affects perceptions of their position on that dimension. Using two population-based survey experiments, we present respondents with either short or long statements that communicate the same issue stances. We then test whether the length of statements affects positional perceptions on the higher-level dimension. The empirical results show support for our hypotheses and imply that political parties – and the context in which they compete – can affect their perceived position even if underlying issue stances remain stable.
To determine how engagement of the hospital and/or vendor with performance improvement strategies combined with an automated hand hygiene monitoring system (AHHMS) influence hand hygiene (HH) performance rates.
The study was conducted in 58 adult and pediatric inpatient units located in 10 hospitals.
Methods:
HH performance rates were estimated using an AHHMS. Rates were expressed as the number of soap and alcohol-based hand rub portions dispensed divided by the number of room entries and exits. Each hospital self-assigned to one of the following intervention groups: AHHMS alone (control group), AHHMS plus clinician-based vendor support (vendor-only group), AHHMS plus hospital-led unit-based initiatives (hospital-only group), or AHHMS plus clinician-based vendor support and hospital-led unit-based initiatives (vendor-plus-hospital group). Each hospital unit produced 1–2 months of baseline HH performance data immediately after AHHMS installation before implementing initiatives.
Results:
Hospital units in the vendor-plus-hospital group had a statistically significant increase of at least 46% in HH performance compared with units in the other 3 groups (P ≤ .006). Units in the hospital-only group achieved a 1.3% increase in HH performance compared with units that had the AHHMS alone (P = .950). Units with the AHHMS plus other initiatives each had a larger change in HH performance rates over their baseline than those in the AHHMS-alone group (P < .001).
Conclusions:
AHHMS combined with clinician-based vendor support and hospital-led unit-based initiatives resulted in the greatest improvements in HH performance. These results illustrate the value of a collaborative partnership between the hospital and the AHHMS vendor.
Background: Clinical outcomes following childhood arterial ischaemic stroke (AIS) depend on age at the time of stroke, infarct size, and location. However, other important variables, including health inequity and stroke onset to arrival times, remain inadequately addressed. This study reported trends in health inequity and stroke onset to arrival times, along with proximity to a stroke centre, in Canada. Methods: Childhood AIS patients (N=234) with stroke onset between 2004 and 2019 at a Level 2 (comprehensive) stroke centre were included. Measures of material deprivation included household income, education, single-parent families, and housing quality. Patients were stratified into 3 cohorts (by date of stroke onset), and postal codes were categorized as minimal, moderate, or most deprived neighbourhoods. Results: Over the 16-year period, an increasing number of patients arrived from the most deprived neighbourhoods. Although there was no significant association between material deprivation and stroke onset to arrival time, an increasing number of patients presented within 6 hours of stroke onset (χ2 = 13.8, p = 0.008). Furthermore, most patients arrived from urban neighbourhoods. Conclusions: The faster stroke onset to arrival trend is encouraging; however, the material deprivation trends are concerning. Thus, future studies exploring post-stroke outcomes should consider material deprivation, stroke onset to arrival times, and geographical proximity.
To determine how pharmacists with formal antimicrobial stewardship program (ASP) responsibilities prioritize their time and how pharmacists without formal ASP responsibilities contribute to ASP activities.
Design:
A nationwide survey.
Respondents:
Members of the American College of Clinical Pharmacy who subscribe to the following practice and research network e-mail listservs: infectious diseases, adult medicine, cardiology, critical care, hematology–oncology, immunology and transplantation, and pediatrics.
Methods:
A survey was distributed via listservs. Respondents were asked about their personal and institutional demographics and ASP activities.
Results:
In total, 245 pharmacists responded: 135 with formal ASP responsibilities and 110 without. Although most respondents had completed a general pharmacy residency (85%), only 20% had completed an infectious diseases (ID) specialty residency. Among pharmacists with formal ASP responsibilities, one-third had no formal training or certification in ID or antimicrobial stewardship. Pharmacists without formal ASP responsibilities spent ∼12.5% of their time per week on ASP activities, whereas pharmacists with formal ASP responsibilities spent 28% of their time performing non-ASP activities. Pharmacists with formal ASP responsibilities were more likely than those without to perform antibiotic guideline development (P < .001), antibiotic-related education (P = .002), and direct notification of rapid diagnostic results (P = .018). Pharmacists with formal ASP responsibilities but without formal ID training or certification spent less time on ASP activities and were more likely to perform lower-level interventions.
Conclusions:
Many ASP activities are being performed by pharmacists without formal ID training. To ensure the future success of ASPs, pharmacists with formal ASP responsibilities should have adequate training to meet more advanced metrics, and more pharmacists without formal ASP responsibilities should be included in basic interventions.
Chinese morphological awareness is conceptualized as a multidimensional construct, but there is a lack of understanding of how its dimensions are related. Latent change score modeling was used to examine the bivariate relationships between two facets of oral morphological awareness, namely morpheme awareness and structure awareness, in Chinese children in grades one through three. Two hundred and three children in China completed morpheme awareness (homonym) and structure awareness (lexical compounding) tasks across the three grades (M = 6.66, SD = .30 at the first time point). Results indicated that growth in structure awareness was predicted in part by previous levels of morpheme awareness, suggesting that morpheme awareness leads the growth of structure awareness. Educational implications are discussed.