We construct an explicit algebraic example of a subshift of finite type over a group $\Gamma $ with an invariant Markov measure which has completely positive sofic entropy (with respect to ‘most’ sofic approximations) and yet does not have a direct Bernoulli factor because its model spaces shatter into exponentially many clusters of sub-exponential size. The example and its analysis are related to random low-density parity-check (LDPC) codes.
Before COVID-19, breast cancer patients in the UK typically received 15 radiotherapy (RT) fractions over three weeks. During the pandemic, the adoption of a 5-fraction prescription and of more advanced treatment techniques, such as surface-guided RT, changed the duration and number of hospital visits for patients accessing treatment. This work sought to understand how breast cancer patients’ time in the RT department changed between 2018 and 2023.
Methods:
Appointments for CT simulation, mould room, and RT, from January 2018 to December 2023, were extracted from the Mosaiq® Oncology Management System. Appointments lasting between 5 minutes and 5 hours were analysed. Total visit time was calculated from check-in to completion on the quality checklist.
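As a rough illustration of this preprocessing step, the sketch below computes total visit time from check-in to quality-checklist completion and applies the 5-minute-to-5-hour filter; the column names are assumptions for illustration, not the actual Mosaiq export schema.

```python
import pandas as pd

# Illustrative sketch of the visit-time calculation described above;
# column names are assumptions, not the actual Mosaiq export schema.
appts = pd.DataFrame({
    "check_in":     pd.to_datetime(["2023-01-09 09:00", "2023-01-09 13:10"]),
    "qcl_complete": pd.to_datetime(["2023-01-09 09:42", "2023-01-09 13:12"]),
})

# Total visit time: check-in to completion on the quality checklist.
appts["visit_minutes"] = (
    appts["qcl_complete"] - appts["check_in"]
).dt.total_seconds() / 60

# Keep appointments lasting between 5 minutes and 5 hours.
valid = appts[appts["visit_minutes"].between(5, 300)]
print(valid)
```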
Results:
In total, 29,523 attendances were analysed over 6 years. Average time spent in the department decreased during the pandemic but has since increased 12·4% above pre-COVID-19 levels. Early morning and late afternoon appointments resulted in the shortest visits, with early afternoon appointments leading to the longest visits. On average, patients spend the longest in the department on a Monday, and the least amount of time on a Friday. Friday was the least common day to start a 15-fraction treatment, whereas Tuesday and Friday were equally uncommon for the 5-fraction regime.
Conclusions:
During the COVID-19 pandemic, the number of visits a patient made for breast cancer RT and related services dropped, and it remained lower post-COVID-19 because fewer treatment fractions were prescribed. Average time spent in the department initially decreased but has since increased beyond pre-COVID-19 levels.
Medicare claims are frequently used to study Clostridioides difficile infection (CDI) epidemiology. However, they lack specimen collection and diagnosis dates to assign location of onset. Algorithms to classify CDI onset location using claims data have been published, but the degree of misclassification is unknown.
Methods:
We linked patients with laboratory-confirmed CDI reported to four Emerging Infections Program (EIP) sites from 2016–2021 to Medicare beneficiaries with fee-for-service Part A/B coverage. We calculated sensitivity of ICD-10-CM codes in claims within ±28 days of EIP specimen collection. CDI was categorized as hospital-onset, long-term care facility-onset, or community-onset using three Medicare claims-based algorithms that rely on claim type, ICD-10-CM code position, duration of hospitalization, and ICD-10-CM diagnosis code present-on-admission indicators. We assessed concordance of EIP case classifications, based on chart review and specimen collection date, with claims case classifications using Cohen’s kappa statistic.
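Concordance and Cohen’s kappa as used here are standard; a minimal sketch with invented toy labels, using scikit-learn’s cohen_kappa_score, is:

```python
from sklearn.metrics import cohen_kappa_score

# Toy sketch of the concordance assessment; labels below are invented.
# Categories: hospital-onset (HO), LTCF-onset (LTCFO), community-onset (CO).
eip    = ["HO", "CO", "LTCFO", "CO", "HO", "CO"]
claims = ["HO", "HO", "LTCFO", "CO", "HO", "CO"]

agreement = sum(a == b for a, b in zip(eip, claims)) / len(eip)
kappa = cohen_kappa_score(eip, claims)  # chance-corrected agreement
print(f"concordance = {agreement:.0%}, kappa = {kappa:.2f}")
```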
Results:
Of 12,671 CDI cases eligible for linkage, 9,032 (71%) were linked to a single, unique Medicare beneficiary. Compared to EIP, sensitivity of CDI ICD-10-CM codes was 81%; codes were more likely to be present for hospitalized patients (93.0%) than for those who were not hospitalized (56.2%). Concordance between EIP and Medicare claims algorithms ranged from 68% to 75%, depending on the algorithm used (κ = 0.56–0.66).
Conclusion:
ICD-10-CM codes in Medicare claims data had high sensitivity compared to laboratory-confirmed CDI reported to EIP. Claims-based epidemiologic classification algorithms had moderate concordance with EIP classification of onset location. Misclassification of CDI onset location using Medicare algorithms may bias findings of claims-based CDI studies.
Racial justice is widely seen as a central moral and political ideal of our time, especially on the liberal-egalitarian left. And racial justice goes hand in hand with racial equality. The centrality of these ideals would be hard to justify if they had no bearing on material or economic inequality, or applied solely to semiotic and cultural issues. But we argue that, at present, the only plausible basis for understanding racial equality as a distinctive aim for the economic domain—rather than a mere implication of more general egalitarian or progressive principles—rests on minimal state, right-libertarian foundations. As such, racial equality is a strange focus for the left.
This chapter provides an overview of selected studies assessing technology-aided programs to promote independent leisure and communication or combinations of independent leisure, communication, and daily activities in people with mild to moderate intellectual disability often associated with sensory and/or motor impairments. The studies included in the overview offer an opportunity to describe the development of those programs, the technology solutions used to support them, and their outcomes in terms of participants’ independent performance. Following the presentation of the programs and their outcomes, the discussion focuses on three main issues: (a) effectiveness of the programs and methodological considerations, (b) accessibility and affordability of the programs, and (c) implications of the programs for professionals working in daily contexts. With regard to the last issue, an effort was made to examine ethical and moral questions that may accompany the possible decisions of professionals to adopt those programs in daily contexts.
Background:
Medicare claims are frequently used to study Clostridioides difficile infection (CDI) epidemiology. Categorizing CDI based on location of onset and potential exposure is critical in understanding transmission patterns and prevention strategies. While claims data are well suited for identifying prior healthcare utilization exposures, they lack specimen collection and diagnosis dates to assign likely location of onset. Algorithms to classify CDI onset and healthcare association using claims data have been published, but the degree of misclassification is unknown.
Methods:
We linked patients with laboratory-confirmed CDI reported to four Emerging Infections Program (EIP) sites from 2016–2020 to Medicare beneficiaries using residence, birth date, sex, and hospitalization and/or healthcare exposure dates. Uniquely linked patients with fee-for-service Medicare A/B coverage and complete EIP case report forms were included. Patients with a claims CDI diagnosis code within ±28 days of a positive CDI test reported to EIP were categorized as hospital-onset (HO), long-term care facility-onset (LTCFO), or community-onset (CO; either healthcare facility-associated [COHCFA] or community-associated [CA]) using a previously published algorithm based on claim type, ICD-10-CM code position, and duration of hospitalization (if applicable). EIP classifies CDI into these categories using the positive specimen collection date and other information from chart review (e.g., admission/discharge dates). We assessed concordance of EIP and claims case classifications using Cohen’s kappa.
Results:
Of 10,002 eligible EIP-identified CDI cases, 7,064 were linked to a unique beneficiary; 3,451 met Medicare A/B fee-for-service coverage inclusion criteria. Of these, 650 (19%) did not have a claims diagnosis code within ±28 days of the EIP specimen collection date (Table); 48% (313/650) of those without a claims diagnosis code were categorized by EIP as CA CDI. Among those with a CDI diagnosis code, concurrence of claims-based and EIP CDI classification was 68% (κ = 0.56). Concurrence was highest for HO and lowest for COHCFA CDI. A substantial number of EIP-classified CO CDIs (30%, Figure) were misclassified as HO using the claims-based algorithm; half of these had a primary ICD-10 diagnosis code of sepsis (226/454; 50%).
Conclusions:
Evidence of CDI in claims data was found for 81% of EIP-reported CDI cases. Medicare classification algorithms concurred with the EIP classification in 68% of cases. Discordance was most common for community-onset CDI patients, many of whom were hospitalized with a primary diagnosis of sepsis. Misclassification of CO CDI as HO may bias findings of claims-based CDI studies.
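For illustration of the ±28-day window used in the methods above, a hedged sketch of the diagnosis-code check might look as follows; the data-frame layout, column names, and undotted ICD-10-CM formatting are assumptions, not the actual CMS or EIP schemas.

```python
from datetime import timedelta
import pandas as pd

# Hypothetical sketch of the +/-28-day window check; the table layout,
# column names, and undotted ICD-10-CM format are assumptions.
def has_cdi_claim_in_window(claims: pd.DataFrame, bene_id: str,
                            specimen_date: pd.Timestamp,
                            window_days: int = 28) -> bool:
    """True if the beneficiary has a CDI ICD-10-CM code (A04.7x, written
    undotted as A047x) on any claim within +/-window_days of specimen
    collection."""
    rows = claims[(claims["bene_id"] == bene_id)
                  & claims["icd10"].str.startswith("A047")]
    lo = specimen_date - timedelta(days=window_days)
    hi = specimen_date + timedelta(days=window_days)
    return bool(rows["claim_date"].between(lo, hi).any())

claims = pd.DataFrame({
    "bene_id": ["B1"], "icd10": ["A0472"],
    "claim_date": pd.to_datetime(["2019-06-10"]),
})
print(has_cdi_claim_in_window(claims, "B1", pd.Timestamp("2019-06-01")))
```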
Dementia is a common and progressive condition whose prevalence is growing worldwide. It is challenging for healthcare systems to provide continuity in clinical services for all patients from diagnosis to death.
Aims
To test whether individuals who are most likely to need enhanced care later in the disease course can be identified at the point of diagnosis, thus allowing targeted intervention.
Method
We used clinical information collected routinely in de-identified electronic patient records from two UK National Health Service (NHS) trusts to identify, at diagnosis, which individuals were at increased risk of needing enhanced care (psychiatric in-patient or intensive ‘crisis’ community care).
Results
We examined the records of a total of 25 326 patients with dementia. A minority (16% in the Cambridgeshire trust and 2.4% in the London trust) needed enhanced care. Patients who needed enhanced care differed from those who did not in age, cognitive test scores and Health of the Nation Outcome Scale scores. Logistic regression discriminated risk, with an area under the receiver operating characteristic curve (AUROC) of up to 0.78 after 1 year and 0.74 after 4 years. We were able to confirm the validity of the approach in two trusts that differed widely in the populations they serve.
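As a schematic of this kind of analysis (not the authors’ code), the following sketch fits a logistic regression on synthetic stand-ins for the predictors and reports the AUROC:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Schematic of the risk-discrimination analysis; features and outcome are
# invented stand-ins for age, cognitive scores, and HoNOS items.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # e.g. age, MMSE, HoNOS
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUROC = {auroc:.2f}")  # the study reports up to 0.78 at 1 year
```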
Conclusions
It is possible to identify, at the time of diagnosis of dementia, individuals most likely to need enhanced care later in the disease course. This permits the development of targeted clinical interventions for this high-risk group.
Phase three trials of the monoclonal antibodies lecanemab and donanemab, which target brain amyloid, have reported statistically significant differences in clinical end-points in early Alzheimer's disease. These drugs are already in use in some countries and are going through the regulatory approval process for use in the UK. Concerns have been raised about the ability of healthcare systems, including those in the UK, to deliver these treatments, considering the resources required for their administration and monitoring.
Aims
To estimate the scale of real-world demand for monoclonal antibodies for Alzheimer's disease in the UK.
Method
We used anonymised patient record databases from two National Health Service trusts for the year 2019 to collect clinical, demographic, cognitive and neuroimaging data for the referred cohorts. Eligibility for treatment was assessed using the inclusion criteria from the clinical trials of donanemab and lecanemab, with consideration given to diagnosis, cognitive performance, cerebrovascular disease and willingness to receive treatment.
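A simplified screening function along these lines is sketched below; the thresholds and field names are illustrative stand-ins, not the exact published trial criteria.

```python
# Illustrative eligibility screen; the thresholds and field names are
# simplified stand-ins, not the exact published trial criteria.
def potentially_eligible(patient: dict) -> bool:
    return (
        patient["diagnosis"] in {"MCI due to AD", "mild AD dementia"}
        and 22 <= patient["mmse"] <= 30          # early-stage cognition
        and not patient["significant_cvd"]       # cerebrovascular disease
        and patient["willing_to_receive_tx"]     # infusions and monitoring
    )

example = {"diagnosis": "mild AD dementia", "mmse": 24,
           "significant_cvd": False, "willing_to_receive_tx": True}
print(potentially_eligible(example))  # True
```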
Results
We examined the records of 82 386 people referred to services covering around 2.2 million people. After applying the trial criteria, we estimate that a maximum of 906 people per year would start treatment with monoclonal antibodies in the two services, equating to 30 200 people if extrapolated nationally.
Conclusions
Monoclonal antibody treatments for Alzheimer's disease are likely to present a significant challenge for healthcare services in terms of neuroimaging capacity and treatment delivery. The data provided here allow health services to understand the potential demand and plan accordingly.
We compared the number of blood-culture events before and after the introduction of a blood-culture algorithm and provider feedback. Secondary objectives were the comparison of blood-culture positivity and negative safety signals before and after the intervention.
Design:
Prospective cohort design.
Setting:
Two surgical intensive care units (ICUs): general and trauma surgery, and cardiothoracic surgery.
Patients:
Patients aged ≥18 years and admitted to the ICU at the time of the blood-culture event.
Methods:
We used an interrupted time series to compare rates of blood-culture events (ie, blood-culture events per 1,000 patient days) before and after the algorithm implementation with weekly provider feedback.
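A minimal sketch of such a segmented Poisson model, fitted to synthetic data rather than the study data, might look like this (using statsmodels; the monthly rates loosely echo the general surgery ICU figures below):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of a segmented (interrupted time series) Poisson model;
# the data frame here is synthetic, not the study data.
np.random.seed(0)
df = pd.DataFrame({
    "events": np.r_[np.random.poisson(100, 24), np.random.poisson(55, 12)],
    "patient_days": 1000,
    "time": np.arange(36),                     # study month
    "post": np.r_[np.zeros(24), np.ones(12)],  # 1 after algorithm start
})

model = smf.poisson(
    "events ~ time + post", data=df,
    offset=np.log(df["patient_days"])          # rate per patient day
).fit()
print(np.exp(model.params["post"]))  # IRR for the intervention step
```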
Results:
The blood-culture event rate decreased from 100 to 55 blood-culture events per 1,000 patient days in the general surgery and trauma ICU (72% reduction; incidence rate ratio [IRR], 0.38; 95% confidence interval [CI], 0.32–0.46; P < .01) and from 102 to 77 blood-culture events per 1,000 patient days in the cardiothoracic surgery ICU (55% reduction; IRR, 0.45; 95% CI, 0.39–0.52; P < .01). We did not observe any differences in average monthly antibiotic days of therapy, mortality, or readmissions between the pre- and postintervention periods.
Conclusions:
We implemented a blood-culture algorithm with data feedback in 2 surgical ICUs, and we observed significant decreases in the rates of blood-culture events without an increase in negative safety signals, including ICU length of stay, mortality, antibiotic use, or readmissions.
Background:
Blood cultures are commonly ordered for patients with a low risk of bacteremia. Liberal blood-culture ordering increases the risk of false-positive results, which can lead to increased length of stay, excess antibiotics, and unnecessary diagnostic procedures. We implemented a blood-culture indication algorithm with data feedback and assessed the impact on ordering volume and percent positivity.
Methods:
We performed a prospective cohort study from February 2022 to November 2022 using historical controls from February 2020 to January 2022. We introduced the blood-culture algorithm (Fig. 1) in 2 adult surgical intensive care units (ICUs). Clinicians reviewed charts of eligible patients with blood cultures weekly to determine whether the blood-culture algorithm was followed, and they provided feedback to the unit medical directors weekly. We defined a blood-culture event as ≥1 blood culture within 24 hours. We excluded patients aged <18 years, patients with an absolute neutrophil count <500, and heart and lung transplant recipients at the time of blood-culture review.
Results:
In total, 7,315 blood-culture events in the preintervention group and 2,506 blood-culture events in the postintervention group met eligibility criteria. The average monthly blood-culture rate decreased from 190 to 142 blood cultures per 1,000 patient days (P < .01) after the algorithm was implemented (Fig. 2). The average monthly blood-culture positivity increased from 11.7% to 14.2% (P = .13). Average monthly days of antibiotic therapy (DOT) decreased from 2,200 in the preintervention period to 1,940 in the postintervention period (P < .01) (Fig. 3). The ICU length of stay did not change from before to after the intervention: 10 days (IQR, 5–18) versus 10 days (IQR, 5–17; P = .63). The in-hospital mortality rate was lower during the postintervention period, but the difference was not statistically significant: 9.24% versus 8.34% (P = .17). All-cause 30-day mortality was significantly lower during the intervention period: 11.9% versus 9.7% (P < .01). The unplanned 30-day readmission percentage was also significantly lower during the intervention period (10.6% vs 7.6%; P < .01). Over the 9-month intervention, we reviewed 916 blood-culture events in 452 unique patients. Overall, 74.6% of blood cultures followed the algorithm. The most common reasons for ordering blood cultures were severe sepsis or septic shock (37%), isolated fever and/or leukocytosis (19%), and documenting clearance of bacteremia (15%) (Table 1). The most common indication among inappropriate blood cultures was isolated fever and/or leukocytosis (53%).
Conclusions:
We introduced a blood-culture algorithm with data feedback in 2 surgical ICUs and observed decreases in blood-culture volume without a negative impact on ICU length of stay or mortality rate.
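For illustration, the ≥1-culture-within-24-hours event definition used above can be sketched as follows; the column names and the rolling-window grouping are assumptions, not the study’s actual code.

```python
import pandas as pd

# Sketch of the blood-culture event definition (>=1 culture within 24 h);
# column names are assumptions.
cultures = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "drawn_at": pd.to_datetime(
        ["2022-03-01 08:00", "2022-03-01 20:00",
         "2022-03-03 09:00", "2022-03-01 10:00"]),
}).sort_values(["patient_id", "drawn_at"])

# Start a new event whenever >24 h have elapsed since the prior culture.
gap = cultures.groupby("patient_id")["drawn_at"].diff()
cultures["event_id"] = (gap.isna() | (gap > pd.Timedelta("24h"))).cumsum()
print(cultures)
```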
Background:
Candida auris is a frequently drug-resistant yeast that can cause invasive disease and is easily transmitted in healthcare settings. Pediatric cases are rare in the United States, with <10 reported before 2022. In August 2021, the first C. auris case in Las Vegas was identified in an adult. By May 2022, 117 cases had been identified across 16 healthcare facilities, including 3 pediatric cases at an acute-care hospital (ACH) with adult cases, representing the first pediatric cluster in the United States. The CDC and the Nevada Division of Public and Behavioral Health (NVDPBH) sought to describe these cases and risk factors for C. auris acquisition.
Methods:
We defined a case as a patient’s first positive C. auris specimen. We reviewed medical records and infection prevention and control (IPC) practices. Environmental sampling was conducted on high-touch surfaces throughout affected adult and pediatric units. Isolate relatedness was assessed using whole-genome sequencing (WGS).
Results:
All 3 pediatric patients were born at the facility and had congenital heart defects. All were aged <6 months when they developed C. auris bloodstream infections; 2 developed C. auris endocarditis. One patient died. Patients overlapped in the pediatric cardiac intensive care unit; 2 did not leave between birth and C. auris infection. Mobile medical equipment was shared between adult and pediatric patients; lapses in cleaning and disinfection of shared mobile medical equipment and environmental surfaces were observed, presenting opportunities for transmission. Overall, 32 environmental samples were collected, and C. auris was isolated from 2 of them, both from an adult unit without current cases. One was a composite sample from an adult patient’s bed handles, railings, tray table, and call buttons; the second was from an adult lift-assistance device. WGS showed that specimens from adult and pediatric cases and environmental isolates were in the same genetic cluster, differing by 2–10 single-nucleotide polymorphisms (SNPs), supporting within-hospital transmission. The pediatric cases varied by 0–3 SNPs; at least 2 were highly related.
Conclusions:
C. auris was likely introduced to the pediatric population from adults via inadequately cleaned and disinfected mobile medical equipment. We made recommendations to ensure adequate cleaning and disinfection and to implement monitoring and audits. No pediatric cases have been identified since. This investigation demonstrates that transmission can occur between unrelated units and populations, and that robust IPC practices throughout the facility are critical for reducing C. auris environmental burden and limiting transmission, including to previously unaffected vulnerable populations such as children.
Dissolved inorganic carbon (DIC) in ocean water is a major sink of fossil fuel-derived CO2. Carbon isotopes in DIC serve as tracers for oceanic water masses, biogeochemical processes, and air-sea gas exchange. We present a time series of surface DIC δ13C and Δ14C values from 2011 to 2022 from Newport Beach, California. This is a continuation of previous time series (Hinger et al. 2010; Santos et al. 2011) that together provide an 18-year record. These data show that DIC Δ14C values have declined by 42‰ and DIC δ13C values by 0.4‰ since 2004. By 2020, DIC Δ14C values were within analytical error of nearby clean atmospheric CO2 Δ14C values. These long-term trends are likely the result of significant fossil fuel-derived CO2 entering surface DIC through air-sea gas exchange. Seasonally, Δ14C values varied by 3.4‰ between 2011 and 2022, while seasonal δ13C values varied by 0.7‰. The seasonal variation in Δ14C values is likely driven by variations in upwelling, surface eddies, and mixed-layer depth. The variation in δ13C values appears to be driven by isotopic fractionation by marine primary producers. The DIC δ13C and Δ14C values also record the influence of the drought that began in 2012 and of a major upwelling event in 2016.
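For readers less familiar with the notation, the δ13C and Δ14C per mil conventions assumed above are, in standard form (after Stuiver and Polach 1977):

```latex
% Standard isotope notation (after Stuiver & Polach 1977); F is the
% fraction modern, \lambda = 1/8267 yr^{-1}, and y is the measurement year.
\[
  \delta^{13}\mathrm{C} =
    \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}
               {(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{VPDB}}} - 1\right)
    \times 10^{3}\ \text{\textperthousand}
\]
\[
  \Delta^{14}\mathrm{C} =
    \left(F\, e^{\lambda\,(1950 - y)} - 1\right)
    \times 10^{3}\ \text{\textperthousand}
\]
```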
The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery (WCPCCS) will be held in Washington, DC, USA, from Saturday, 26 August, 2023 to Friday, 1 September, 2023, inclusive. It will be the largest and most comprehensive scientific meeting dedicated to paediatric and congenital cardiac care ever held. At the time of writing, the Congress has 5,037 registered attendees (and rising) from 117 countries, a truly diverse and international faculty of over 925 individuals from 89 countries, over 2,000 individual abstract and poster presenters from 101 countries, and a Best Abstract Competition featuring 153 oral abstracts from 34 countries. For information about the Congress, please visit www.WCPCCS2023.org. The purpose of this manuscript is to review the activities related to global health and advocacy that will take place at the Congress.
Acknowledging the need for urgent change, we wanted to take the opportunity to bring a common voice to the global community and issue the Washington DC WCPCCS Call to Action on Addressing the Global Burden of Pediatric and Congenital Heart Diseases. A copy of this Call to Action is provided in the Appendix of this manuscript. The Call to Action is an initiative aimed at increasing awareness of the global burden, promoting the development of sustainable care systems, and improving access to high-quality and equitable healthcare for children with heart disease, as well as adults with congenital heart disease, worldwide.
We assessed Oxivir Tb wipe disinfectant residue in a controlled laboratory setting to evaluate whether it contributes to the low environmental contamination observed with SARS-CoV-2. The frequency of viral RNA detection was not statistically different between the intervention and control arms on day 3 (P = 0.14). The viability of environmental contamination is low, and residual disinfectant did not significantly contribute to this low contamination.
Clinical trial processes are unnecessarily inefficient and costly, slowing the translation of medical discoveries into treatments for people living with disease. To reduce redundancies and inefficiencies, a group of clinical trial experts developed a framework for clinical trial site readiness based on existing trial site qualifications from sponsors. The site readiness practices are encompassed within six domains: research team, infrastructure, study management, data collection and management, quality oversight, and ethics and safety. Implementation of this framework for clinical trial sites would reduce inefficiencies in trial conduct and help prepare new sites to enter the clinical trials enterprise, with the potential to improve the reach of clinical trials to underserved communities. Moreover, the framework holds benefits for trial sponsors, contract research organizations, trade associations, trial participants, and the public. For novice sites considering future trials, we provide a framework for site preparation and the engagement of stakeholders. For experienced sites, the framework can be used to assess current practices and inform and engage sponsors, staff, and participants. Details in the supplementary materials provide easy access to key regulatory documents and resources. Invited perspective articles provide greater depth from systems, DEIA (diversity, equity, inclusion, and accessibility), and decentralized-trials perspectives.
Urine cultures collected from catheterized patients have a high likelihood of false-positive results due to colonization. We examined the impact of a clinical decision support (CDS) tool that includes catheter information on test utilization and patient-level outcomes.
Methods:
This before-and-after intervention study was conducted at 3 hospitals in North Carolina. In March 2021, a CDS tool was incorporated into urine-culture order entry in the electronic health record, providing education about indications for culture and suggesting catheter removal or exchange prior to specimen collection for catheters present >7 days. We used an interrupted time-series analysis with Poisson regression to evaluate the impact of CDS implementation on utilization of urinalyses and urine cultures, antibiotic use, and other outcomes during the pre- and postintervention periods.
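A hypothetical sketch of the order-entry logic described above follows; the field names and advisory wording are invented for illustration, not the actual EHR build.

```python
from datetime import date

# Hypothetical sketch of the order-entry check described above; field
# names and advisory wording are invented, not the actual EHR build.
def cds_advisory(catheter_present: bool, insertion_date: date,
                 today: date) -> str:
    """Return the advisory text shown at urine-culture order entry."""
    if not catheter_present:
        return "Review indications for urine culture before ordering."
    dwell_days = (today - insertion_date).days
    if dwell_days > 7:
        return ("Catheter in place >7 days: consider removal or exchange "
                "before specimen collection.")
    return "Review indications for culture in catheterized patients."

print(cds_advisory(True, date(2021, 3, 1), date(2021, 3, 10)))
```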
Results:
The CDS tool was triggered in 38,361 instances of urine cultures ordered across all patients, including 2,133 instances in catheterized patients during the postintervention study period. There was a significant decrease in urine-culture orders (1.4% decrease per month; P < .001) and in antibiotic use for UTI indications (2.3% decrease per month; P = .006), but no significant decline in CAUTI rates in the postintervention period. Clinicians opted for urinary catheter removal in 183 (8.5%) instances. Evaluation of the safety reporting system revealed no apparent increase in safety events related to catheter removal or reinsertion.
Conclusion:
CDS tools can aid in optimizing urine culture collection practices and can serve as a reminder for removal or exchange of long-term indwelling urinary catheters at the time of urine-culture collection.
Until recently, the influence of basal liquid water on the evolution of buried glaciers in Mars' mid-latitudes was assumed to be negligible, because the latter stages of Mars' Amazonian period (3 Ga to present) have long been thought to have been as cold and dry as today. Recent identifications of several landforms interpreted as eskers associated with these young (100s Ma) glaciers call this assumption into doubt. They indicate basal melting (at least locally and transiently) of their parent glaciers. Although rare, they demonstrate a more complex mid-to-late Amazonian environment than was previously understood. Here, we discuss several open questions posed by the existence of glacier-linked eskers on Mars, including their global-scale abundance and distribution, the drivers and dynamics of melting and drainage, and the fate of meltwater upon reaching the ice margin. Such questions provide rich opportunities for collaboration between the Mars and Earth cryosphere research communities.
Many clinical trials leverage real-world data. Typically, these data are manually abstracted from electronic health records (EHRs) and entered into electronic case report forms (CRFs), a time- and labor-intensive process that is also error-prone and may miss information. Automated transfer of data from EHRs to CRFs has the potential to reduce the burden of data abstraction and entry as well as improve data quality and safety.
Methods:
We conducted a test of automated EHR-to-CRF data transfer for 40 participants in a clinical trial of hospitalized COVID-19 patients. We determined which coordinator-entered data could be automated from the EHR (coverage), and how often values from the automated EHR feed exactly matched the values entered by study personnel for the actual study (concordance).
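As a toy illustration of the two metrics just defined, with invented field-level data:

```python
# Toy sketch of coverage and concordance as defined above; the field
# names and values are invented.
auto_values  = {"hr_day1": 88, "temp_day1": 37.2}   # automated EHR feed
coord_values = {"hr_day1": 88, "temp_day1": 37.2,
                "symptom_onset": "2021-01-04"}      # coordinator-entered

automatable = set(auto_values) & set(coord_values)
coverage = len(automatable) / len(coord_values)
concordance = sum(
    auto_values[k] == coord_values[k] for k in automatable
) / len(automatable)
print(f"coverage = {coverage:.0%}, concordance = {concordance:.0%}")
```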
Results:
The automated EHR feed populated 10,081 of 11,952 (84%) coordinator-completed values. For fields where both the automation and study personnel provided data, the values matched exactly 89% of the time. Concordance was highest for daily laboratory results (94%), which also required the most personnel resources (30 minutes per participant). In a detailed analysis of 196 instances where the personnel-entered and automation-entered values differed, both a study coordinator and a data analyst agreed that 152 (78%) were the result of data entry error.
Conclusions:
An automated EHR feed has the potential to significantly decrease study personnel effort while improving the accuracy of CRF data.