Aerosol-cloud interactions constitute the largest source of uncertainty in assessments of anthropogenic climate change. This uncertainty arises in part from the difficulty of measuring the vertical distribution of aerosols, for which only sporadic vertically resolved observations are available. We often have to settle for less informative, vertically aggregated proxies such as aerosol optical depth (AOD). In this work, we develop a framework for the vertical disaggregation of AOD into extinction profiles, that is, the measure of light extinction throughout an atmospheric column, using readily available vertically resolved meteorological predictors such as temperature, pressure, or relative humidity. Using Bayesian nonparametric modeling, we devise a simple Gaussian process prior over aerosol vertical profiles and update it with AOD observations to infer a distribution over vertical extinction profiles. To validate our approach, we use ECHAM-HAM aerosol-climate model data, which offers self-consistent simulations of meteorological covariates, AOD, and extinction profiles. Our results show that, while very simple, our model is able to reconstruct realistic extinction profiles with well-calibrated uncertainty, outperforming by an order of magnitude the idealized baseline typically used in satellite AOD retrieval algorithms. In particular, the model demonstrates a faithful reconstruction of extinction patterns arising from aerosol water uptake in the boundary layer. Observations, however, suggest that other extinction patterns, due to aerosol mass concentration, particle size, and radiative properties, might be more challenging to capture and require additional vertically resolved predictors.
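Since AOD is (approximately) the vertical integral of extinction, it is a linear functional of the profile, so updating a Gaussian process prior with an AOD observation has a closed form. The sketch below illustrates just that conditioning step; the grid, kernel, prior mean, and noise level are invented for illustration, and the meteorological predictors and any positivity constraint from the paper's actual model are omitted.

```python
import numpy as np

# Altitude grid (km) over which the extinction profile is discretized.
z = np.linspace(0.0, 10.0, 60)
dz = z[1] - z[0]

def sq_exp_kernel(x1, x2, variance=0.25, lengthscale=1.5):
    """Squared-exponential covariance between altitude points."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Prior mean: extinction decaying with altitude (an illustrative choice).
m = 0.1 * np.exp(-z / 2.0)
K = sq_exp_kernel(z, z)

# AOD is approximately the vertical integral of extinction:
# tau = w @ f, with simple quadrature weights w.
w = np.full_like(z, dz)

# Condition the GP on one noisy AOD observation (a linear functional of f).
tau_obs, noise_var = 0.35, 1e-4
s = K @ w                                 # Cov(f, tau)
gain = s / (w @ s + noise_var)            # Kalman-style gain vector
post_mean = m + gain * (tau_obs - w @ m)  # posterior extinction profile
post_cov = K - np.outer(gain, s)          # posterior covariance

print(post_mean[:5], np.sqrt(np.diag(post_cov))[:5])
```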
Background: All dental professionals face the risk of occupational percutaneous injuries and exposures. Previous studies have reported a high incidence of percutaneous injuries among dentists. This study examined injury data over six years at a large teaching institution for trends, to increase awareness and to design appropriate interventions to reduce injury rates. Method: Injury data were collected from the department of employee and occupational health. The data were entered into an electronic incident reporting system from 2017-2023. Statistical analysis was performed with OpenEpi to determine injury trends by year and the overall association by activity type. Results: A total of 168 injuries were reported between 2017 and 2023. A majority of the injuries (54%) were caused by a needle or sutures, followed by instruments at 41%. Most of the injuries (44%) occurred during treatment, followed by cleaning of the surgical spaces at 15%. Only 13% of the injuries were attributed to handling or recapping needles. A chi-square test by year (p = 0.2618, p > .05) indicated no significant difference in the number of injuries across years. The overall chi-square test by activity type was significant (p < 0.001), indicating that risk was not equal across all activities. Conclusion: Injuries declined during COVID-19 but rebounded in 2023. Needles, sutures, and instruments were the predominant sources of injuries. Injuries occurred during treatment (43%), while cleaning the surgical space (15%), and while recapping or handling needles (13%). This study is a first step in understanding the trends and factors contributing to injuries in order to implement appropriate corrective actions. Further analysis should be conducted to identify the specific procedures or clinical activities exposing employees to occupational percutaneous injuries.
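As a rough illustration of the activity-type test reported above, the sketch below runs a chi-square goodness-of-fit test on counts approximated from the stated percentages of 168 injuries; the true category breakdown and the OpenEpi computation may differ.

```python
from scipy.stats import chisquare

# Injury counts by activity, approximated from the reported percentages
# of 168 total injuries (44% treatment, 15% cleaning, 13% needle
# handling/recapping, remainder grouped as "other"); illustrative only.
observed = [74, 25, 22, 47]

# Null hypothesis: injuries are equally likely across the four activity
# groups (chisquare defaults to uniform expected frequencies).
stat, p = chisquare(observed)
print(f"chi2 = {stat:.1f}, p = {p:.4g}")
```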
Background: National guidelines recommend penicillins (PCN) as first-line treatment for many common pediatric infections in the outpatient setting. Although less than 1% of the United States population has a true, IgE-mediated PCN allergy, approximately 10% of patients are labeled with a PCN allergy. Accurate adverse drug reaction (ADR) documentation plays an important role in addressing this over-labeling. We have previously shown that nurses feel assessment and documentation of PCN allergies are critical to their role. However, additional evidence suggests nurses are hesitant to interrogate allergy accuracy or to reclassify a parent-reported allergy as a side effect. Our objective was to explore frontline clinicians’ confidence in assessing, documenting, and responding to PCN-allergy labels. Methods: To expose barriers and prioritize improvement ideas for a multidisciplinary quality improvement (QI) project aimed at improving PCN-allergy labeling in our pediatric urgent care clinics, we deployed an investigator-developed survey to prescribers and nurses. It comprised 14 questions scored on a 5-point Likert scale (4 on PCN/safety, 3 on allergy types, 4 on allergy documentation, 3 on treatment options), plus 4 demographic questions and 1 optional free-text question. We used descriptive statistics to compare survey responses between prescribers and nurses and evaluated free-text comments for themes. Results: Eighty-seven clinicians across 3 sites participated, for an overall response rate of 35%, with variation by site (25.3% to 41.4%). Forty-one percent (n=36) of responders had been in practice >15 years, and 40.2% (n=35) had worked at our hospital >15 years (Table 1). Overall, perceived knowledge of PCN allergies and safety was favorable (Table 2). Prescribers reported higher confidence with: 1) perceiving that many patients who believe they are allergic to PCN can safely take PCN (prescribers median=5 [IQR: 4, 5] vs. nurses median=4 [4, 4], p = 0.003); and 2) perceiving that time pressures influenced their ability to reconcile allergies and side effects (prescribers median=4 [4, 5] vs. nurses median=3 [2, 4], p = 0.001). Both prescribers and nurses reported lower confidence in continuing to administer or prescribe an antibiotic in the setting of a reported ADR. Thirteen respondents (15%) provided comments with specific requests for additional family education and practice guidance, including the referral process to subspecialty clinics for PCN-allergy testing. Conclusions: Our survey identified barriers to accurate PCN-allergy labels, including knowledge gaps in documentation, time pressures, hesitancy to challenge parent reports, and uncertainty about the referral process for PCN-allergy testing. This survey will inform future drivers for our QI project. Opportunities include electronic medical record refinement, improving referrals to PCN-allergy de-labeling clinics, and the development of scripted education to guide family discussions.
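The abstract reports median (IQR) comparisons between prescribers and nurses without naming the test; a rank-based two-sample comparison such as the Mann-Whitney U test is one plausible choice, sketched below on made-up Likert responses.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert responses for one survey item; the
# abstract reports only medians and IQRs by role, not the raw data.
prescribers = np.array([5, 5, 5, 4, 5, 5, 4, 5, 5, 5])
nurses = np.array([4, 4, 4, 4, 3, 4, 4, 5, 4, 4])

stat, p = mannwhitneyu(prescribers, nurses, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
print("medians:", np.median(prescribers), np.median(nurses))
```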
Disclosure: Rana El Feghaly: grant funding (Merck). Amanda Nedved: contracted research (Merck).
Background: Candida auris reporting and submission of confirmed or possible isolates have been mandatory in Minnesota since August 2019. On August 9, 2023, the Minnesota Department of Health (MDH) was notified of a C. auris isolate in hip tissue from a patient in acute care hospital A (ACH-A). Only 9 cases of C. auris had been detected in Minnesota prior to August 2023, all in patients with a history of healthcare internationally or in areas of the United States where C. auris is endemic. Methods: The MDH Public Health Laboratory (MDH-PHL) confirmed the identification of C. auris from the ACH-A isolate by MALDI-TOF. MDH partnered with ACH-A to review medical records, assess infection prevention and control (IPC) practices, conduct contact tracing, and identify patients for colonization screening. Screening was performed on all patients who overlapped with the index case (case A) and were admitted to a facility in the same healthcare system as ACH-A. Facilities accepting discharged patients who had overlapped with case A were contacted for colonization screening. Overlapping patients no longer admitted to a healthcare facility were sent a notification letter and offered outpatient screening. Composite axilla/groin swabs were screened for C. auris using real-time PCR at MDH-PHL, which also performed whole genome sequencing (WGS) and single nucleotide polymorphism (SNP) analysis. Results: Case A’s medical record showed only Minnesota healthcare exposures, including a surgical procedure in June 2023, and indicated that the case overlapped with a previous case (case B) from July 2023 who had recent international healthcare. The two cases were hospitalized at ACH-B July 12-18 on different care floors without evident links to shared services. However, the cases were in adjacent rooms in the ACH-B Emergency Department (ED) for 5 hours on July 3, when the C. auris status of case B was unknown. WGS indicated both isolates were within clade I (South Asian) and separated by 2 SNPs, suggesting relatedness. Extensive colonization screening was conducted among 109 potentially exposed patients, including 18 patients from the ED. No additional C. auris was detected. Conclusions: This case represents the first detected transmission of C. auris within a Minnesota healthcare facility. The role of C. auris transmission within the ED is not well understood. Medical record review in combination with WGS analysis suggests potential transmission within the ED. Clinicians should be aware of the risks of C. auris transmission in the ED and follow all IPC measures to prevent transmission of this emerging fungal pathogen.
Background: Most healthcare facilities in the US apply contact precautions (CP) for patients with methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant enterococci (VRE) infection and/or colonization. Most individuals with MRSA or VRE colonization will clear over time; however, frontline clinicians rarely evaluate for discontinuation of CP, resulting in increased burden on infection preventionists (IPs). Automation of time- and test-based evaluation using clinical decision support systems (CDSS) embedded in electronic health records (EHR) may increase evaluation and discontinuation of CP when appropriate, while preserving IP resources. Methods: This quality improvement initiative was implemented at Mass General Brigham (MGB), an integrated healthcare system, where patients with MRSA or VRE infection/colonization are identified in the EHR with a corresponding “infection status” and CP applied. Following MGB policy (Figure 1), CDSS features included: 1) automated time-based resolution from 2/15/2023-11/13/2023 and 2) automated ordering of screening assays for patients eligible for test-based evaluation from 6/20/2023-11/14/2023 (Figure 2). Counts of CP discontinuations and automated orders were tallied. IPs at one MGB facility who performed manual review self-recorded the time spent evaluating patients for CP discontinuation. Using these time reports, the average time to complete these tasks and the projected time savings were calculated over the implementation period. Results: Four IPs recorded the time to review patients for CP discontinuation, including reviewing recent antimicrobial administration, reviewing microbiology results, ordering screening test(s), and contacting the primary team. Twenty-five patient encounters were timed, with a mean of 4.7 minutes documented per encounter. Over a 9-month period after initiation of the automated time-based resolution, the monthly mean number of patients whose CP for MRSA and VRE were automatically discontinued was 247 and 100, respectively. Projected IP time savings over the same 9-month period for MRSA and VRE were 174.1 and 70.5 hours, respectively. Over a 5-month period after initiation of automated ordering of MRSA polymerase chain reaction (PCR)/culture, as well as VRE culture, for test-based evaluation, the monthly mean numbers of MRSA cultures, MRSA PCRs, and VRE cultures automatically ordered for patients on CP for MRSA and VRE were 176, 24, and 145, respectively. Projected IP time savings over the same 5-month period for MRSA and VRE were 78.3 and 56.8 hours, respectively.
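The projected time savings follow directly from the reported means (automated actions per month, months of observation, and 4.7 minutes per manual review); the quick check below reproduces the abstract's figures.

```python
# Reproducing the projected infection preventionist (IP) time savings
# from the reported figures: automated actions per month, times months
# of follow-up, times the 4.7-minute mean manual review time.
MINS_PER_REVIEW = 4.7

def hours_saved(actions_per_month: float, months: int) -> float:
    return actions_per_month * months * MINS_PER_REVIEW / 60

print(hours_saved(247, 9))       # MRSA CP discontinuations: ~174.1 h
print(hours_saved(100, 9))       # VRE CP discontinuations:  ~70.5 h
print(hours_saved(176 + 24, 5))  # MRSA culture + PCR orders: ~78.3 h
print(hours_saved(145, 5))       # VRE culture orders:        ~56.8 h
```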
Conclusion: Healthcare systems that enhance their EHR with CDSS to automate CP evaluations may improve frontline clinician workflow, patient flow and bed capacity, while optimizing use of IP resources.
This paper focuses on the experiences of bereavement guilt among young adults bereaved by a caregiver’s cancer, examining associations with attachment style, experiential avoidance, and psychological flexibility with the aim of informing psychosocial interventions for this population.
Methods
Ninety-seven young adults (18–25 years) bereaved by a parent/guardian’s cancer completed an online survey, including measures of bereavement guilt, attachment style, experiential avoidance, and psychological flexibility. Mediation analyses explored the associations between attachment style (anxious, avoidant) and bereavement guilt, and whether these associations were mediated by experiential avoidance or psychological flexibility.
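The abstract does not name the mediation software or estimator; a bootstrapped indirect-effect analysis is one common approach, sketched below on simulated stand-ins for the study's measures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 97  # sample size matching the study

# Simulated stand-ins (illustrative only): X = anxious attachment,
# M = experiential avoidance, Y = bereavement guilt.
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)
Y = 0.4 * M + 0.3 * X + rng.normal(size=n)

def ols_coefs(y, X_mat):
    """Least-squares coefficients for y ~ X_mat (intercept added)."""
    A = np.column_stack([np.ones(len(y)), X_mat])
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Indirect effect a*b: X -> M (path a), M -> Y controlling for X (path b).
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)  # bootstrap resample
    a = ols_coefs(M[i], X[i])[1]
    b = ols_coefs(Y[i], np.column_stack([M[i], X[i]]))[1]
    boot.append(a * b)
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: ({lo_ci:.3f}, {hi_ci:.3f})")
```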
Results
Bereavement guilt was significantly positively associated with anxious, but not avoidant, attachment to the deceased; the relationship between anxious attachment and bereavement guilt was partially mediated by experiential avoidance. Bereavement guilt was also negatively associated with psychological flexibility and engagement with bereavement counseling.
Significance of results
Given the limited literature on cancer-related bereavement in young adulthood, this study offers important theoretical and clinical insights into factors associated with more complex aspects of grief in this population. Specifically, this work identified that anxious attachment is associated with ongoing bereavement complications in the years following the death of a caregiver to cancer, with experiential avoidance partially mediating this relationship. While further research is needed to better understand the interaction between these factors and other related constructs, such as psychological flexibility, these findings may be helpful in selecting therapeutic approaches to use with this population.
The fluvial capture of endorheic basins represents a milestone in basin chronology, implying a profound disequilibrium that triggers critical geomorphological, sedimentological, paleogeographic, and even paleoecological transformations. The primary goal of many geomorphological studies is to determine the timing of endorheic-to-exorheic transitions with the objective of unveiling the dynamics that follow the capture event. The age of the Guadix-Baza Basin capture in the Central Betic Cordillera (S Spain) remains a subject of controversy, with proposed estimates ranging from 17 to 600 ka. In this study, we present new 234U/230Th and optically stimulated luminescence ages from exorheic deposits exposed within the basin's main fluvial valley, the Guadiana Menor River. We obtained the oldest numerical age recorded to date for a postcapture deposit within the basin. This age corresponds to a travertine platform formed at 240.8 ± 25 ka on a surface that had already been incised approximately 250 m below the glacis surface. Using these data, we estimate that basin capture took place earlier than ca. 240 ka plus the time required for the river to incise the 250 m down to the position of the travertine. Furthermore, the proximity of the Matuyama-Brunhes reversal (781 ka) to the top of the endorheic succession and the ages of the paleontological sites (> ca. 750 ka) throughout the basin suggest that the capture could have occurred even earlier than the oldest previously proposed age of 600 ka.
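To make the "plus incision time" bound concrete: the abstract fixes only the travertine age (240.8 ± 25 ka) and the prior incision depth (250 m), so any assumed long-term incision rate yields a minimum capture age. The rates below are purely hypothetical illustrations.

```python
# Back-of-the-envelope bound on the capture age:
# capture_age >= travertine_age + (incision depth / incision rate).
# The incision rates are hypothetical; the abstract constrains only
# the 240.8 +/- 25 ka age and the 250 m depth.
travertine_age_ka = 240.8
depth_m = 250.0
for rate_mm_per_yr in (0.3, 0.5, 1.0):
    incision_time_ka = depth_m / rate_mm_per_yr  # mm/yr equals m/kyr
    print(rate_mm_per_yr, travertine_age_ka + incision_time_ka)
```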
Free-ranging native Dartmoor and Exmoor ponies have held strong cultural and environmental significance within their respective national parks for thousands of years, and their environmental benefits and naturally selected characteristics have been acknowledged and harnessed for conservation grazing and rewilding programmes. Despite a wealth of literature regarding the welfare of sport, leisure and working horses, there is little information concerning the welfare of free-ranging and extensively grazing ponies. The present study compared the welfare of native Exmoor and Dartmoor ponies grazing on the moors in their respective national parks (n = 47) with that of ponies translocated to other areas of the UK for use in conservation grazing and rewilding programmes (n = 29), using an observational welfare assessment protocol designed specifically for free-ranging ponies. The results showed significant differences between common-land and conservation-grazing ponies in the scores for Body Condition Score, Water Quality and Availability, Environmental Hazards, Human Disturbance, Skin and Coat Condition and the Human Approach Test. Although no evidence of significant welfare compromise was identified, this study emphasises the importance of year-round welfare monitoring and the feasibility of the observational welfare protocol for future use by pony keepers and grazing managers.
Background: Group B Streptococcus (GBS) is one of the most common causes of bacterial sepsis in newborns. In 2002, the Centers for Disease Control and Prevention (CDC) recommended universal screening of all pregnant women for GBS colonization and administration of intrapartum prophylaxis to colonized pregnant women to prevent GBS infection in newborns. To identify racial disparities in GBS infections in Tennessee, we compared the incidence of early-onset GBS infection among Black and White infants from 2005-2021. Methods: GBS infections identified from normally sterile sites are reportable in Tennessee. We analyzed GBS data reported to surveillance systems from 2005 to 2021. We linked the surveillance data with population data to calculate incidence rates. We excluded cases of unknown race (9%) and other races (0.2%) because we lacked denominator data to calculate incidence rates for these groups. Database linkage and data analyses were performed in SAS V.9.4. Results: A total of 399 early-onset GBS cases were reported from 2005–2021; 150 (37.59%) were Black, 212 (53.13%) were White, 36 (9.02%) were of unknown race, and one (0.20%) was reported as Other. While the incidence rate of early-onset GBS for all races declined from 0.23 per 1,000 live births in 2005 to 0.18 per 1,000 live births in 2021, Blacks experienced the largest decline, from 0.60 per 1,000 live births in 2005 to 0.37 in 2021. Among Whites, the 2021 rate (0.13/1,000 live births) was slightly lower than the 2005 rate (0.21/1,000 live births). The mean incidence rate of early-onset GBS among Blacks (0.52 per 1,000 live births) was significantly higher than the mean rate among Whites (0.20 per 1,000 live births) from 2005 to 2021 (p < 0.001). Shelby County, one of the 95 counties in Tennessee, is predominantly Black (54.6%) and reported 27.8% of all early-onset GBS cases. Conclusion: There was a significant decline in early-onset GBS infections among Blacks and some reduction among Whites, indicating the effectiveness of the prevention strategies. However, Blacks have significantly higher rates than their White counterparts. In addition, 27.8% of the cases were reported from one county, signaling geographic disparities as well. Further investigation is warranted to identify risk factors and causes of the observed racial and geographic disparities to help reduce the infection rate among vulnerable populations and in high-risk geographic areas.
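A couple of quantities implied by the reported rates, computed directly from the figures in the abstract (live-birth denominators are not published):

```python
# Ratios implied by the reported incidence rates (per 1,000 live
# births); only the published rates themselves are used here.
black_mean, white_mean = 0.52, 0.20
print(f"mean rate ratio, Black vs. White: {black_mean / white_mean:.1f}")
print(f"absolute decline, Black, 2005-2021: {0.60 - 0.37:.2f} per 1,000")
print(f"absolute decline, White, 2005-2021: {0.21 - 0.13:.2f} per 1,000")
```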
The objective of this study was to identify factors more commonly observed on farms with poor livestock welfare than on farms with good welfare. Potentially, these factors could be used to develop an animal welfare risk assessment tool (AWRAT) to identify livestock at risk of poor welfare. Identifying livestock at risk of poor welfare would facilitate early intervention and improve strategies to promptly resolve welfare issues. This study focuses on cattle, sheep and goats in non-dairy extensive farming systems in Australia. To assist with identifying potential risk factors, a survey was developed presenting 99 factors about the farm, farmers, animals and various aspects of management. Based on their experience, key stakeholders, including veterinarians, stock agents, consultants, and extension and animal welfare officers, were asked to consider a farm where the welfare of the livestock was either high or low and to rate the likelihood of observing these factors. Of the 141 responses, 65% were for farms with low welfare. Only 6% of factors had ratings that did not differ significantly between high- and low-welfare surveys, and these were not considered further. Factors from low-welfare surveys with median ratings in the lowest 25% were considered potential risks (n = 49). Considering correlation, ease of verification and the different livestock farming systems in Australia, 18 risk factors relating to farm infrastructure, nutrition, treatment and husbandry were selected. The AWRAT requires validation in future studies.
Background: The 2022 Special Report: COVID-19 U.S. Impact on Antimicrobial Resistance identified continued increases in the rate of extended-spectrum beta-lactamase-producing (ESBL) infections in the United States from 2017 through 2020. Using similar data sources and methodology, we examined trends in species-specific ESBL infections from 2012-2021. Methods: We identified a cohort of patients from the PINC AI and BD Research Insights databases with a clinical culture yielding a Klebsiella pneumoniae or Escherichia coli isolate with accompanying susceptibility testing. E. coli or K. pneumoniae isolates non-susceptible to ceftriaxone, cefotaxime, ceftazidime, or cefepime were considered suggestive of ESBL production. Isolates from patients with no culture yielding the same resistance phenotype in the previous 14 days were counted as incident cases. Community-onset (CO) cultures were obtained on or before day 3 of hospitalization; hospital-onset (HO) cultures were obtained on or after day 4. We used a raking procedure to determine weights for extrapolating the number of discharges included in our sample to match the distribution of discharges, stratified by bed size, U.S. census division, urban/rural designation, and teaching status, for U.S. hospitals included in the American Hospital Association survey. We evaluated rates over time because of the changes in the number of hospitalizations during the COVID-19 pandemic. Results were stratified by HO and CO status and by sterile and non-sterile specimen sources. Results: In 2021, there were 48,936 ESBL K. pneumoniae and 153,112 ESBL E. coli infections among approximately 32 million discharges. Overall, most infections were CO and from non-sterile specimens. From 2012-2021, the rate of ESBL K. pneumoniae increased from 9.54 to 15.28 per 10,000 discharges. ESBL E. coli infections increased from 2012-2020 (30.18 to 51.32 per 10,000 discharges), then declined in 2021 (47.81 per 10,000 discharges) (Table 1, Figure 1). The proportion of ESBL E. coli from non-sterile specimens declined from 88% in 2012 to 83% in 2021, while the corresponding proportion for ESBL K. pneumoniae remained at 85-87% over the study period (Figure 2). Conclusion: ESBL E. coli and K. pneumoniae infections increased from 2012-2021, although the CO ESBL E. coli rate decreased between 2020 and 2021. Understanding changes in culturing practices over time may provide insight into the increasing proportion of ESBL E. coli from sterile sites. Additionally, further investigation into differences in organism trends, particularly in 2021, may inform prevention strategies.
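Raking is iterative proportional fitting of sample weights to known population margins. A minimal two-dimensional sketch is below; the study raked over four hospital strata (bed size, census division, urban/rural designation, teaching status), and the numbers here are invented.

```python
import numpy as np

# Minimal raking (iterative proportional fitting) sketch: adjust
# sample weights so weighted margins match population margins. Two
# illustrative strata dimensions stand in for the study's four.
sample = np.array([[30.0, 10.0],   # rows: bed-size category
                   [20.0, 40.0]])  # cols: teaching status
row_targets = np.array([55.0, 45.0])  # known population margins
col_targets = np.array([60.0, 40.0])

w = sample.copy()
for _ in range(100):
    w *= (row_targets / w.sum(axis=1))[:, None]  # match row margins
    w *= (col_targets / w.sum(axis=0))[None, :]  # match column margins

print(np.round(w, 2), w.sum(axis=1), w.sum(axis=0))
```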
Space-borne passive microwave (PMW) data provide rich information on atmospheric state, including cloud structure and underlying surface properties. However, PMW data are sparse and limited due to low Earth orbit collection, resulting in coarse Earth system sampling. This study demonstrates that Bayesian deep learning (BDL) is a promising technique for predicting synthetic microwave (MW) data and its uncertainties from more ubiquitously available geostationary infrared observations. Our BDL models decompose predicted uncertainty into aleatoric (irreducible) and epistemic (reducible) components, providing insights into uncertainty origin and guiding model improvement. Low and high aleatoric uncertainty values are characteristic of clear sky and cloudy regions, respectively, suggesting that expanding the input feature vector to allow richer information content could improve model performance. The initially high average epistemic uncertainty metrics quantified by most models indicate that the training process would benefit from a greater data volume, leading to improved performance at most studied MW frequencies. Using quantified epistemic uncertainty to select the most useful additional training data (a training dataset size increase of 3.6%), the study reduced the mean absolute error and root mean squared error by 1.74% and 1.38%, respectively. The broader impact of this study is the demonstration of how predicted epistemic uncertainty can be used to select targeted training data. This allows for the curation of smaller, more optimized training datasets and also allows for future active learning studies.
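The abstract does not state which BDL construction was used; Monte Carlo dropout with a heteroscedastic Gaussian head is one standard recipe that yields exactly this decomposition (aleatoric as the mean predicted variance, epistemic as the variance of predicted means across stochastic forward passes). Layers, shapes, and inputs below are illustrative.

```python
import torch
import torch.nn as nn

class MCDropoutRegressor(nn.Module):
    """Illustrative MC-dropout regressor with a Gaussian likelihood head."""
    def __init__(self, n_in=16, n_hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Dropout(0.1),
        )
        self.mean_head = nn.Linear(n_hidden, 1)
        self.logvar_head = nn.Linear(n_hidden, 1)  # aleatoric variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout active for stochastic forward passes
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means = torch.stack(means)
    aleatoric = torch.stack(variances).mean(0)  # mean predicted variance
    epistemic = means.var(0)                    # spread across passes
    return means.mean(0), aleatoric, epistemic

x = torch.randn(8, 16)  # stand-in for infrared feature vectors
mu, alea, epi = predict_with_uncertainty(MCDropoutRegressor(), x)
print(mu.shape, alea.mean().item(), epi.mean().item())
```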
Background: Interventions targeting urine culture stewardship can improve diagnostic accuracy for urinary tract infections (UTI) and decrease inappropriate antibiotic treatment of asymptomatic bacteriuria. We aimed to determine whether a clinical decision support (CDS) tool that provided guidance on, and required documentation of, testing indications would decrease inappropriately ordered urine cultures in an academic healthcare network that already uses conditional (e.g., reflex) urine testing. Methods: In October 2022, four hospitals within one academic healthcare network transitioned to a new electronic health record (EHR). We developed an embedded CDS tool that provided guidance on ordering either a urinalysis (UA) with reflex to urine culture or a non-reflex urine culture (e.g., for pregnant patients) based on the indication for testing (Figure 1). We compared median monthly UA with reflex culture and non-reflex urine culture order rates pre- (8/2017–9/2022) and post- (10/2022–9/2023) intervention using the Wilcoxon rank-sum test. We used interrupted time-series analyses, allowing a one-month window for the intervention effect, to assess changes in monthly UA with reflex culture, non-reflex urine culture, and total urine culture order rates associated with the intervention. Using SAS 9.4, we generated Durbin-Watson statistics to assess for autocorrelation and adjusted for it using a stepwise autoregressive model. Results: The median monthly UA with reflex culture order rates per 1,000 patient-days were similar pre- and post-intervention at 36.7 (interquartile range [IQR]: 31.0–39.7) and 35.4 (IQR: 32.8–37.0), respectively (Figure 2). Non-reflex and total urine culture rates per 1,000 patient-days decreased from 8.5 (IQR: 8.1–9.1) to 4.9 (IQR: 4.7–5.1) and from 20.0 (IQR: 18.9–20.7) to 14.4 (IQR: 14.0–14.6) post-intervention, respectively. Interrupted time-series analyses revealed that the intervention was associated with decreases in the monthly non-reflex urine culture order rate of 4.8 cultures/1,000 patient-days (p < 0.001) and in the total urine culture order rate of 5.0 cultures/1,000 patient-days (p < 0.001) (Figures 3a and 3b). The UA with reflex order rate did not significantly change with the intervention (not pictured). Conclusion: In an academic healthcare network that already employed conditional urine testing, implementation of an EHR-based diagnostic stewardship tool led to additional decreases in both non-reflex and total urine culture orders.
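A sketch of the segmented-regression form of an interrupted time-series analysis with the Durbin-Watson check described above; the monthly rates are simulated around the reported pre/post non-reflex medians (8.5 and 4.9 per 1,000 patient-days), since the abstract publishes only summaries.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n_pre, n_post = 62, 12                 # 8/2017-9/2022, 10/2022-9/2023
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(float)      # level change at intervention
t_post = np.maximum(0, t - n_pre)      # slope change after intervention
rate = 8.5 - 3.6 * post + rng.normal(0, 0.3, t.size)  # simulated series

X = sm.add_constant(np.column_stack([t, post, t_post]))
ols = sm.OLS(rate, X).fit()
print(ols.params)                      # intercept, trend, level, post-trend
print("Durbin-Watson:", durbin_watson(ols.resid))
# If DW indicates autocorrelation, refit with an autoregressive model,
# e.g. sm.GLSAR(rate, X, rho=1).iterative_fit().
```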
Scientists have the epistemic responsibility of producing knowledge. They also have the social responsibility of aligning their research with the needs and values of various societal stakeholders. Individual scientists may be left with no guidance on how to prioritize and carry out these different responsibilities. As I will argue, however, the responsibilities of science can be harmonized at the collective level. Drawing on debates in moral philosophy, I will propose a theory of the collective responsibilities of science that accounts for the internal diversity of research groups and for their different responsibilities.
Background: In 2023, Nebraska held its 4th state antimicrobial stewardship (AS) educational conference, an annual one-day in-person event with continuing education offered for nurses, pharmacists, microbiology laboratory technicians, and physicians. One challenge of educational events is determining whether content has been translated into practice. We sought to assess AS-related practice changes implemented by conference attendees. Methods: Conference attendees were sent 2 surveys by email following the conference. Survey 1 questions were integrated into the continuing education credit evaluation immediately following the conference. Survey 2 was sent three months later to all registered attendees. Qualitative responses were grouped by theme, and descriptive statistics were used to evaluate responses. Results: There were 203 attendees from across the state, including a diverse group of learners (Table 1) representing metropolitan and rural areas of Nebraska (Figure 1) from acute care hospitals, critical access hospitals, long-term care settings, and public health. A total of 148 attendees (73%) answered questions in Survey 1 (Table 2), and 79 (39%) attendees responded to Survey 2. On Survey 1, 94% of respondents indicated that they intended to make practice changes, though 60% anticipated barriers, including further staff training needs and lack of resources and health-system support. On Survey 2, 83% of respondents indicated successful implementation of practice changes three months after the conference. The most common practice changes included enhanced communication strategies; improved antibiotic tracking, monitoring, and review; policy and procedure updates; and AS tool implementation. On Survey 1, 26% (35/131) strongly agreed that their ability to treat patients was adequate prior to the conference; this increased to 55% (72/131) post-conference. On Survey 2, 56% (22/39) of respondents reported improvement in patient outcomes as a result of practice changes implemented following conference attendance. However, some also mentioned the short follow-up timeline as a limitation in assessing patient outcome improvements. Reported outcomes included improved receptiveness from providers, patients, and families to antibiotic use recommendations, shorter prescribed durations, and more appropriate initial antibiotic selection. Improved team performance was noted by 73% (27/37) of respondents. Themes included improved communication with internal and external stakeholders, more collaborative team discussions, increased confidence in recommendations, expanded provider and staff engagement, and increased leadership involvement. Conclusions: In addition to improved knowledge and understanding across a variety of AS-related areas, conference attendees reported a high rate of practice changes that led to perceived improvements in patient outcomes and team function.
The present study offers an epistemological and ontological historiographical review of the concept of the unit of analysis, using island archaeology as a case study. We carry out a critical investigation to lay out the main ideas used to define units of analysis, and we consider the discourse that has emerged between this and other fields when defining such a concept. From an epistemological point of view, we can distinguish three strategies: first, those that define units of analysis by their outer limits, their borders; second, those that base the definition on the internal dynamics taking place within the units of study; and third, those that define the analytical unit as a set of interactions between agents. From a more ontological point of view, we can differentiate between strategies that adopt a categorical perspective and those that adopt a more relational perspective. Ultimately, we reflect on the conceptualization and function of the unit of analysis in the process of interpretation and, in so doing, provide evidence of the great theoretical richness of the concept and the multiple interrelated factors involved in its development.
Background: Artificial intelligence (AI) tools have demonstrated success on US medical licensing examinations; however, their utility in infection prevention and control (IPC) remains unknown. Methods: Our hospital epidemiology program handles consultation calls and records each question and answer. Using 2022 data, we selected 31 frequently asked questions. We utilized four AI tools (ChatGPT-3.5, ChatGPT-4.0, Bing AI, and OpenEvidence) to generate answers. We predefined scales (Table 1) for scoring responses by three reviewers: two hospital epidemiologists and one infection preventionist. A mean score of ≥3 was considered acceptable for accuracy and ≥4 for completeness. We report the percentage of responses with acceptable accuracy and completeness out of the questions assessed in each category. Results: Among the 31 questions, 16 concerned isolation duration, 9 healthcare personnel (HCP) exposure, 4 cleaning of contaminated rooms, and 2 patient exposure. Regarding accuracy, most AI tools performed worst on questions about isolation duration, with acceptable accuracy ranging between 75% and 93.8%. All AI tools, except OpenEvidence, had a 100% accuracy rate for HCP and patient exposure. All AI tools had a 100% accuracy rate for contaminated room handling. The highest overall acceptable accuracy rate was observed with ChatGPT-3.5. Regarding completeness, most AI tools performed worst on questions about isolation duration, with acceptable completeness ranging between 44% and 75%. All AI tools, except OpenEvidence, had a 100% completeness rate for contaminated rooms and patient exposure. The highest overall acceptable completeness rate was observed with Bing AI (Table 2). Conclusions: All AI tools provided reasonable answers to commonly asked IPC-related questions, although there were variations among the tools. AI could be used to supplement infection control programs, especially where resources are limited.
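The acceptability rule is straightforward to operationalize; the sketch below applies it to made-up reviewer scores, assuming each question/metric pair is scored by the three reviewers and averaged.

```python
import pandas as pd

# Scoring rule from the abstract: a response is "acceptable" if the
# mean of the three reviewer scores is >=3 for accuracy and >=4 for
# completeness. The scores below are invented for illustration.
scores = pd.DataFrame({
    "question": ["Q1", "Q1", "Q2", "Q2"],
    "metric":   ["accuracy", "completeness", "accuracy", "completeness"],
    "r1": [4, 4, 3, 3], "r2": [5, 4, 2, 3], "r3": [4, 5, 3, 4],
})
scores["mean"] = scores[["r1", "r2", "r3"]].mean(axis=1)
threshold = scores["metric"].map({"accuracy": 3, "completeness": 4})
scores["acceptable"] = scores["mean"] >= threshold
print(scores[["question", "metric", "mean", "acceptable"]])
```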
India has made significant progress in improving the enrolment of students with disability but still has a long way to go before its schools can be called inclusive. Despite the widely acknowledged relevance of assessment in shaping teaching and learning practices, little research has been done on disability-inclusive assessment in the Indian setting. In this paper, we explore teachers’ perceptions of disability inclusion in formative assessments, including the use of various kinds of accommodations and adaptations, the factors that affect the implementation of disability-inclusive formative assessments, and the challenges involved. It is argued that teacher professional development and teacher–parent partnerships are essential for ensuring the inclusion of students with disability in formative assessments. Unless assessment is given its due importance in disability-inclusive education, achievement gaps between children with and without disability may widen owing to the unavailability and underuse of learning data.
Background: Inappropriate antibiotic use impacts patient safety and antimicrobial resistance patterns. In 2013, general dentists in the U.S. prescribed nearly 10% of all outpatient oral antibiotics (24.5 million prescriptions). The American Dental Association (ADA) published guidelines in 2019 recommending limited antibiotic prescribing for the treatment of dental pain and swelling. We characterized dental prescribing during 2018–2022 to assess whether antibiotic use decreased after the guidelines’ release. In addition, we examined access to dental care. Methods: All antibiotic prescriptions dispensed during 2018–2022 were extracted from the IQVIA Xponent database, which captures ≥92% of all U.S. outpatient prescriptions, projected to 100% coverage. Prescriptions by general dentists were compared to total outpatient oral antibiotic prescriptions and summarized by patient sex, patient age, and prescriber geographic region. Census denominators were used to calculate prescribing rates per 1,000 persons. IQVIA general dentist counts were used to calculate dentists per 100,000 persons. Results: General dentists prescribed 24.7 million antibiotic prescriptions in 2018 (75 prescriptions per 1,000 persons) compared with 25.2 million (76 prescriptions per 1,000 persons) in 2022. During 2020–2022, general dentists prescribed >10% of all outpatient antibiotic prescriptions (range 10.7%–12.1%). In each year, prescription rates were higher for females and patients >65 years, and among prescribers in the Northeast. In 2022, there were 58 general dentists per 100,000 persons in the United States. The highest rate was in the District of Columbia (100 per 100,000 persons) and the lowest in Delaware (41 per 100,000 persons). Conclusions: Despite the ADA’s 2019 guidelines, prescribing by general dentists remained stable during 2018–2022. Because the total number of outpatient antibiotic prescriptions decreased, general dentists’ share of all outpatient antibiotic prescriptions increased to >10% in recent years. Rate variation by patient characteristics and prescriber region may reflect differences in dental disease burden or may represent unnecessary antibiotic use. Dental antibiotic stewardship efforts are needed, including dissemination and implementation of current prescribing guidelines. Further evaluation of prescribing indications and access to dental care is needed to inform dental stewardship priorities.
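As a consistency check, the reported prescription volume and rate imply the census denominator used:

```python
# Prescriptions per 1,000 persons imply the census denominator;
# back-solving from the reported 2018 figures.
rx_2018 = 24.7e6            # prescriptions by general dentists, 2018
rate_per_1000 = 75          # reported rate per 1,000 persons
implied_pop = rx_2018 / rate_per_1000 * 1000
print(f"implied 2018 census denominator: {implied_pop / 1e6:.0f} million")
# ~329 million, consistent with the 2018 U.S. population after rounding
```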
Background: While outpatient parenteral antibiotic therapy (OPAT) offers patient convenience and reduced healthcare costs, its increasing utilization has brought various complications to light, including antibiotic-related and line-related OPAT complications. In a large prospective study, 18% of patients experienced adverse drug events. Another study showed that 8.45% of patients had vascular complications. Our study aims to identify clinical predictors associated with OPAT complications. Identifying predictors of suboptimal OPAT outcomes provides an opportunity to intervene, thereby minimizing the risk of OPAT-related complications. Method: We conducted a retrospective cohort study at Tufts Medical Center of all adult patients aged ≥18 years discharged on OPAT from April 2022 to October 2022. Demographic, treatment, outcome, and complication data were extracted through chart review. The primary outcome was the proportion and predictors of OPAT complications. The secondary outcomes were the OPAT completion rate and the 30-day ED visit and 30-day readmission rates related to OPAT complications. We used univariable and multivariable logistic regression models to assess predictors of OPAT complications. Variables with p … 5 (OR, 0.281; 95% CI, 0.101–0.784), but they were more likely to have received two antibiotics (OR, 2.265; 95% CI, 1.155–4.442). However, no significant independent predictor of OPAT complications was identified in multivariable regression analysis (Figure 2). OPAT completion rates were lower in patients with complications (59.1% versus 75.4%). The 30-day ED visit and 30-day readmission rates were significantly higher in the complication group (31.8% vs. 0% and 34.1% vs. 2.1%, respectively). Conclusion: Our study highlights the significant difference in treatment completion rates and the higher incidence of ED visits and readmissions among those with OPAT complications. Although no specific independent predictor was identified, the associations with multiple antibiotic therapies and telemedicine follow-up suggest areas for further investigation.
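The modeling workflow is a standard univariable/multivariable logistic regression; the generic sketch below uses simulated data seeded with the published OR of 2.265 for receiving two antibiotics. The variable names are hypothetical stand-ins, not the Tufts cohort or its actual model specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
two_abx = rng.integers(0, 2, n).astype(float)  # received two antibiotics
telemed = rng.integers(0, 2, n).astype(float)  # telemedicine follow-up
logit = -1.5 + np.log(2.265) * two_abx         # OR taken from the abstract
complication = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Univariable model for a single candidate predictor:
uni = sm.Logit(complication, sm.add_constant(two_abx)).fit(disp=0)
print("univariable OR:", np.exp(uni.params[1]))

# Multivariable model adjusting for both predictors:
X = sm.add_constant(np.column_stack([two_abx, telemed]))
multi = sm.Logit(complication, X).fit(disp=0)
print("adjusted ORs:", np.exp(multi.params[1:]))
```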