Longstanding design and reproducibility challenges in inertial confinement fusion (ICF) capsule implosion experiments underscore the need for appropriately characterized and modeled three-dimensional initial conditions, together with high-fidelity simulation capabilities able to predict transitional flow approaching turbulence, material mixing characteristics, and late-time quantities of interest – for example, fusion yield. We build on previous coarse-graining (CG) simulations of the indirect-drive National Ignition Facility (NIF) cryogenic capsule N170601 experiment – a precursor of N221205, which resulted in net energy gain. We combine initialization strategies and multiphysics coupling with newly available hydrodynamics simulation methods, including directional unsplit algorithms and a low-Mach-number correction – key advances enabling high-fidelity coarse-grained simulations of radiation-hydrodynamics-driven transition.
To date, a growing body of literature has documented the existence and impacts of coherent structures known as large- and very-large-scale motions within wall-bounded turbulent flows under neutral and unstable thermal stratification. These coherent structures can account for a considerable fraction of the overall turbulent transport and have been found to modulate small-scale turbulent fluctuations near the wall. In the context of stably stratified flows, however, the examination of such coherent structures has garnered relatively little attention. Stable stratification limits vertical transport and turbulent mixing within flows, making it unclear to what extent previous findings on coherent structures under unstable and neutral stratification apply to stably stratified flows. In this study, we investigate the existence and characteristics of coherent structures under stable stratification with a wide range of statistical and spectral analyses. Outer peaks in premultiplied spectrograms under weak stability indicate the presence of large-scale motions, but these peaks become weaker and eventually vanish with increasing stability. Quadrant analysis of turbulent transport efficiencies (the ratio of net fluxes to their respective downgradient components) demonstrates dependencies on both stability and height above ground, which is evidence of morphological differences in the coherent structures under increasing stability. Amplitude modulation by large-scale streamwise velocity was found to decrease with increasing gradient Richardson number, whereas modulation by large-scale vertical velocity was approximately zero across all stability ranges. For sufficiently stable stratification, large eddies are suppressed enough to limit any inner–outer scale interactions.
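To make the transport-efficiency diagnostic above concrete, the following sketch computes a quadrant decomposition of the kinematic momentum flux from streamwise and vertical velocity fluctuations. The fluctuation arrays, the sign conventions, and the definition of the downgradient part (ejections plus sweeps) are common-usage assumptions for illustration, not the authors' exact processing chain.

```python
import numpy as np

def transport_efficiency(u_p: np.ndarray, w_p: np.ndarray) -> float:
    """Ratio of the net kinematic momentum flux <u'w'> to its downgradient part."""
    uw = u_p * w_p
    ejections = uw[(u_p < 0) & (w_p > 0)]          # quadrant Q2
    sweeps    = uw[(u_p > 0) & (w_p < 0)]          # quadrant Q4
    downgradient = (ejections.sum() + sweeps.sum()) / uw.size
    return uw.mean() / downgradient

# Synthetic, weakly anti-correlated fluctuations stand in for sonic-anemometer data.
rng = np.random.default_rng(0)
u_p = rng.standard_normal(10_000)
w_p = -0.3 * u_p + rng.standard_normal(10_000)
print(f"momentum transport efficiency ~ {transport_efficiency(u_p, w_p):.2f}")
```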
Contemporary understanding of the mechanisms of disease increasingly points to examples of “genetic diseases” with an infectious component and of “infectious diseases” with a genetic component. Such blurred boundaries generate ethical, legal, and social issues and highlight historical contexts that must be examined when incorporating host genomic information into the prevention, outbreak control, and treatment of infectious diseases.
Oral corticosteroids are used to treat exacerbations of chronic rhinosinusitis with nasal polyps. Oral corticosteroid prescribing practices vary as reported from national surveys in Italy, China, Canada and the USA.
Methods
A nationwide online survey of ENT doctors practicing in Scotland was conducted using Microsoft Forms.
Results
There was a 31 per cent response rate. The most common daily doses of oral corticosteroid courses were 25 mg and 40 mg, with course lengths of 14 and 7 days, respectively. Seventy-seven per cent of respondents prescribed the same daily dose throughout the course. Rhinologists prescribed longer courses with a smaller daily dose of prednisolone. Only one respondent fully agreed that there were clear guidelines regarding the daily dose and the length of oral corticosteroid course in the treatment of chronic rhinosinusitis with nasal polyps.
Conclusion
The heterogeneity of oral corticosteroid prescribing practice in different countries, including Scotland, reveals the need for clear guidelines with a specific oral corticosteroid daily dose and length of the course.
Canonical work argues that macropartisanship—the aggregate distribution of Democrats and Republicans in the country at a given time—is responsive to the economic and political environment. In other words, if times are good when Democrats are in charge (or bad when Republicans are in charge), more Americans will identify with the Democratic Party. We extend the pioneering work of MacKuen, Erikson, and Stimson (1989), who analyzed macropartisanship from 1953 through 1987, to 2021, assessing whether consumer sentiment and presidential approval still influence macropartisanship in an era of nationalized elections and affective polarization. We find that change has occurred. The effect of consumer sentiment on macropartisanship is no longer statistically distinguishable from zero, and we find evidence of “structural breaks” in the macropartisanship time series. Macropartisanship appears to have become less responsive to economic swings; approval-induced changes in macropartisanship have become more fleeting over time.
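As an illustration of the kind of structural-break evidence mentioned above, the sketch below implements a simple Chow test on an AR(1) model of a macropartisanship series at a candidate break date. The series, the lag specification, and the break location are hypothetical stand-ins, not the authors' actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def chow_test(y: pd.Series, break_index: int, lags: int = 1):
    """F-test comparing an AR(lags) fit on the full sample with separate
    fits before and after the candidate break (positional index)."""
    df = pd.concat({"y": y, **{f"lag{i}": y.shift(i) for i in range(1, lags + 1)}},
                   axis=1).dropna()
    X, k = sm.add_constant(df.drop(columns="y")), lags + 1

    def ssr(Xp, yp):
        return sm.OLS(yp, Xp).fit().ssr

    ssr_pooled = ssr(X, df["y"])
    ssr_split = (ssr(X.iloc[:break_index], df["y"].iloc[:break_index])
                 + ssr(X.iloc[break_index:], df["y"].iloc[break_index:]))
    n = len(df)
    f_stat = ((ssr_pooled - ssr_split) / k) / (ssr_split / (n - 2 * k))
    return f_stat, stats.f.sf(f_stat, k, n - 2 * k)

# Synthetic illustration: a quarterly series whose mean level shifts halfway through.
rng = np.random.default_rng(0)
y = pd.Series(np.concatenate([55 + rng.normal(0, 1.5, 120), 50 + rng.normal(0, 1.5, 120)]))
print(chow_test(y, break_index=120))
```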
Startup companies in the healthcare sector often fail because they lack sufficient entrepreneurial, regulatory, and business development expertise. Maturity models provide useful frameworks to assess the state of business elements more systematically than heuristic assessments. However, previous models were developed primarily to characterize the business state of larger nonmedical companies. A maturity index designed specifically for startup companies in the medical product sector could help to identify areas in which targeted interventions could assist business development.
Methods:
A novel MedTech Startup Maturity Index (SMI) was developed by a collaborative team of academic and industry experts and refined through feedback from external stakeholders. Pediatric medical device startups associated with the West Coast Consortium for Technology & Innovation in Pediatrics (CTIP) were scored and ranked according to the SMI following semi-structured interviews. The CTIP executive team independently ranked the maturity of each company based on their extensive experiences with the same companies.
Results:
SMI scores for 16 companies ranged from 1.2 to 3.8 out of 4. These scores were well aligned with heuristic CTIP rankings for 14 out of 16 companies, reflected by strong correlations between the two datasets (Spearman’s rho = 0.721, P = 0.002, and Kendall’s tau-b = 0.526, P = 0.006).
Conclusions:
The SMI yields maturity scores that correlate well with expert rankings but can be assessed without prior company knowledge and can identify specific areas of concern more systematically. Further research is required to generalize and validate the SMI as a pre-/post-evaluation tool.
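For readers wanting to reproduce the rank-agreement check reported in the Results, the sketch below computes Spearman's rho and Kendall's tau-b between SMI-derived ranks and independent expert ranks. The scores and ranks shown are made-up illustrative values, not CTIP data.

```python
from scipy import stats

smi_scores   = [1.2, 2.1, 3.8, 2.7, 1.9, 3.1, 2.4, 3.5]    # SMI (out of 4), hypothetical
expert_ranks = [8,   6,   1,   4,   7,   2,   5,   3]      # 1 = most mature, hypothetical

smi_ranks = stats.rankdata([-s for s in smi_scores])        # 1 = most mature by SMI
rho, p_rho = stats.spearmanr(smi_ranks, expert_ranks)
tau, p_tau = stats.kendalltau(smi_ranks, expert_ranks)      # tau-b by default
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
print(f"Kendall tau-b = {tau:.3f} (p = {p_tau:.3f})")
```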
The five times sit-to-stand test (FTSS) is an established functional test, used clinically as a measure of lower-limb strength, endurance and falls risk. We report a novel method to estimate and classify cognitive function, balance impairment and falls risk using the FTSS and body-worn inertial sensors. A total of 168 community-dwelling older adults received a Comprehensive Geriatric Assessment, which included the Mini-Mental State Examination (MMSE) and the Berg Balance Scale (BBS). Each participant performed an FTSS, with inertial sensors on the thigh and torso, either at home or in the clinical environment. Adaptive peak detection was used to identify phases of each FTSS from torso- or thigh-mounted inertial sensors. Features were then extracted from each sensor to quantify the timing, postural sway and variability of each FTSS. The relationship between each feature and MMSE and BBS was examined using Spearman's correlation. Intraclass correlation coefficients were used to examine the intra-session reliability of each feature. A Poisson regression model with an elastic net model selection procedure was used to estimate MMSE and BBS scores, while logistic regression and sequential forward feature selection were used to classify participants according to falls risk, cognitive decline and balance impairment. BBS and MMSE were estimated using cross-validation with low root mean squared errors of 2.91 and 1.50, respectively, while the cross-validated classification accuracies for balance impairment, cognitive decline, and falls risk were 81.96, 72.71, and 68.74%, respectively. The novel methods reported provide surrogate measures which may have utility in remote assessment of physical and cognitive function.
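The classification step described above (logistic regression with sequential forward feature selection, evaluated by cross-validation) can be sketched as follows. The synthetic feature matrix stands in for the sensor-derived timing, sway and variability features, and the binary labels for faller/non-faller status; the pipeline is illustrative, not the authors' exact implementation.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for 168 participants with 40 FTSS-derived features each.
X, y = make_classification(n_samples=168, n_features=40, n_informative=8, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                              n_features_to_select=10, direction="forward"),
    LogisticRegression(max_iter=1000),
)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.1%} (+/- {acc.std():.1%})")
```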
The coronavirus disease 2019 (COVID-19) pandemic is having a well-documented impact on the mental health of front-line health and social care workers (HSCWs). However, little attention has been paid to the experiences of, and impact on, the mental health professionals who were rapidly tasked with supporting them.
Aims
We set out to redress this gap by qualitatively exploring UK mental health professionals’ experiences, views and needs while working to support the well-being of front-line HSCWs during the COVID-19 pandemic.
Method
Mental health professionals working in roles supporting front-line HSCWs were recruited purposively and interviewed remotely. Transcripts of the interviews were analysed by the research team following the principles of reflexive thematic analysis.
Results
We completed interviews with 28 mental health professionals from varied professional backgrounds, career stages and settings across the UK. Mental health professionals were motivated and driven to develop new clinical pathways to support HSCWs, whom they perceived as colleagues, and many experienced professional growth. However, this came at a cost: they took on additional responsibilities and increased workloads, were anxious and uncertain about how best to support this workforce, and tended to neglect their own health and well-being. Many were professionally isolated and were affected vicariously by the traumas and moral injuries that healthcare workers talked about in sessions.
Conclusions
This research highlights the urgent need to consider the mental well-being, training and support of mental health professionals who are supporting front-line workers.
The MEDDINI intervention study investigated how advice improved the adoption of a Mediterranean diet (MD) in cardiovascular disease patients. Earlier research profiled the levels of blood metabolites in MEDDINI participants, in the process discovering a number of dietary biomarkers indicative of the MD. However, a potential limitation of this approach is that MD scores are semi-quantitative and do not reflect the absolute amounts of food consumed. Therefore, the present study identified distinct dietary patterns based on quantified food diary data from 58 MEDDINI participants by applying k-means clustering analysis. Previously measured blood metabolites (90), obtained using targeted and untargeted methods, were then assessed for their performance as dietary biomarkers. After careful standardisation (z-scores), optimisation and cross-validation, the dietary data were reduced to six specific food groups, and this led to the formation of two clusters. Cluster 1 included participants who had the lowest intakes of fruit and vegetables, legumes, fish and whole grain cereals and the highest intake of meat and sweet foods (including carbonated drinks). Cluster 2 comprised the participants with the highest intake of fruit and vegetables, legumes, fish and whole grain cereals and the lowest intake of meat and sweet foods (including carbonated drinks). Discriminatory metabolites derived from untargeted analysis included citric acid, tyrosine, malonate, pyroglutamic acid, succinate, betaine, L-asparagine and fumaric acid, which were significantly increased in cluster 2, and 2-hydroxybutyric acid and pyruvic acid, which were significantly decreased in cluster 2. Targeted biomarker analysis showed eight discriminatory metabolites which were significantly increased in cluster 2: docosahexaenoic acid (DHA), alpha-carotene, beta-carotene, beta-cryptoxanthin, vitamin C, lutein, alpha-linolenic acid and lycopene. Conversely, Osbond acid, cholesterol and dihomo-γ-linolenic acid (DGLA) were significantly lower in cluster 2. Several metabolites correlated significantly with individual food groups. For example, citric acid, betaine and vitamin C positively correlated with combined fruit, fruit juice and vegetable intake (r = 0.20, p = 0.018; r = 0.20, p = 0.02; and r = 0.34, p = 5.7E-5, respectively). DHA, alpha-carotene and beta-carotene significantly correlated with fish intake (r = 0.58, p = 1.94E-13; r = 0.40, p = 2E-6; and r = 0.30, p = 3.5E-4, respectively). The present study demonstrates the utility of clustering analysis for effectively assessing adherence to healthy dietary patterns and for the discovery of novel dietary biomarkers.
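A minimal sketch of the clustering step described above follows: z-score standardisation of six food-group intakes followed by k-means with k = 2. The food-group names and the synthetic intake values are illustrative placeholders for the quantified food-diary data.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

food_groups = ["fruit_veg", "legumes", "fish", "wholegrain", "meat", "sweet_foods"]
rng = np.random.default_rng(1)
diary = pd.DataFrame(rng.gamma(2.0, 1.5, size=(58, 6)), columns=food_groups)  # stand-in intakes

Z = StandardScaler().fit_transform(diary)                      # z-score each food group
diary["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(diary.groupby("cluster")[food_groups].mean().round(2))   # mean intake per cluster
```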
A longstanding issue in the field of nutrition is the potential inaccuracy of methods traditionally used for dietary assessment (i.e. food diaries and food frequency questionnaires). It is possible to overcome the limitations and biases of these techniques by combining them with analytical measurements in human biofluids. Metabolomic technologies are gaining popularity as nutritional tools due to their capacity to measure metabolic responses to external stimuli, such as the ingestion of certain foods. This project performed both LC-MS and 1H-NMR metabolomic profiling on serum samples collected as part of the NICOLA study (Northern Ireland Cohort for the Longitudinal Study of Ageing) in order to discover novel dietary biomarkers. A dietary validation cohort (NIDAS) was incorporated within NICOLA, involving 45 males and 50 females, aged 50 years and over. Participants provided detailed dietary data (4-day food diary) and blood samples at two time-points, six months apart. Serum samples were processed on two analytical platforms. 1H-NMR spectra were acquired using a Bruker 600 MHz Ascent coupled to a TCI cryoprobe and processed using Bayesil (University of Alberta, Canada). A Waters TQ-S coupled with an Acquity I-class UPLC was used in combination with a targeted commercially available kit (AbsoluteIDQ p180 kit, Biocrates). The mass spectra obtained were processed with MetIDQ and verified using MassLynx (v4.1). Data were tested for normality, and metabolite concentrations were correlated with recorded dietary intake of each food type using SPSS. Additional tests (PCA, PLS-DA, ROC curves) were performed on MetaboAnalyst 4.0 (University of Alberta, Canada). More than 50 statistically significant (P < 0.05) food-metabolite correlations were detected, 15 of which remained significant after eliminating potential confounding from sex, age and BMI. The strongest correlations were between fruit consumption and acetic acid, and between dairy consumption and certain glycerophospholipids (e.g. LysoPC aa C20:3). Stratifying the cohort by gender yielded further correlations, including PC ae C38:2 (dairy; males), PC aa C34:4 (dairy; females), PC aa C36:4 (dairy; females) and trans-4-Hydroxyproline (meat; males). A number of potential blood-based food biomarkers were detected, many of which are gender-specific and some of which are corroborated by previously published studies. However, further validation work is required. For example, biological plausibility needs to be established, and the findings need to be reproduced in other cohorts to demonstrate their applicability in larger and more diverse populations. These results contribute to the ongoing efforts to discover and validate reliable nutritional biomarkers as an objective and unbiased measurement of food intake.
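One simple way to check whether a food-metabolite correlation survives adjustment for sex, age and BMI, as described above, is to regress both variables on the confounders and correlate the residuals (a partial Spearman correlation). The column names and synthetic data below are hypothetical stand-ins for the NICOLA/NIDAS variables, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def partial_spearman(df, x, y, confounders):
    """Spearman correlation of x and y after residualising both on the confounders."""
    C = sm.add_constant(pd.get_dummies(df[confounders], drop_first=True).astype(float))
    rx = sm.OLS(df[x], C).fit().resid
    ry = sm.OLS(df[y], C).fit().resid
    return stats.spearmanr(rx, ry)

# Synthetic illustration with an assumed fruit-acetic acid association.
rng = np.random.default_rng(2)
n = 95
df = pd.DataFrame({
    "fruit_intake": rng.gamma(2, 1, n),
    "age": rng.integers(50, 90, n).astype(float),
    "bmi": rng.normal(27, 4, n),
    "sex": rng.choice(["M", "F"], n),
})
df["acetic_acid"] = 0.4 * df["fruit_intake"] + rng.normal(0, 1, n)
print(partial_spearman(df, "fruit_intake", "acetic_acid", ["age", "bmi", "sex"]))
```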
OBJECTIVES/SPECIFIC AIMS: Early life stress is known to greatly impact neurodevelopment during critical periods, conferring risk for various psychopathologies, including the onset and exacerbation of schizophrenia and anxiety disorders. The endocannabinoid system is highly integrated into the stress response and may be one means by which early life stress produces such deleterious effects. Using a naturalistic, ecologically valid animal model, this study explored interactions between the stress response and endocannabinoid systems within the cerebellum, a region dense with CB1 endocannabinoid receptors and shown to be susceptible to stress. METHODS/STUDY POPULATION: This study explored behavioral and neural impacts of early life stress in Long-Evans rats reared with or without limited access to bedding material during postnatal day (PND) 2-9. Corticosterone (CORT) levels were measured at PND8 and 70. During PND50-70, rats were assessed on Novel Object Recognition to test memory, Rotarod to evaluate cerebellar integrity, Elevated Plus Maze to assay anxiety, Social Preference, and Eyeblink Conditioning, a cerebellar-dependent and endocannabinoid-mediated task. Lipid analysis was performed on PND70 tissue samples of cerebellar interpositus (IP) nucleus via high-performance liquid chromatography and tandem mass spectrometry. RESULTS/ANTICIPATED RESULTS: Both male and female rats experiencing early life stress exhibited significantly impaired recognition memory (N = 16-20/group). Female rats having undergone stress exhibited decreased social preference compared to normally reared females (N = 11/group). Stressed males showed facilitated eyeblink conditioning compared to normally reared males (N = 7-9/group). There were no group differences in rotarod or elevated plus maze performance or CORT levels at PND8 or 70 across rearing groups. At PND70, male rats experiencing early life stress exhibited a significant decrease in 2-arachidonoyl glycerol (2-AG) and arachidonic acid levels in the IP nucleus compared to normally reared males (N = 8-9/group). Compared to normally reared females, those experiencing early life stress exhibited a significant increase in prostaglandin E2 levels in the IP nucleus (N = 6-7/group). DISCUSSION/SIGNIFICANCE OF IMPACT: Early life stress, induced by limited bedding, resulted in sex-specific behavioral and lipid impairments. Results suggest that stress causes long-term alterations in endocannabinoid dynamics in the cerebellar IP nucleus of males and in sex-related lipids in the female cerebellum. These changes may contribute to observed long-term behavioral aberrations. Moreover, findings suggest these behavioral changes may be the result of negative-feedback dysfunction (as evidenced by decreased endocannabinoids in males) or increased neural inflammation or proliferation (as evidenced by increased prostaglandins in females). Future analysis will quantify mRNA and protein for cannabinoid receptors to better characterize aberrations to this system. Moreover, other neural regions dense with cannabinoid receptors (e.g., PFC, hippocampus) will be investigated. This work provides a basis for understanding the impact of stress on the development of cognitive deficits observed in psychotic and anxiety disorders. Specifically, facilitation of eyeblink conditioning complements research in humans with anxiety disorders.
Broadly, understanding stress-related endocannabinoid dysregulation may provide insights into risks for, and the development of, psychopathology and uncover novel therapeutic targets with high translational power.
OBJECTIVES/SPECIFIC AIMS: Using a novel biomechanical-based motor speech assessment alongside commonly used clinically-based motor speech assessments, the goal of this study was to describe longitudinal recovery in speech movements and functional speech in a cohort of 5 patients following facial transplantation. METHODS/STUDY POPULATION: Five participants who had received either full or partial face transplantation were included in this study. Each participant received a unique facial graft from their donor, which included varied amounts of soft tissue, facial musculature, nerve, and bone. Two participants were early in the recovery period and were assessed from zero to 24 months post-transplantation. Three participants were late in the recovery period and were assessed from 36 to 60 months post-transplantation. Each participant completed two data collection sessions and the average time between sessions was 20.4 months. At each session, orofacial movements were recorded using a 3D motion capture system. A 4-sensor head marker was used to subtract head movement (translation and rotation) from the facial markers. The analyses in this study were restricted to two markers: midline lower lip and a virtually calculated midline jaw marker. A marker at the top of the nose bridge was used as the origin point. The following kinematic variables were obtained from each lip-jaw movement time-series: peak movement speed (mm/s) and displacement (mm). Each patient was instructed to perform 10 repetitions of the phrase “buy bobby a puppy” at his or her typical speaking rate and volume. Sentence-level intelligibility was obtained using the Sentence Intelligibility Test (SIT) and word-level intelligibility was obtained using the Word Intelligibility Test, following standard procedures. Intelligibility, measured in percentage of words correctly transcribed, and speaking rate, measured in words per minute (wpm), were derived from the SIT sentences for each patient. Intelligibility, measured in percentage of words correctly chosen via multiple choice, was derived from the Word Intelligibility Test. RESULTS/ANTICIPATED RESULTS: Effect sizes (Cohen’s d) across the 10 trials of “buy bobby a puppy” were computed to assess the effects of recovery time on range of motion and speed of the lower lip alone, the jaw alone, and the lower lip and jaw together. The largest effect sizes were observed for increased range of motion and increased speed of the articulators for participants within 24 months of surgery. Smaller effect sizes were observed for these parameters for the participants in the later stages of recovery, with some participants showing declines in range of motion and speed of some but not all articulators. Descriptive statistics indicate that both speech and word intelligibility improvements are most notable in the first two years following transplantation and appear to plateau during the later stages of recovery. Only two out of five of our participants achieved “normal” speech intelligibility (i.e., >97%) at five years post-transplantation. DISCUSSION/SIGNIFICANCE OF IMPACT: Biomechanical assessment revealed that kinematic recovery of articulator range of motion and speed appears most significant in the first two years following surgery, but that improvement continues to some degree as far as five years post-transplant. Clinically-based assessments suggest that gains in intelligibility appear to plateau by three years post-surgery.
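The effect-size computation described in the Results can be illustrated with a short sketch of Cohen's d for a kinematic measure (e.g. lower-lip peak speed) across the ten sentence repetitions at two sessions. The numbers below are invented for illustration and are not patient data.

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d between two sets of trials, using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled_sd

session_1 = np.array([62, 58, 65, 60, 63, 59, 61, 64, 57, 60], dtype=float)  # mm/s, hypothetical
session_2 = np.array([74, 70, 77, 72, 69, 75, 73, 71, 76, 70], dtype=float)  # mm/s, hypothetical
print(f"Cohen's d = {cohens_d(session_1, session_2):.2f}")
```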
Mismatch negativity (MMN) is an event-related potential (ERP) component reflecting auditory predictive coding. Repeated standard tones evoke increasing positivity (‘repetition positivity’; RP), reflecting strengthening of the standard's memory trace and the prediction that it will recur. Likewise, deviant tones preceded by more standard repetitions evoke greater negativity (‘deviant negativity’; DN), reflecting stronger prediction error signaling. These memory trace effects are also evident in the MMN difference wave. Here, we assess group differences and test-retest reliability of these indices in schizophrenia patients (SZ) and healthy controls (HC).
Methods
Electroencephalography was recorded twice, 2 weeks apart, from 43 SZ and 30 HC, during a roving standard paradigm. We examined ERPs to the third, eighth, and 33rd standards (RP), immediately subsequent deviants (DN), and the corresponding MMN. Memory trace effects were assessed by comparing amplitudes associated with the three standard repetition trains.
Results
Compared with controls, SZ showed reduced MMNs and DNs, but normal RPs. Both groups showed memory trace effects for RP, MMN, and DN, with a trend for attenuated DNs in SZ. Intraclass correlations obtained via this paradigm indicated good-to-moderate reliabilities for overall MMN, DN and RP, but moderate-to-poor reliabilities for components associated with short, intermediate, and long standard trains, and poor reliability of their memory trace effects.
Conclusion
MMN deficits in SZ reflected attenuated prediction error signaling (DN), with relatively intact predictive code formation (RP) and memory trace effects. This roving standard MMN paradigm requires additional development/validation to obtain suitable levels of reliability for use in clinical trials.
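The test-retest reliability metric underlying the results above can be illustrated with a two-way random-effects, absolute-agreement, single-measure ICC(2,1) computed from a subjects-by-sessions matrix of ERP amplitudes. Whether ICC(2,1) matches the exact variant used in the study is an assumption; the data below are synthetic stand-ins, not recordings.

```python
import numpy as np

def icc_2_1(X: np.ndarray) -> float:
    """ICC(2,1) for a matrix of shape (n_subjects, n_sessions)."""
    n, k = X.shape
    grand = X.mean()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()      # between subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()      # between sessions
    ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols
    msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(3)
true_amp = rng.normal(-2.0, 1.0, size=60)                        # subject-level MMN amplitude (uV)
sessions = np.column_stack([true_amp + rng.normal(0, 0.6, 60),   # session 1
                            true_amp + rng.normal(0, 0.6, 60)])  # session 2
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```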
Returning genomic research results to family members raises complex questions. Genomic research on life-limiting conditions such as cancer, and research involving storage and reanalysis of data and specimens long into the future, makes these questions pressing. This author group, funded by an NIH grant, published consensus recommendations presenting a framework. This follow-up paper offers concrete guidance and tools for implementation. The group collected and analyzed relevant documents and guidance, including tools from the Clinical Sequencing Exploratory Research (CSER) Consortium. The authors then negotiated a consensus toolkit of processes and documents. That toolkit offers sample consent and notification documents plus decision flow-charts to address return of results to family of living and deceased participants, in adult and pediatric research. Core concerns are eliciting participant preferences on sharing results with family and on choice of a representative to make decisions about sharing after participant death.
Postoperative delirium has been associated with poorer long-term survival in transcatheter aortic valve replacement (TAVR) and surgical aortic valve replacement (SAVR) patients. However, its effect on hospitalization costs and length of stay in these populations has not been formally assessed.
METHODS:
Using the Medicare Provider Analysis and Review File, we retrospectively analyzed elderly (80 years of age and older) Medicare patients receiving TAVR and SAVR in the United States during the 2015 fiscal year. ICD-9-CM codes were used to identify postoperative delirium diagnoses. The incremental hospital resource consumption, measured as hospital cost and length of stay, was estimated for patients with postoperative delirium during their TAVR or SAVR index hospitalization. Multivariate regression models were used for the adjusted cost estimates controlling for patient demographics, comorbidities, and complications.
RESULTS:
A total of 21,088 claims were available for analysis (12,114 TAVR and 8,974 SAVR). The TAVR group was older than the SAVR group (mean age 87 versus 84 years; p < .001), and TAVR patients presented with a higher comorbidity burden (Charlson Index score 3.0 versus 2.1; p < .0001). Postoperative delirium during the index hospitalization occurred in 1.6 percent of TAVR patients compared with 3.6 percent of SAVR patients (p < .0001). For the overall cohort, the regression-adjusted incremental cost of postoperative delirium was USD 15,592 (p < .0001). Patients experiencing delirium also had a significantly longer hospital length of stay (4.16 days; p < .0001). When stratified by treatment approach, the adjusted incremental cost was USD 13,862 for TAVR (p < .0001) and USD 16,656 for SAVR (p < .0001).
CONCLUSIONS:
While infrequent, postoperative delirium significantly increased hospital cost and length of stay following transcatheter or surgical aortic valve replacement (AVR). Despite a significantly higher comorbidity burden, TAVR was associated with lower postoperative delirium rates compared to SAVR. Moreover, post-TAVR delirium may be associated with less resource consumption than post-SAVR delirium. Future studies should seek to determine whether general anesthesia avoidance in appropriately selected transfemoral TAVR patients can further decrease rates of delirium.
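A sketch of how an adjusted incremental cost of this kind can be obtained follows: an ordinary least-squares model of hospitalization cost on a delirium indicator plus covariates, where the delirium coefficient is the adjusted increment. The variable names, coefficients and synthetic data are illustrative, not Medicare claims.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2_000
df = pd.DataFrame({
    "delirium": rng.binomial(1, 0.03, n),
    "age": rng.integers(80, 96, n),
    "charlson": rng.poisson(2.5, n),
    "savr": rng.binomial(1, 0.45, n),          # 1 = SAVR, 0 = TAVR
})
df["cost"] = (45_000 + 15_000 * df["delirium"] + 2_000 * df["charlson"]
              + 10_000 * df["savr"] + rng.normal(0, 8_000, n))

fit = smf.ols("cost ~ delirium + age + charlson + savr", data=df).fit()
print(f"adjusted incremental cost of delirium: {fit.params['delirium']:.0f}")
```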
The diagnosis of dementia remains inadequate, even within clinical settings. Data on rates and degree of impairment among inpatients are vital for service planning and the provision of appropriate patient care as Ireland's population ages.
Methods:
Every patient aged 65 years and over admitted over a two-week period was invited to participate. Those who met inclusion criteria were screened for delirium then underwent cognitive screening. Demographic, functional, and outcome data were obtained from medical records, participants, and family.
Results:
Consent to participate was obtained from 68.6% of the eligible population, and data were obtained for 143 patients (mean age 78.1 years). Of these, 27.3% met criteria for dementia and 21% had mild cognitive impairment (MCI). Only 41% of those with dementia and 10% of those with MCI had a previously documented impairment. Between-group analysis showed differences in length of stay (p = 0.003), number of readmissions in 12 months (p = 0.036), and likelihood of returning home (p = 0.039) between the dementia and normal groups. MCI outcomes were similar to those of the normal group. No difference was seen for one-year mortality. These effects were less pronounced on multivariate analysis, but dementia continued to show a significant effect on length of stay even after controlling for demographics, personal and family history, and anxiety and depression screening scores. Patients with dementia remained in hospital 15.3 days longer (p = 0.047). A dementia diagnosis was the single biggest contributing factor to length of stay in our regression model.
Conclusions:
Cognitive impairment is pervasive and under-recognized in the acute hospital and impacts negatively on patient outcomes.
To examine variation in antibiotic coverage and detection of resistant pathogens in community-onset pneumonia.
DESIGN
Cross-sectional study.
SETTING
A total of 128 hospitals in the Veterans Affairs health system.
PARTICIPANTS
Hospitalizations with a principal diagnosis of pneumonia from 2009 through 2010.
METHODS
We examined proportions of hospitalizations with empiric antibiotic coverage for methicillin-resistant Staphylococcus aureus (MRSA) and Pseudomonas aeruginosa (PAER) and with initial detection in blood or respiratory cultures. We compared lowest- versus highest-decile hospitals, and we estimated adjusted probabilities (AP) for patient- and hospital-level factors predicting coverage and detection using hierarchical regression modeling.
RESULTS
Among 38,473 hospitalizations, empiric coverage varied widely across hospitals (MRSA lowest vs highest, 8.2% vs 42.0%; PAER lowest vs highest, 13.9% vs 44.4%). Detection rates also varied (MRSA lowest vs highest, 0.5% vs 3.6%; PAER lowest vs highest, 0.6% vs 3.7%). Whereas coverage was greatest among patients with recent hospitalizations (AP for anti-MRSA, 54%; AP for anti-PAER, 59%) and long-term care (AP for anti-MRSA, 60%; AP for anti-PAER, 66%), detection was greatest in patients with a previous history of a positive culture (AP for MRSA, 7.9%; AP for PAER, 11.9%) and in hospitals with a high prevalence of the organism in pneumonia (AP for MRSA, 3.9%; AP for PAER, 3.2%). Low hospital complexity and rural setting were strong negative predictors of coverage but not of detection.
CONCLUSIONS
Hospitals demonstrated widespread variation in both coverage and detection of MRSA and PAER, but probability of coverage correlated poorly with probability of detection. Factors associated with empiric coverage (eg, healthcare exposure) were different from those associated with detection (eg, microbiology history). Providing microbiology data during empiric antibiotic decision making could better align coverage to risk for resistant pathogens and could promote more judicious use of broad-spectrum antibiotics.
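One way to obtain 'adjusted probabilities' of the kind reported above is to fit a logistic model of empiric coverage on patient factors and then average the predicted probabilities with a single factor toggled for every patient (marginal standardization). This single-level logit is a simplification of the hierarchical model used in the study, and the variables and data below are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 5_000
df = pd.DataFrame({
    "recent_hosp": rng.binomial(1, 0.30, n),
    "long_term_care": rng.binomial(1, 0.10, n),
    "prior_positive_culture": rng.binomial(1, 0.05, n),
})
lin_pred = (-1.6 + 1.2 * df["recent_hosp"] + 1.5 * df["long_term_care"]
            + 0.8 * df["prior_positive_culture"])
df["mrsa_coverage"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

fit = smf.logit("mrsa_coverage ~ recent_hosp + long_term_care + prior_positive_culture",
                data=df).fit(disp=0)
for value in (0, 1):
    p = fit.predict(df.assign(recent_hosp=value)).mean()
    print(f"recent_hosp={value}: adjusted P(anti-MRSA coverage) = {p:.2f}")
```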
To determine the source of a healthcare-associated outbreak of Pantoea agglomerans bloodstream infections.
DESIGN
Epidemiologic investigation of the outbreak.
SETTING
Oncology clinic (clinic A).
METHODS
Cases were defined as Pantoea isolation from blood or catheter tip cultures of clinic A patients during July 2012–May 2013. Clinic A medical charts and laboratory records were reviewed; infection prevention practices and the facility’s water system were evaluated. Environmental samples were collected for culture. Clinical and environmental P. agglomerans isolates were compared using pulsed-field gel electrophoresis.
RESULTS
Twelve cases were identified; median (range) age was 65 (41–78) years. All patients had malignant tumors and had received infusions at clinic A. Deficiencies in parenteral medication preparation and handling were identified (eg, placing infusates near sinks with potential for splash-back contamination). Facility inspection revealed substantial dead-end water piping and inadequate chlorine residual in tap water from multiple sinks, including the pharmacy clean room sink. P. agglomerans was isolated from composite surface swabs of 7 sinks and an ice machine; the pharmacy clean room sink isolate was indistinguishable by pulsed-field gel electrophoresis from 7 of 9 available patient isolates.
CONCLUSIONS
Exposure of locally prepared infusates to a contaminated pharmacy sink caused the outbreak. Improvements in parenteral medication preparation, including moving chemotherapy preparation offsite, along with terminal sink cleaning and water system remediation ended the outbreak. Greater awareness of recommended medication preparation and handling practices as well as further efforts to better define the contribution of contaminated sinks and plumbing deficiencies to healthcare-associated infections are needed.
We sought to address a major objective of the CAEP Academic Section by conducting an environmental scan of the academic emergency medicine programs across the 17 Canadian medical schools.
Methods
We developed an 84-question questionnaire, which was distributed to academic heads. The responses were validated by phone by the lead author to ensure that the questions were answered completely and consistently. Details of pediatric emergency medicine units were excluded from the scan.
Results
At eight of 17 universities, emergency medicine has full departmental status, and at two it has no official academic status. Canadian academic emergency medicine is practiced at 46 major teaching hospitals and 13 specialized pediatric hospitals. Another 69 Canadian hospital EDs regularly take clinical clerks and emergency medicine residents. There are 31 full professors of emergency medicine in Canada. Teaching programs are strong, with clerkships offered at 16/17 universities, CCFP(EM) programs at 17/17, and RCPSC residency programs at 14/17. Fourteen sites have at least one physician with a Master’s degree in education. There are 55 clinical researchers with salary support at 13 universities. Sixteen sites have published peer-reviewed papers in the past five years, ranging from four to 235 per site. Annual budgets range from $200,000 to $5,900,000.
Conclusion
This comprehensive review of academic activities in emergency medicine across Canada identifies areas of strengths as well as opportunities for improvement. CAEP and the Academic Section hope we can ultimately improve ED patient care by sharing best academic practices and becoming better teachers, educators, and researchers.
The debate about how to manage individual research results and incidental findings in genetic and genomic research has focused primarily on what information, if any, to offer back to research participants. However, increasing controversy surrounds the question of whether researchers have any responsibility to offer a participant’s results (defined here to include both individual research results and incidental findings) to the participant’s relatives, including after the participant’s death. This question arises in multiple contexts, including when researchers discover a result with potentially important health implications for genetic relatives, when a participant’s relatives ask a researcher whether any research results about the participant have implications for their own health or reproductive planning, when a participant’s relative asks whether any of the participant’s results have implications for a child’s health, and when the participant is deceased and the participant’s relatives seek information about the participant’s genetic results in order to address their own health or reproductive concerns.