OBJECTIVES/GOALS: Fluid boluses are administered to hypotensive, critically ill children but may not reverse hypotension, leading to delayed initiation of vasoactive infusions, end-organ damage, and mortality. We hypothesize that a machine learning-based model will predict which children will have a sustained response to fluid bolus. METHODS/STUDY POPULATION: We will conduct a single-center retrospective observational cohort study of hypotensive critically ill children who received intravenous isotonic fluid of at least 10 ml/kg within 72 hours of pediatric intensive care unit admission between 2013 and 2023. We will extract physiologic variables from stored bedside monitor data and clinical variables from the EHR. Fluid responsiveness (FR) will be defined as a mean arterial pressure (MAP) increase of ≥10%. We will construct elastic net, random forest, and long short-term memory (LSTM) models to predict FR. We will compare complicated course (multiple organ dysfunction on day 7 or death by day 28) between: 1) FRs and non-FRs, 2) predicted FRs and non-FRs, 3) FRs and non-FRs stratified by race/ethnicity, and 4) FRs and non-FRs stratified by sex as a biologic variable. RESULTS/ANTICIPATED RESULTS: We anticipate that approximately 800 critically ill children will receive 2,000 intravenous isotonic fluid boluses, with a 60% rate of FR. We anticipate being able to complete all three models. We hypothesize that the best-performing model will be the LSTM and that the easiest to interpret will be the tree-based random forest. We hypothesize that non-FRs will have a higher rate of complicated course than FRs and that predicted non-FRs will have a higher rate of complicated course than predicted FRs. Based on previous adult studies, we hypothesize that there will be a higher rate of complicated course in patients of Black race and/or Hispanic ethnicity when compared with non-Hispanic white patients. We also hypothesize that there will be no difference in complicated course when comparing sex as a biologic variable.
DISCUSSION/SIGNIFICANCE: There is a critical need for easily deployed, real-time prediction of fluid response to personalize and improve resuscitation for children in shock. We anticipate that the clinical application of such a model will decrease time spent hypotensive for critically ill children, leading to decreased morbidity and mortality.
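As a minimal illustrative sketch of the fluid-responsiveness label defined in the abstract above (MAP increase of ≥10%), the classification rule can be expressed as a simple function. The function name, signature, and example values are ours; the study's exact measurement windows and sustainment criteria are not specified in the abstract.

```python
def is_fluid_responsive(map_pre, map_post, threshold=0.10):
    """Label a fluid bolus as fluid responsive (FR) if mean arterial
    pressure (MAP) rises by at least `threshold` (10% per the abstract).
    Illustrative sketch only, not the study's implementation."""
    if map_pre <= 0:
        raise ValueError("MAP must be positive")
    return (map_post - map_pre) / map_pre >= threshold

# A rise from 52 to 58 mmHg is ~11.5%, so FR; 52 to 55 is ~5.8%, so not FR.
print(is_fluid_responsive(52, 58))  # True
print(is_fluid_responsive(52, 55))  # False
```

This binary label would then serve as the prediction target for the elastic net, random forest, and LSTM models described in the methods.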
People with schizophrenia (PSZ) are impaired in attentional prioritization of non-salient but relevant stimuli over salient distractors during visual working memory (VWM) encoding. Conversely, guidance of top–down attention by external predictive cues is intact. Yet, it is unknown whether this preserved ability can help PSZ encode more information in the presence of salient distractors.
Methods
We employed a visuospatial change-detection task using four Gabor patches with differing orientations in 66 PSZ and 74 healthy controls (HCS). Two of the Gabor patches flickered and were designated as either targets or distractors; either a predictive or a non-predictive cue was displayed to manipulate top–down attention, resulting in four conditions.
Results
We observed significant effects of group, salience and cue, as well as significant interactions of salience by cue, group by salience and group by cue. Across all conditions, PSZ stored significantly less information in VWM than HCS. With a non-predictive cue, PSZ stored significantly less non-flickering than flickering information. However, with a predictive cue, PSZ stored significantly more information, both flickering and non-flickering.
Conclusions
Our findings indicate that control of attentional selection is impaired in schizophrenia. We demonstrate that additional top–down information significantly improves performance in PSZ. The observed deficit in attentional control suggests a disturbance of GABAergic inhibition in early visual areas. Moreover, our findings are indicative of a mechanism for enhancing attentional control in PSZ, which could be utilized by pro-cognitive interventions. Thus, the current paradigm is suitable to reveal both preserved and compromised cognitive component processes in schizophrenia.
Newcastle disease (ND) is a notifiable disease affecting chickens and other avian species caused by virulent strains of Avian paramyxovirus type 1 (APMV-1). While outbreaks of ND can have devastating consequences, avirulent strains of APMV-1 generally cause subclinical infections or mild disease. However, viruses can cause different levels of disease in different species, and virulence can evolve following cross-species transmission events. This report describes the detection of three cases of avirulent APMV-1 infection in Great Britain (GB). Case 1 emerged from the ‘testing to exclude’ scheme in chickens in Shropshire, while cases 2 and 3 arose directly from notifiable avian disease investigations in chicken broilers in Herefordshire and on premises in Wiltshire containing ducks and mixed species, respectively. Class II/genotype I.1.1 APMV-1 from case 1 shared 99.94% identity with the Queensland V4 strain of APMV-1. Class II/genotype II APMV-1 was detected from case 2, while the class II/genotype I.2 virus from case 3 aligned closely with strains isolated from Anseriformes. Exclusion of ND through rapid detection of avirulent APMV-1 is important where clinical signs caused by avirulent or virulent APMV-1s could be ambiguous. Understanding the diversity of APMV-1s circulating in GB is critical to understanding the disease threat from these adaptable viruses.
To evaluate the incidence of inadvertent parathyroidectomy, identify risk factors, determine the location of inadvertently excised glands, review pathology reporting in inadvertent parathyroidectomy, and explore relationships between inadvertent parathyroidectomy and post-surgical hypoparathyroidism or hypocalcaemia.
Methods
A retrospective cohort study of 899 thyroidectomies between 2015 and 2020 was performed. Histopathology slides of patients who had an inadvertent parathyroidectomy and a random sample of patients without a reported inadvertent parathyroidectomy were reviewed.
Results
Inadvertent parathyroidectomy occurred in 18.5 per cent of thyroidectomy patients. Central neck dissection was an independent risk factor (inadvertent parathyroidectomy = 49.4 per cent with central neck dissection, 12.0 per cent without central neck dissection, p < 0.001). Most excised parathyroid glands were extracapsular (53.3 per cent), followed by subcapsular (29.1 per cent) and intrathyroidal (10.9 per cent). Parathyroid tissue was found in 10.2 per cent of specimens where no inadvertent parathyroidectomy was reported. Inadvertent parathyroidectomy was associated with a higher incidence of six-month post-surgical hypoparathyroidism or hypocalcaemia (19.8 per cent who had an inadvertent parathyroidectomy, 7.7 per cent without inadvertent parathyroidectomy).
Conclusion
Inadvertent parathyroidectomy increases the risk of post-surgical hypoparathyroidism or hypocalcaemia. The proportion of extracapsular glands contributing to inadvertent parathyroidectomy highlights the need for preventative measures.
We investigated seroprevalence and factors associated with Leptospira spp. infections in humans in rural Northern Germany. Sera of 450 participants were tested for Leptospira-reactive IgG antibodies by two enzyme-linked immunosorbent assays (ELISAs). A narrow (specific) and a broad (sensitive) case definition were applied and the results compared in the analysis. Personal data were collected via questionnaire, and associations with serostatus were investigated by multivariable logistic regression. The seroprevalence estimates were 1.6% (95%-confidence interval (CI) = 0.63–3.2%) under the narrow and 4.2% (95%-CI = 2.6–6.5%) under the broad case definition. Few (14%) participants knew about the pathogen. No seropositive participant recalled a prior leptospirosis diagnosis. Spending more than two hours a week in the forest was significantly associated with anti-Leptospira IgG in both models (broad case definition: adjusted odds ratio (aOR) = 2.8, 95%-CI = 1.2–9.1; narrow case definition: aOR = 11.1, 95%-CI = 1.3–97.1). Regular cleaning of storage rooms was negatively associated with seropositivity under the broad case definition (aOR = 0.17, 95%-CI = 0.03–0.98), as was touching a dead rodent in the past 10 years under the narrow case definition (aOR = 0.23, 95%-CI = 0.05–1.04). Our findings support risk factors identified in previous investigations. To counter the low awareness of the pathogen, we recommend that health authorities communicate risks and preventive measures to the public using target-group specific channels.
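The reported seroprevalence point estimates (1.6% and 4.2% of 450 participants) correspond to roughly 7 and 19 positives; those counts are our inference, not stated in the abstract. A Wilson score interval, sketched below, reproduces CIs close to (though not identical with) those reported, which may have been computed with a different (e.g. exact) method.

```python
import math

def wilson_ci(positives, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# Assumed counts: ~7/450 (narrow, 1.6%) and ~19/450 (broad, 4.2%).
low, high = wilson_ci(7, 450)
print(f"narrow: {100 * low:.2f}%-{100 * high:.2f}%")
low_b, high_b = wilson_ci(19, 450)
print(f"broad: {100 * low_b:.2f}%-{100 * high_b:.2f}%")
```

The broad-definition interval comes out at roughly 2.7–6.5%, in line with the reported 2.6–6.5%.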
Although maternal stressor exposure has been associated with shorter telomere length (TL) in offspring, this literature is based largely on White samples. Furthermore, timing of maternal stressors has rarely been examined. Here, we examined how maternal stressors occurring during adolescence, pregnancy, and across the lifespan related to child TL in Black and White mothers.
Method
Mothers (112 Black; 110 White; mean age = 39) and their youngest offspring (n = 222; mean age = 8) were part of a larger prospective cohort study, wherein mothers reported their stressors during adolescence (assessed twice during adolescence for the past year), pregnancy (assessed in midlife for the most recent pregnancy), and across their lifespan (assessed in midlife). Mothers and children provided saliva for TL measurement. Multiple linear regression models examined the interaction of maternal stressor exposure and race in relation to child TL, controlling for maternal TL and child gender and age. Race-stratified analyses were also conducted.
Results
Neither maternal adolescent stressors nor lifespan stressors interacted with race in relation to child TL. In contrast, greater maternal pregnancy stressors were associated with shorter child TL, but this effect was present for children of White but not Black mothers. Moreover, this effect was significant for financial but not social pregnancy stressors. Race-stratified models revealed that greater financial pregnancy stressors predicted shorter telomeres in offspring of White, but not Black, mothers.
Conclusions
Race and maternal stressors interact and are related to biological aging across generations, but these effects are specific to certain races, stressors, and exposure time periods.
Health services research (HSR) is affected by widespread problems with service terminology, including non-commensurability (using different units of analysis for comparisons) and terminological unclarity arising from ambiguous and vague terms. The aim of this study was to identify the magnitude of this terminological bias in health and social services research and health economics by applying an international classification system.
Methods
This study, which was part of the PECUNIA project, followed an ontoterminology approach (disambiguation of technical and scientific terms using a taxonomy and a glossary of terms). A listing of 56 types of health and social services relevant to mental health was compiled from a systematic review of the literature and feedback provided by 29 experts in six European countries. The disambiguation of terms was performed using an ontology-based classification of services (Description and Evaluation of Services and DirectoriEs – DESDE) and its glossary of terms. The analysis focused on the commensurability and the clarity of definitions according to the reference classification system. Interrater reliability was analysed using the κ statistic.
Results
The disambiguation revealed that only 13 terms (23%) of the 56 services selected were accurate. Six terms (11%) were confusing, as they did not correspond to services as defined in the reference classification system (non-commensurability bias); 27 (48%) did not include a clear definition of the target population for which the service was intended; and the definition of types of services was unclear in 59% of the terms: 15 were ambiguous and 11 vague. The κ analyses showed significant agreement on the unit of analysis and the assignment of DESDE codes, and very high agreement on the definition of the target population.
Conclusions
Service terminology is a source of systematic bias in health services research, and certainly in mental healthcare. The magnitude of the problem is substantial. This finding has major implications for the international comparability of resource use in health economics and in quality and equality research. The approach presented in this paper helps to minimise differentiation between services by taking into account key features such as target population, care setting, main activities, and type and number of professionals, among others. It also supports financial incentives for effective health promotion and disease prevention. A detailed analysis of services in terms of cost measurement for economic evaluations reveals the necessity and usefulness of defining services using a coding system and taxonomical criteria rather than ‘text-based descriptions’.
Background: As the second leading cause of years lived with disability in the world, and the first in people under 50, migraine represents a major burden to healthcare systems. This study examined treatment patterns and healthcare resource utilization (HRU) in patients with migraine using real-world data from Alberta. Methods: This was a retrospective cohort study of patients with ≥1 ICD-9-CM/ICD-10-CA code for migraine or ≥1 prescription for a triptan from April 1st, 2012 to March 31st, 2018. Descriptive statistics were used to characterize the study outcomes. Results: The incidence of migraine exceeded 1,000 cases per 100,000 person-years over the study period. The mean age of the cohort (n=199,931) was 40.0 years, and 72.3% were women. Migraine-related HRU accounted for 3%-10% of all HRU across endpoints (e.g., ED visits, hospitalization, physician visits). One-third of the cohort were prescribed acute medications (non-steroidal anti-inflammatories, triptans or other (including opioids)), whereas fewer than one-fifth were prescribed at least one migraine preventive such as tricyclic anti-depressants (proportion: 15%), anti-convulsants (13%), beta-blockers (7%), or neurotoxins (4%). Conclusions: The low medication prescription rates and high HRU indicate potential unmet need and high disability in patients with migraine. The impact of migraine treatment patterns on HRU is an avenue for future research.
The shift in learning environments due to the COVID-19 pandemic necessitates a closer look at course design, faculty approaches to teaching, and student interaction, all of which may predict learner achievement and satisfaction. Transitioning to an online environment requires reinventing, reimagining, and applying “e-flavors” of general learning theory. With this shift to online learning comes the opportunity for misunderstandings and “myths” to arise, which may stand in the way of faculty embracing online learning and fully realizing its potential. This article seeks to address several myths and misconceptions that have arisen in higher education during the rapid shift to online teaching and learning. While not comprehensive, these myths represent a snapshot of common challenges. They are: that we can transfer in-person course design directly to online; that adult learners do not need an empathetic approach; and that online teaching and learning is socially isolating. Through an appreciative inquiry framework, we present each myth in the context of relevant literature and invite faculty with varied online teaching experience to share their own case studies illustrating how they have “busted” these myths, with the goal of identifying locally effective practices for replication that leads to positive change.
Childhood trauma (CT) increases the risk of adult depression. Buffering effects require an understanding of the underlying persistent risk pathways. This study examined whether daily psychological stress processes – how an individual interprets and affectively responds to minor everyday events – mediate the effect of CT on adult depressive symptoms.
Methods
Middle-aged women (N = 183) reported CT at baseline and completed daily diaries of threat appraisals and negative evening affect for 7 days at baseline and at 9 and 18 months. Depressive symptoms were measured across the 1.5-year period. Mediation was examined using multilevel structural equation modeling.
Results
Reported CT predicted greater depressive symptoms over the 1.5-year time period (estimate = 0.27, s.e. = 0.07, 95% CI 0.15–0.38, p < 0.001). Daily threat appraisals and negative affect mediated the effect of reported CT on depressive symptoms (estimate = 0.34, s.e. = 0.08, 95% CI 0.22–0.46, p < 0.001). Daily threat appraisals explained more than half of this effect (estimate = 0.19, s.e. = 0.07, 95% CI 0.08–0.30, p = 0.004). Post hoc analyses in individuals who reported at least moderate severity of CT showed that lower threat appraisals buffered depressive symptoms. A similar pattern was found in individuals who reported no/low severity of CT.
Conclusions
A reported history of CT acts as a latent vulnerability, exaggerating threat appraisals of everyday events, which trigger greater negative evening affect – processes that have important mental health consequences and may provide malleable intervention targets.
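The Results above state that daily threat appraisals explained "more than half" of the mediated effect; this follows directly from the reported estimates (0.19 of the 0.34 indirect effect). A quick arithmetic check, with the values copied from the abstract:

```python
# Estimates reported in the abstract (unstandardized effect sizes).
indirect_total = 0.34   # indirect effect via appraisals + negative affect
via_appraisal = 0.19    # component attributed to threat appraisals alone

share = via_appraisal / indirect_total
print(f"{share:.0%}")   # about 56%, i.e. "more than half"
```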
ENT presentations are prevalent in clinical practice but feature little in undergraduate curricula. Consequently, most medical graduates are not confident managing common ENT conditions. In 2014, the first evidence-based ENT undergraduate curriculum was published to guide medical schools.
Objective
To assess the extent to which current UK medical school learning outcomes correlate with the syllabus of the ENT undergraduate curriculum.
Method
Two students from each participating medical school independently reviewed all ENT-related curriculum documents to determine whether learning outcomes from the suggested curriculum were met.
Results
Sixteen of 34 curricula were reviewed. Only a minority of medical schools delivered teaching on laryngectomy or tracheostomy, nasal packing or cautery, and ENT medications or surgical procedures.
Conclusion
There is wide variability in ENT undergraduate education in UK medical schools. Careful consideration of which topics are prioritised, and the teaching modalities utilised, is essential. In addition, ENT learning opportunities for undergraduates outside of the medical school curriculum should be augmented.
The European Union Free Movement Directive gives professionals the opportunity to work and live within the European Union, but does not set specific requirements for how medical specialists have to be trained, with the exception of a required minimum of four years of education. Efforts have been undertaken to harmonize post-graduate training in psychiatry in Europe since the Treaty of Rome in 1957, with the founding of the UEMS (European Union of Medical Specialists) and the establishment of a charter outlining how psychiatrists should be trained. However, the different curricula for post-graduate training have only been compared by surveys, never through a systematic review of the official national requirements. The published survey data still show great differences between European countries, and unlike other UEMS Boards, the Board of Psychiatry has not introduced a certification for specialists wishing to practise in a foreign country within Europe. Such a European certification could help to maintain a high level of qualification in post-graduate psychiatric training across Europe. Moreover, it would make it easier for employers to assess the educational level of European psychiatrists applying for a job in their field.
Otolaryngology is under-represented in UK medical schools. This presents challenges in terms of exposing students to the diversity of otolaryngology, as well as ‘showcasing’ the specialty as a career option. This study aimed to audit the impact of a change in the delivery of final year tuition on student satisfaction.
Method
Participants were final year medical students completing a 2-day otolaryngology placement. A novel teaching programme was developed in response to feedback from students who completed a baseline teaching programme. The novel programme was evaluated over a 10-week period using questionnaires.
Results
Fifty-eight participants completed the novel programme questionnaire. Overall, there was a positive impact on student satisfaction. Students completing the novel programme expressed a desire for a longer otolaryngology placement.
Conclusion
This approach is an effective means of teaching otolaryngology to undergraduates. A mutual desire for greater exposure to otolaryngology in the undergraduate curriculum is held by medical students and otolaryngologists.
Translocation and rehabilitation programmes are critical tools for wildlife conservation. These methods achieve greater impact when integrated in a combined strategy for enhancing population or ecosystem restoration. During 2002–2016 we reared 37 orphaned southern sea otter Enhydra lutris nereis pups, using captive sea otters as surrogate mothers, then released them into a degraded coastal estuary. Because the sea otter is a keystone species, the observed increases in the local population unsurprisingly brought many ecosystem benefits. The role that surrogate-reared otters played in this success story, however, remained uncertain. To resolve this, we developed an individual-based model of the local population using surveyed individual fates (survival and reproduction) of surrogate-reared and wild-captured otters, and modelled estimates of immigration. Estimates derived from a decade of population monitoring indicated that surrogate-reared and wild sea otters had similar reproductive and survival rates. This was true for males and females, across all ages (1–13 years) and locations evaluated. The model simulations indicated that reconstructed counts of the wild population are best explained by surrogate-reared otters combined with low levels of unassisted immigration. In addition, the model shows that 55% of the observed population growth over this period is attributable to surrogate-reared otters and their wild progeny. Together, our results indicate that integrating surrogacy methods with the reintroduction of juvenile sea otters helped establish a biologically successful population and restore a once-impaired ecosystem.
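To illustrate the general shape of an individual-based population projection like the one described in the sea otter study, here is a toy sketch. The annual survival and birth probabilities below are hypothetical placeholders, not the study's surveyed estimates, and the model omits immigration, age structure, and sex, all of which the actual model included.

```python
import random

def project_population(n0, years, survival=0.85, birth_rate=0.35, seed=42):
    """Toy individual-based projection: each animal survives a year with
    probability `survival` and, if it survives, contributes a surviving
    pup with probability `birth_rate`. Rates are illustrative only."""
    rng = random.Random(seed)
    n = n0
    history = [n]
    for _ in range(years):
        survivors = sum(rng.random() < survival for _ in range(n))
        births = sum(rng.random() < birth_rate for _ in range(survivors))
        n = survivors + births
        history.append(n)
    return history

# Start from the 37 released pups and project a decade forward.
trajectory = project_population(37, 10)
print(trajectory)
```

In a real analysis, such simulated trajectories would be compared against reconstructed survey counts to apportion growth between released animals, their progeny, and immigrants.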
There is growing concern over a future shortfall in provision of UK otolaryngology consultants. There is a declining rate of applications to otolaryngology specialty training in the UK.
Objective
This study aimed to systematically review the literature to establish what factors influence medical students’ and junior doctors’ decision to pursue a career in otolaryngology.
Method
Medline, Embase and PubMed databases were searched in January 2019. Additional manual reference checks of identified literature were performed.
Results
Eleven articles were included in the review. Common factors that positively influenced the decision to pursue a career in otolaryngology were exposure to the specialty, positive role models and a good work-life balance. Lack of exposure was a consistent deterrent from pursuing a career in otolaryngology.
Conclusion
This review reiterates the need for greater exposure to otolaryngology in the undergraduate curriculum. In addition, mentorship for students with an interest in otolaryngology should be a priority.
In recent years, the discovery of massive quasars at $z\sim7$ has provided a striking challenge to our understanding of the origin and growth of supermassive black holes in the early Universe. Mounting observational and theoretical evidence indicates the viability of massive seeds, formed by the collapse of supermassive stars, as a progenitor model for such early, massive accreting black holes. Although considerable progress has been made in our theoretical understanding, many questions remain regarding how (and how often) such objects may form, how they live and die, and how next generation observatories may yield new insight into the origin of these primordial titans. This review focusses on our present understanding of this remarkable formation scenario, based on the discussions held at the Monash Prato Centre from November 20 to 24, 2017, during the workshop ‘Titans of the Early Universe: The Origin of the First Supermassive Black Holes’.
Excavations at the Pre-Pottery Neolithic B ritual site of Naḥal Roded 110 in the Southern Negev, Israel, have revealed evidence—unique to this region—for on-site flint knapping and abundant raptor remains.