Many collaborative online projects such as Wikipedia and OpenStreetMap organize collaboration among their contributors sequentially. In sequential collaboration, one contributor creates an entry which is then consecutively encountered by other contributors who decide whether to adjust or maintain the presented entry. For numeric and geographical judgments, sequential collaboration yields improved judgments over the course of a sequential chain and results in accurate final estimates. We hypothesize that these benefits emerge since contributors adjust entries according to their expertise, implying that judgments of experts have a larger impact compared with those of novices. In three preregistered studies, we measured and manipulated expertise to investigate whether expertise leads to higher change probabilities and larger improvements in judgment accuracy. Moreover, we tested whether expertise results in an increase in accuracy over the course of a sequential chain. As expected, experts adjusted entries more frequently, made larger improvements, and contributed more to the final estimates of sequential chains. Overall, our findings suggest that the high accuracy of sequential collaboration is due to an implicit weighting of judgments by expertise.
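The mechanism described above lends itself to a small illustration. The toy simulation below is not the authors' experimental paradigm; the chain length, error magnitudes and the assumption that a contributor's change probability equals their expertise are all illustrative choices. It simply shows how expertise-dependent adjustments can make the final entry of a sequential chain more accurate than the initial judgment.

```python
# Toy simulation of sequential collaboration: each contributor either keeps
# the current entry or replaces it with a fresh judgment, where more expert
# contributors adjust more often and judge more accurately. All settings are
# illustrative assumptions, not parameters from the reported studies.
import numpy as np

rng = np.random.default_rng(seed=1)
TRUTH = 100.0        # true value of the quantity being estimated
CHAIN_LENGTH = 6     # contributors encountering the entry after its creator
N_CHAINS = 10_000

def run_chain() -> tuple[float, float]:
    """Return the absolute error of the first and the final entry of one chain."""
    first_entry = TRUTH + rng.normal(0.0, 30.0)
    entry = first_entry
    for expertise in rng.uniform(0.0, 1.0, size=CHAIN_LENGTH):
        if rng.random() < expertise:                      # experts adjust more often
            judgment_sd = 30.0 * (1.0 - 0.8 * expertise)  # ...and more accurately
            entry = TRUTH + rng.normal(0.0, judgment_sd)
    return abs(first_entry - TRUTH), abs(entry - TRUTH)

first_err, final_err = np.transpose([run_chain() for _ in range(N_CHAINS)])
print(f"mean absolute error: first entry {first_err.mean():.1f}, "
      f"end of chain {final_err.mean():.1f}")
```

Under these assumptions the final estimates are markedly closer to the truth than the initial ones, because adjustments by high-expertise contributors dominate the end of the chain.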
Objective
To evaluate the incidence of inadvertent parathyroidectomy, identify risk factors, determine the location of inadvertently excised glands, review pathology reporting in inadvertent parathyroidectomy, and explore relationships between inadvertent parathyroidectomy and post-surgical hypoparathyroidism or hypocalcaemia.
Methods
A retrospective cohort study of 899 thyroidectomies between 2015 and 2020 was performed. Histopathology slides of patients who had an inadvertent parathyroidectomy and a random sample of patients without a reported inadvertent parathyroidectomy were reviewed.
Results
Inadvertent parathyroidectomy occurred in 18.5 per cent of thyroidectomy patients. Central neck dissection was an independent risk factor (inadvertent parathyroidectomy = 49.4 per cent with central neck dissection, 12.0 per cent without central neck dissection, p < 0.001). Most excised parathyroid glands were extracapsular (53.3 per cent), followed by subcapsular (29.1 per cent) and intrathyroidal (10.9 per cent). Parathyroid tissue was found in 10.2 per cent of specimens where no inadvertent parathyroidectomy was reported. Inadvertent parathyroidectomy was associated with a higher incidence of six-month post-surgical hypoparathyroidism or hypocalcaemia (19.8 per cent who had an inadvertent parathyroidectomy, 7.7 per cent without inadvertent parathyroidectomy).
Conclusion
Inadvertent parathyroidectomy increases the risk of post-surgical hypoparathyroidism or hypocalcaemia. The proportion of extracapsular glands contributing to inadvertent parathyroidectomy highlights the need for preventative measures.
Health services research (HSR) is affected by a widespread problem related to service terminology including non-commensurability (using different units of analysis for comparisons) and terminological unclarity due to ambiguity and vagueness of terms. The aim of this study was to identify the magnitude of the terminological bias in health and social services research and health economics by applying an international classification system.
Methods
This study, which was part of the PECUNIA project, followed an ontoterminology approach (disambiguation of technical and scientific terms using a taxonomy and a glossary of terms). A listing of 56 types of health and social services relevant to mental health was compiled from a systematic review of the literature and feedback provided by 29 experts in six European countries. The disambiguation of terms was performed using an ontology-based classification of services (Description and Evaluation of Services and DirectoriEs – DESDE) and its glossary of terms. The analysis focused on the commensurability and the clarity of definitions according to the reference classification system. Interrater reliability was analysed using κ.
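As a brief aside on the κ statistic mentioned above, the snippet below shows how agreement between two raters on categorical assignments can be quantified; the rating vectors and category labels are fabricated for demonstration and are not PECUNIA or DESDE data.

```python
# Minimal illustration of Cohen's kappa for interrater reliability;
# the ratings below are invented examples.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["outpatient", "residential", "outpatient", "day_care", "outpatient"]
rater_2 = ["outpatient", "residential", "day_care", "day_care", "outpatient"]

kappa = cohen_kappa_score(rater_1, rater_2)  # 1 = perfect agreement, 0 = chance level
print(f"kappa = {kappa:.2f}")                # ~0.69 for these example ratings
```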
Results
The disambiguation revealed that only 13 (23%) of the 56 service terms selected were accurate. Six terms (11%) were confusing, as they did not correspond to services as defined in the reference classification system (non-commensurability bias); 27 (48%) did not include a clear definition of the target population for which the service was intended; and the definition of the type of service was unclear in 59% of the terms: 15 were ambiguous and 11 vague. The κ analyses showed significant agreement on the unit of analysis and the assignment of DESDE codes, and very high agreement on the definition of the target population.
Conclusions
Service terminology is a source of systematic bias in health services research, and certainly in mental healthcare. The magnitude of the problem is substantial. This finding has major implications for the international comparability of resource use in health economics, quality and equality research. The approach presented in this paper contributes to minimising differentiation between services by taking into account key features such as target population, care setting, main activities, and type and number of professionals, among others. It also contributes to supporting financial incentives for effective health promotion and disease prevention. A detailed analysis of services in terms of cost measurement for economic evaluations reveals the necessity and usefulness of defining services using a coding system and taxonomical criteria rather than by ‘text-based descriptions’.
ENT presentations are prevalent in clinical practice but feature little in undergraduate curricula. Consequently, most medical graduates are not confident managing common ENT conditions. In 2014, the first evidence-based ENT undergraduate curriculum was published to guide medical schools.
Objective
To assess the extent to which current UK medical school learning outcomes correlate with the syllabus of the ENT undergraduate curriculum.
Method
Two students from each participating medical school independently reviewed all ENT-related curriculum documents to determine whether learning outcomes from the suggested curriculum were met.
Results
Sixteen of 34 curricula were reviewed. Only a minority of medical schools delivered teaching on laryngectomy or tracheostomy, nasal packing or cautery, and ENT medications or surgical procedures.
Conclusion
There is wide variability in ENT undergraduate education in UK medical schools. Careful consideration of which topics are prioritised, and the teaching modalities utilised, is essential. In addition, ENT learning opportunities for undergraduates outside of the medical school curriculum should be augmented.
The European Union Free Movement Directive gives professionals the opportunity to work and live within the European Union, but it does not set specific requirements for how medical specialists have to be trained, apart from a required minimum of four years of education. Efforts have been undertaken to harmonize post-graduate training in psychiatry in Europe since the Treaty of Rome in 1957, with the founding of the UEMS (European Union of Medical Specialists) and the establishment of a charter outlining how psychiatrists should be trained. However, the different curricula for post-graduate training have only been compared by surveys, never through a systematic review of the official national requirements. The published survey data still show great differences between European countries and, unlike other UEMS Boards, the Board of Psychiatry has not introduced a certification for specialists willing to practice in a foreign country within Europe. Such a European certification could help to maintain a high level of qualification in post-graduate training in psychiatry all over Europe. Moreover, it would make it easier for employers to assess the educational level of European psychiatrists applying for a job in their field.
Otolaryngology is under-represented in UK medical schools. This presents challenges in terms of exposing students to the diversity of otolaryngology, as well as ‘showcasing’ the specialty as a career option. This study aimed to audit the impact of a change in the delivery of final year tuition on student satisfaction.
Method
Participants were final year medical students completing a 2-day otolaryngology placement. A novel teaching programme was developed in response to feedback from students who completed a baseline teaching programme. The novel programme was evaluated over a 10-week period using questionnaires.
Results
Fifty-eight participants completed the novel programme questionnaire. Overall, there was a positive impact on student satisfaction. Students completing the novel programme expressed a desire for increased otolaryngology placement.
Conclusion
This approach is an effective means of teaching otolaryngology to undergraduates. Medical students and otolaryngologists share a desire for greater exposure to otolaryngology in the undergraduate curriculum.
There is growing concern over a future shortfall in the provision of UK otolaryngology consultants, and the rate of applications to otolaryngology specialty training in the UK is declining.
Objective
This study aimed to systematically review the literature to establish what factors influence medical students’ and junior doctors’ decision to pursue a career in otolaryngology.
Method
Medline, Embase and PubMed databases were searched in January 2019. Additional manual reference checks of identified literature were performed.
Results
Eleven articles were included in the review. Common factors that positively influenced the decision to pursue a career in otolaryngology were exposure to the specialty, positive role models and a good work-life balance. Lack of exposure was a consistent deterrent from pursuing a career in otolaryngology.
Conclusion
This review reiterates the need for greater exposure to otolaryngology in the undergraduate curriculum. In addition, mentorship for students with an interest in otolaryngology should be a priority.
Since US President Donald J. Trump took office in January 2017, the future of the global economy has looked distinctly uncertain. This is not because a process of clear and purposeful change can be said to be underway. Instead, it is because of a pattern of piecemeal, inconsistent and contradictory fragments of policy, both domestic and international in orientation, in the arenas of trade, taxation, business relations, finance and banking, social and welfare provision, immigration, and environmental protection, whose cumulative significance remains unclear. The modest task of this essay is therefore to sketch the contours, patterns, inconsistencies and confusions presented by the Trump administration's approach to shaping the US economy and, by extension, the global economic order, and on that basis to offer an interpretation of its emerging implications for inequality both within the United States and across the world.
The optimal approach to unifocalisation in pulmonary atresia with ventricular septal defect and major aortopulmonary collateral arteries (pulmonary artery/ventricular septal defect/major aortopulmonary collaterals) remains controversial. Moreover, the impact of collateral vessel disease burden on surgical decision-making and late outcomes remains poorly defined. We investigated our centre’s experience in the surgical management of pulmonary artery/ventricular septal defect/major aortopulmonary collaterals.
Materials and methods
Between 1996 and 2015, 84 consecutive patients with pulmonary artery/ventricular septal defect/major aortopulmonary collaterals underwent unifocalisation. In all, 41 patients received single-stage unifocalisation (Group 1) and 43 patients underwent multi-stage repair (Group 2). Preoperative collateral vessel anatomy, branch pulmonary artery reinterventions, ventricular septal defect status, and late right ventricle/left ventricle pressure ratio were evaluated.
Results
Median follow-up was 4.8 years for Group 1 compared with 5.7 years for Group 2 (p = 0.65). The median number of major aortopulmonary collaterals per patient was 3 (range 1 to 8) in Group 1 compared with 4 (range 1 to 8) in Group 2 (p = 0.09). Group 2 had a higher number of lobar/segmental stenoses within collateral vessels (p = 0.02). Group 1 had fewer catheter-based branch pulmonary artery reinterventions, with 5 per patient (inter-quartile range 1 to 7) compared with 9 (inter-quartile range 4 to 14) in Group 2 (p = 0.009). Among patients who achieved ventricular septal defect closure, the median right ventricle/left ventricle pressure ratio was 0.48 in Group 1 compared with 0.78 in Group 2 (p = 0.03). Overall mortality was 6 patients (17%) in Group 1 compared with 9 (21%) in Group 2.
Discussion
Single-stage unifocalisation is a promising repair strategy in select patients, achieving low rates of reintervention for branch pulmonary artery restenosis and excellent mid-term haemodynamic outcomes. However, specific anatomic substrates of pulmonary artery/ventricular septal defect/major aortopulmonary collaterals may be better suited to multi-stage repair. Preoperative evaluation of collateral vessel calibre and function may help inform more patient-specific surgical management.
Coinfection with human immunodeficiency virus (HIV) and viral hepatitis is associated with high morbidity and mortality in the absence of clinical management, making identification of these cases crucial. We examined characteristics of HIV and viral hepatitis coinfections by using surveillance data from 15 US states and two cities. Each jurisdiction used an automated deterministic matching method to link surveillance data for persons with reported acute and chronic hepatitis B virus (HBV) or hepatitis C virus (HCV) infections, to persons reported with HIV infection. Of the 504 398 persons living with diagnosed HIV infection at the end of 2014, 2.0% were coinfected with HBV and 6.7% were coinfected with HCV. Of the 269 884 persons ever reported with HBV, 5.2% were reported with HIV. Of the 1 093 050 persons ever reported with HCV, 4.3% were reported with HIV. A greater proportion of persons coinfected with HIV and HBV were males and blacks/African Americans, compared with those with HIV monoinfection. Persons who inject drugs represented a greater proportion of those coinfected with HIV and HCV, compared with those with HIV monoinfection. Matching HIV and viral hepatitis surveillance data highlights epidemiological characteristics of persons coinfected and can be used to routinely monitor health status and guide state and national public health interventions.
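As a rough sketch of what an automated deterministic match between two surveillance registries can look like, the code below links records that agree exactly on a composite key. The field names and normalisation steps are hypothetical placeholders; actual identifiers and matching rules are jurisdiction-specific and are not described in detail here.

```python
# Hedged sketch of deterministic record linkage between an HIV registry and a
# viral-hepatitis registry. Column names are hypothetical placeholders; real
# surveillance systems define their own identifiers and matching keys.
import pandas as pd

MATCH_KEY = ["last_name", "first_name", "date_of_birth", "sex"]

def deterministic_match(hiv: pd.DataFrame, hep: pd.DataFrame) -> pd.DataFrame:
    """Return records present in both registries, matched on an exact composite key."""
    for df in (hiv, hep):
        # Normalise name fields so trivial formatting differences do not block a match.
        for col in ("last_name", "first_name"):
            df[col] = df[col].str.strip().str.upper()
    return hiv.merge(hep, on=MATCH_KEY, how="inner", suffixes=("_hiv", "_hep"))

# Example use with appropriately structured DataFrames:
# matched = deterministic_match(hiv_cases, hcv_cases)
# print(f"{len(matched) / len(hiv_cases):.1%} of persons with HIV matched an HCV record")
```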
Compulsory admission can be experienced as devaluing and stigmatising by people with mental illness. Emotional reactions to involuntary hospitalisation and stigma-related stress may affect recovery, but longitudinal data are lacking. We, therefore, examined the impact of stigma-related emotional reactions and stigma stress on recovery over a 2-year period.
Method.
Shame and self-contempt as emotional reactions to involuntary hospitalisation, stigma stress, self-stigma and empowerment, as well as recovery were assessed among 186 individuals with serious mental illness and a history of recent involuntary hospitalisation.
Results.
More shame, self-contempt and stigma stress at baseline were correlated with increased self-stigma and reduced empowerment after 1 year. More stigma stress at baseline was associated with poor recovery after 2 years. In a longitudinal path analysis, more stigma stress at baseline predicted poorer recovery after 2 years, mediated by decreased empowerment after 1 year, controlling for age, gender, symptoms and recovery at baseline.
Conclusion.
Stigma stress may have a lasting detrimental effect on recovery among people with mental illness and a history of involuntary hospitalisation. Anti-stigma interventions that reduce stigma stress and programs that enhance empowerment could improve recovery. Future research should test the effect of such interventions on recovery.
We present the first dedicated study into the phenomenon of ice sails. These are clean ice structures that protrude from the surface of a small number of debris-covered glaciers and can grow to heights of over 25 m. We draw together what is known about them from the academic/exploration literature and then analyse imagery. We show here that ice sails can develop by one of two mechanisms, both of which require clean ice to become surrounded by debris-covered ice, where the debris layer is shallow enough for the ice beneath it to melt faster than the clean ice. Once formed, ice sails can persist for decades, in an apparently steady state, before debris layer thickening eventually causes a reversal in the relative melt rates and the ice sails decay to merge back with the surrounding glacier surface. We support our image-based analysis with a surface energy-balance model and show that it compares well with available observations from Baltoro Glacier in the Karakoram. A sensitivity analysis of the model is performed and confirms the results from our empirical study that ice sails require a relatively high evaporative heat flux and/or a relatively low sensible heat flux in order to exist.
Dynamic ice-sheet models are used to assess the contribution of mass loss from the Greenland ice sheet to sea-level rise. Mass transfer from the ice sheet to the ocean occurs in large part through outlet glaciers. Bed topography plays an important role in ice dynamics, since the acceleration from the slow-moving inland ice to an ice stream is in many cases caused by the existence of a subglacial trough or trough system. Two problems arise: most subglacial troughs are features of a scale not resolved in most ice-sheet models, and radar measurements of subglacial topography do not always reach the bottoms of narrow troughs. The trough-system algorithm introduced here employs mathematical morphology and algebraic topology to correctly represent subscale features in a topographic generalization, so the effects of troughs on ice flow are retained in ice-dynamic models. The algorithm is applied to derive a spatial elevation model of Greenland subglacial topography, integrating recently collected radar measurements (CReSIS data) of the Jakobshavn Isbræ, Helheim, Kangerdlussuaq and Petermann glacier regions. The resultant JakHelKanPet digital elevation model has been applied in dynamic ice-sheet modeling and sea-level-rise assessment.
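The published trough-system algorithm itself is more involved, but the sketch below illustrates the basic idea of using a morphological operation during topographic generalization: aggregating each block of a bed DEM by its minimum elevation (a grey-scale erosion followed by subsampling) retains the depth of narrow troughs that a block mean would smooth away. The grid, the coarsening factor and the function names are assumptions chosen for illustration.

```python
# Illustrative sketch only (not the published JakHelKanPet algorithm): keep
# narrow subglacial troughs when coarsening a bed-elevation grid by taking the
# local minimum (grey-scale erosion) instead of the block mean.
import numpy as np
from scipy import ndimage

def coarsen_preserving_troughs(bed: np.ndarray, factor: int) -> np.ndarray:
    """Downsample a bed DEM while retaining the depth of narrow troughs."""
    eroded = ndimage.grey_erosion(bed, size=(factor, factor))  # local-minimum filter
    return eroded[::factor, ::factor]

def coarsen_by_mean(bed: np.ndarray, factor: int) -> np.ndarray:
    """Conventional block-mean generalization, shown for contrast."""
    h = (bed.shape[0] // factor) * factor
    w = (bed.shape[1] // factor) * factor
    blocks = bed[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A trough one cell wide keeps its full depth under the erosion-based scheme
# but is largely averaged away under block means.
bed = np.zeros((8, 8))
bed[:, 4] = -800.0                              # narrow trough, 800 m deep
print(coarsen_preserving_troughs(bed, 4)[0])    # [   0. -800.]
print(coarsen_by_mean(bed, 4)[0])               # [   0. -200.]
```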
In this paper we undertake a quantitative analysis of the dynamic process by which ice underneath a dry porous debris layer melts. We show that the incorporation of debris-layer airflow into a theoretical model of glacial melting can capture the empirically observed features of the so-called Østrem curve (a plot of the melt rate as a function of debris depth). Specifically, we show that the turning point in the Østrem curve can be caused by two distinct mechanisms: the increase in the proportion of ice that is debris-covered and/or a reduction in the evaporative heat flux as the debris layer thickens. This second effect causes an increased melt rate because the reduction in (latent) energy used for evaporation increases the amount of energy available for melting. Our model provides an explicit prediction for the melt rate and the temperature distribution within the debris layer, and provides insight into the relative importance of the two effects responsible for the maximum in the Østrem curve. We use the data of Nicholson and Benn (2006) to show that our model is consistent with existing empirical measurements.
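To make the two competing mechanisms concrete, the toy calculation below combines a debris-covered fraction that grows with mean debris thickness with a sub-debris melt rate limited by heat conduction through the debris, and recovers a turning point in the resulting melt-rate curve. This is not the model developed in the paper; every functional form and parameter value is an assumption chosen for illustration.

```python
# Toy Østrem-type curve: area-averaged melt versus mean debris thickness.
# Functional forms and parameter values are illustrative assumptions.
import numpy as np

RHO_ICE = 900.0       # ice density, kg m^-3
L_FUSION = 3.34e5     # latent heat of fusion, J kg^-1
K_DEBRIS = 1.0        # debris thermal conductivity, W m^-1 K^-1
T_SURFACE = 8.0       # debris surface temperature, deg C (held fixed)
MELT_CLEAN = 4.0e-7   # clean-ice melt rate, m s^-1 (~3.5 cm per day)
MELT_CAP = 3.0 * MELT_CLEAN  # crude cap standing in for thin-debris melt enhancement

def melt_rate(h: np.ndarray) -> np.ndarray:
    """Area-averaged melt rate (m s^-1) for mean debris thickness h (m)."""
    f_covered = 1.0 - np.exp(-h / 0.01)                       # debris-covered fraction
    conduction = K_DEBRIS * T_SURFACE / np.maximum(h, 1e-6)   # heat flux through debris, W m^-2
    melt_debris = np.minimum(conduction / (RHO_ICE * L_FUSION), MELT_CAP)
    return (1.0 - f_covered) * MELT_CLEAN + f_covered * melt_debris

h = np.linspace(0.001, 0.5, 500)
m = melt_rate(h)
print(f"melt peaks at a debris thickness of about {h[np.argmax(m)]:.3f} m")
```

In this toy setting the melt rate rises above the clean-ice value for very thin debris and then declines as conduction through a thickening layer becomes the limiting factor, qualitatively reproducing the turning point discussed above.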
So-called annual banding has been identified in a number of speleothems in which the number of bands approximates the time interval between successive U-series dates. The apparent annual resolution of speleothem records, however, remains largely untested. Here we statistically compare variations in band thickness from a late Holocene stalagmite in Carlsbad Cavern, southern New Mexico, USA, with three independent tree-ring chronologies from the same region. We found no correspondence. Although there may be various explanations for the discordance, this limited exercise suggests that banded stalagmites should be held to the same rigorous standards in chronology building and climatic inference as annually resolved tree rings, corals, and ice cores.
Shared decision making has been advocated as a means to improve patient-orientation and quality of health care. There is a lack of knowledge on clinical decision making and its relation to outcome in the routine treatment of people with severe mental illness. This study examined preferred and experienced clinical decision making from the perspectives of patients and staff, and how these affect treatment outcome.
Methods.
“Clinical Decision Making and Outcome in Routine Care for People with Severe Mental Illness” (CEDAR; ISRCTN75841675) is a naturalistic prospective observational study with bimonthly assessments during a 12-month observation period. Between November 2009 and December 2010, adults with severe mental illness were consecutively recruited from caseloads of community mental health services at the six study sites (Ulm, Germany; London, UK; Naples, Italy; Debrecen, Hungary; Aalborg, Denmark; and Zurich, Switzerland). Clinical decision making was assessed using two instruments, both of which have parallel patient and staff versions: (a) the Clinical Decision Making Style Scale (CDMS) measured preferences for decision making at baseline; and (b) the Clinical Decision Making Involvement and Satisfaction Scale (CDIS) measured involvement and satisfaction with a specific decision at all time points. The primary outcome was patient-rated unmet needs measured with the Camberwell Assessment of Need Short Appraisal Schedule (CANSAS). Mixed-effects multinomial regression was used to examine differences and course over time in involvement in and satisfaction with actual decision making. The effect of clinical decision making on the primary outcome was examined using hierarchical linear modelling, controlling for covariates (study centre, patient age, duration of illness, and diagnosis). Analyses also controlled for the nesting of patients within staff.
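As an illustration of what hierarchical modelling with patients nested within staff can look like in code, the sketch below fits a mixed-effects model of unmet needs over time with a random intercept and time slope for staff. It is not the CEDAR analysis script: the data are fabricated, the column names are hypothetical, and the covariates listed above are omitted for brevity.

```python
# Hedged sketch of a hierarchical (mixed-effects) model for repeated unmet-needs
# ratings, with patients nested within staff. Data and column names are fabricated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_staff, patients_per_staff, waves = 20, 5, 7
rows = []
for s in range(n_staff):
    pref = rng.choice(["active", "shared", "passive"])    # staff decision-making preference
    for p in range(patients_per_staff):
        baseline = rng.normal(5.0, 2.0)                   # baseline unmet needs
        for wave in range(waves):
            months = 2 * wave                             # bimonthly assessments
            slope = {"active": -0.20, "shared": -0.15, "passive": -0.05}[pref]
            rows.append({
                "staff_id": s,
                "staff_cdms_pref": pref,
                "months": months,
                "cansas_unmet": baseline + slope * months + rng.normal(0.0, 1.0),
            })
df = pd.DataFrame(rows)

# Unmet needs over time by staff preference, with staff-level random effects
# (random intercept and random slope for time) to account for nesting.
model = smf.mixedlm(
    "cansas_unmet ~ months * C(staff_cdms_pref)",
    data=df,
    groups=df["staff_id"],
    re_formula="~months",
)
print(model.fit().summary())
```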
Results.
Of 708 individuals approached, 588 adults with severe mental illness (52% female, mean age = 41.7) gave informed consent. Paired staff participants (N = 213) were 61.8% female and 46.0 years old on average. Shared decision making was preferred by patients (χ2 = 135.08; p < 0.001) and staff (χ2 = 368.17; p < 0.001). Decision making style of staff significantly affected unmet needs over time, with unmet needs decreasing more in patients whose clinicians preferred active to passive (−0.406 unmet needs per two months, p = 0.007) or shared (−0.303 unmet needs per two months, p = 0.015) decision making.
Conclusions.
Decision making style of staff is a prime candidate for the development of targeted interventions. If proven effective in future trials, this would pave the way for a shift from shared to active involvement of patients, including changes to professional socialization through training in principles of active decision making.