A clinical tool to estimate the risk of treatment-resistant schizophrenia (TRS) in people with first-episode psychosis (FEP) would inform early detection of TRS and help overcome delays of up to 5 years in starting appropriate medication for TRS.
Aims
To develop and evaluate a model that could predict the risk of TRS in routine clinical practice.
Method
We used data from two UK-based FEP cohorts (GAP and AESOP-10) to develop and internally validate a prognostic model that supports identification of patients at high risk of TRS soon after FEP diagnosis. Using sociodemographic and clinical predictors, a model for predicting risk of TRS was developed based on penalised logistic regression, with missing data handled using multiple imputation. Internal validation was undertaken via bootstrapping, obtaining optimism-adjusted estimates of the model's performance. Interviews and focus groups with clinicians were conducted to establish clinically relevant risk thresholds and understand the acceptability and perceived utility of the model.
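As an illustration of the internal validation described above, the sketch below shows a Harrell-style bootstrap optimism correction for the C-statistic of a penalised (L2) logistic model. This is a minimal sketch, not the authors' code: the penalty strength, number of bootstrap replicates and scikit-learn API choices are assumptions, and the multiple imputation step is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def optimism_adjusted_auc(X, y, n_boot=200, C=1.0):
    """Bootstrap optimism correction for the C-statistic (AUC)."""
    model = LogisticRegression(penalty="l2", C=C, max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        Xb, yb = resample(X, y)                  # bootstrap sample
        if len(np.unique(yb)) < 2:               # skip degenerate resamples
            continue
        mb = LogisticRegression(penalty="l2", C=C, max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, mb.predict_proba(Xb)[:, 1])  # on bootstrap data
        auc_test = roc_auc_score(y, mb.predict_proba(X)[:, 1])    # on original data
        optimism.append(auc_boot - auc_test)
    return apparent - np.mean(optimism)          # optimism-adjusted C-statistic
```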
Results
The prediction model included seven factors, predominantly ones already assessed in routine clinical practice for patients with FEP. The model predicted treatment resistance among the 1081 patients with reasonable accuracy: the C-statistic was 0.727 (95% CI 0.723–0.732) before shrinkage and 0.687 after adjustment for optimism. Calibration was good after adjustment for optimism (expected/observed ratio: 0.999; calibration-in-the-large: 0.000584).
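The calibration summaries quoted above can be computed as in the sketch below, under the usual definitions (expected/observed ratio as mean predicted risk over observed event rate; calibration-in-the-large as the intercept of a logistic recalibration with the linear predictor as a fixed offset). This is an illustration, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def calibration_metrics(y, p):
    """y: binary outcomes (0/1); p: predicted risks strictly in (0, 1)."""
    eo = p.mean() / y.mean()              # expected/observed ratio
    logit_p = np.log(p / (1 - p))         # model's linear predictor
    # Calibration-in-the-large: intercept with logit(p) as a fixed offset
    citl = sm.GLM(y, np.ones((len(p), 1)),
                  family=sm.families.Binomial(),
                  offset=logit_p).fit().params[0]
    return eo, citl
```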
Conclusions
We developed and internally validated a prediction model with reasonably good predictive metrics. Clinicians, patients and carers were involved in the development process. External validation of the tool is needed followed by co-design methodology to support implementation in early intervention services.
To compare the agreement and cost of two recall methods for estimating children’s minimum dietary diversity (MDD).
Design:
We assessed each child's dietary intake on two consecutive days: an observation on day one, followed by two recall methods (list-based recall and multiple-pass recall) administered in random order by different enumerators at two different times on day two. We compared the estimated MDD prevalence using survey-weighted linear probability models following a two one-sided tests (TOST) equivalence testing approach. We also estimated the cost-effectiveness of the two methods.
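The equivalence logic here is the two one-sided tests (TOST) procedure. The sketch below illustrates it for a difference in MDD prevalence using a simple unweighted z-approximation; the equivalence margin and the example prevalences are hypothetical, and the actual analysis used survey-weighted linear probability models.

```python
import numpy as np
from scipy.stats import norm

def tost_two_proportions(p1, p2, n1, n2, margin=0.10, alpha=0.05):
    """TOST for equivalence of two prevalences within +/- margin."""
    diff = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    p_lower = 1 - norm.cdf((diff + margin) / se)   # H0: diff <= -margin
    p_upper = norm.cdf((diff - margin) / se)       # H0: diff >= +margin
    p_tost = max(p_lower, p_upper)                 # equivalent if both rejected
    return p_tost, p_tost < alpha

# e.g. recall-based vs observed MDD prevalence in 636 children (hypothetical values)
print(tost_two_proportions(0.52, 0.50, 636, 636))
```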
Setting:
Cambodia (Kampong Thom, Siem Reap, Battambang, and Pursat provinces) and Zambia (Chipata, Katete, Lundazi, Nyimba, and Petauke districts).
Participants:
Children aged 6–23 months: 636 in Cambodia and 608 in Zambia.
Results:
MDD estimates from both recall methods were equivalent to the observation in Cambodia but not in Zambia. Both methods were equivalent to the observation in capturing most food groups. Both methods were highly sensitive, although the multiple-pass method accurately classified a higher proportion of children meeting MDD than the list-based method in both countries. Both methods were highly specific in Cambodia but only moderately specific in Zambia. Cost-effectiveness was better for the list-based recall method in both countries.
Conclusion:
Compared with direct observation, the two recall methods estimated MDD and most other infant and young child feeding indicators equivalently in Cambodia but not in Zambia. The list-based method produced slightly more accurate estimates of MDD at the population level, took less time to administer and was less costly to implement.
To meet the UK's greenhouse gas (GHG) emissions targets, the Climate Change Committee (CCC) recommended reducing current meat and dairy intake by 20% by 2030. In this study, we modelled the impact of potential dietary changes on GHG emissions and water use under scenarios selected to reflect food purchasing trends and meat and dairy reduction policies. We show that implementing fiscal measures and facilitating innovation in the production of meat alternatives would accelerate existing positive trends, help the UK reach the CCC 2030 target of a 20% meat and dairy reduction, and increase fruit and vegetable intake.
Technical Summary
We used 2001–2019 data from the Family Food module of the Living Costs and Food Survey (LCF), an annual UK survey of about 5,000 representative households recording quantities of all food and drink purchases, to model four 2030 dietary scenarios: business as usual (BAU); two fiscal policy scenarios (‘fiscal 10%’ and ‘fiscal 20%’), combining either a 10% meat and dairy tax with a 10% fruit and vegetable subsidy, or a 20% tax with a 20% subsidy on the same foods; and an ‘innovation’ scenario substituting traditionally produced meat and dairy with plant-based analogues and animal proteins produced in laboratories. Compared with 2019 levels, we forecasted reductions of 5–30% for meat and 8–32% for dairy across scenarios. Meat reductions reached up to 21.5% (fiscal 20%) and 30.4% (innovation). For all scenarios we forecasted an increase in fruit and vegetable intake of 3–13.5%, with the fiscal 20% scenario showing the highest increase (13.5%). GHG emission and water use reductions were highest for the innovation scenario (−19.8%, −16.2%), followed by the fiscal 20% (−15.8%, −9.2%), fiscal 10% (−12.1%, −5.9%) and BAU (−8.3%, −2.6%) scenarios. Low-income households showed patterns of change similar to average households, but both their past and predicted purchases of meat, fruit and vegetables, and their environmental footprints, were lower.
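As a toy illustration of how a fiscal scenario translates price changes into purchase changes, the snippet below applies first-order own-price elasticities to the ‘fiscal 20%’ tax and subsidy. The elasticity values are invented for illustration only; the study forecasted demand from LCF purchase trends, not from these numbers.

```python
# Hypothetical own-price elasticities (illustration only)
elasticity = {"meat": -0.7, "dairy": -0.6, "fruit_veg": -0.5}
price_change = {"meat": 0.20, "dairy": 0.20, "fruit_veg": -0.20}  # 20% tax / subsidy

for food, e in elasticity.items():
    dq = e * price_change[food]   # first-order % change in quantity purchased
    print(f"{food}: {dq:+.1%} change in purchases")
```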
Social Media Summary
Meat and dairy-reduction policies would help meet net zero targets and improve population health in the UK.
To establish the relationship of endoscope temperature and luminosity with light source type, endoscope age, endoscope diameter, endoscope angle and operative distance in transcanal endoscopic ear surgery.
Methods
Transcanal endoscopic ear surgery was simulated in an operating theatre using 7 mm plastic suction tubing coated in insulating tape. An ATP ET-959 thermometer was used to record temperatures, and a Trotec BF06 lux meter was used to measure luminosity. Luminosity and temperature recordings were taken at 0 mm and 5 mm from the endoscope tip.
Results
Thermal energy transfer from operating endoscopes is greatest when the light intensity is high, when a light-emitting diode light source is used and when the endoscope is touching the surface. Additionally, larger-diameter, angled and new endoscopes generated greater heat.
Conclusion
It is recommended that operative light intensity be maintained at the lowest possible level, and that the surgeon avoid contact between patient tissues and the endoscope tip.
The coronavirus disease 2019 pandemic has led to a need for alternative teaching methods in facial plastics. This systematic review aimed to identify facial plastics simulation models, and assess their validity and efficacy as training tools.
Methods
Literature searches were performed. The Beckman scale was used to assess validity, and the McGaghie Modified Translational Outcomes of Simulation-Based Mastery Learning score was used to evaluate effectiveness.
Results
Overall, 29 studies were selected. These simulated local skin flaps (n = 9), microtia frameworks (n = 5), pinnaplasty (n = 1), facial nerve anastomosis (n = 1), oculoplastic procedures (n = 5), and endoscopic septoplasty and septorhinoplasty (n = 10). Of these models, 14 were deemed high-fidelity, 13 low-fidelity and 2 mixed-fidelity. No common outcome measures were reported across the studies.
Conclusion
Simulators in facial plastic surgical training are important. These models may have some training benefits, but most could benefit from further assessment of validity.
To identify patient and provider characteristics associated with high-volume antibiotic prescribing for children in Tennessee, a state with high antibiotic utilization.
Design:
Cross-sectional, retrospective analysis of pediatric (aged <20 years) outpatient antibiotic prescriptions in Tennessee using the 2016 IQVIA Xponent (formerly QuintilesIMS) database.
Methods:
Patient and provider characteristics, including county of prescription fill, rural versus urban county classification, patient age group, provider type (nurse practitioner, physician assistant, physician, or dentist), physician specialty, and physician years of practice were analyzed.
Results:
Tennessee providers wrote 1,940,011 pediatric outpatient antibiotic prescriptions, yielding an antibiotic prescribing rate of 1,165 per 1,000 population, 50% higher than the national pediatric rate. Mean antibiotic prescribing rates varied greatly by county (range, 39–2,482 prescriptions per 1,000 population). Physicians wrote the greatest number of antibiotic prescriptions (1,043,030 prescriptions, 54%), of which 56% were written by general pediatricians. Pediatricians who graduated from medical school before 2000 were significantly more likely than those graduating after 2000 to be high antibiotic prescribers. Overall, 360 providers (1.7% of the 21,798 providers in this dataset) were responsible for nearly 25% of both overall and broad-spectrum antibiotic prescriptions; 20% of these providers practiced in a single county.
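The headline concentration figure (1.7% of providers accounting for nearly a quarter of prescriptions) amounts to a simple cumulative-share calculation, sketched below. The simulated per-provider counts are hypothetical placeholders for the IQVIA Xponent data.

```python
import numpy as np

def top_provider_share(rx_counts, frac=0.017):
    """Share of all prescriptions written by the top `frac` of providers."""
    rx_sorted = np.sort(np.asarray(rx_counts))[::-1]
    k = max(1, int(round(frac * len(rx_sorted))))
    return rx_sorted[:k].sum() / rx_sorted.sum()

# Hypothetical skewed distribution of per-provider prescription counts
rng = np.random.default_rng(0)
rx = rng.lognormal(mean=3.0, sigma=1.5, size=21_798)
print(f"Top 1.7% of providers: {top_provider_share(rx):.0%} of prescriptions")
```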
Conclusions:
Fewer than 2% of providers account for 25% of pediatric antibiotic prescriptions. High antibiotic prescribing for children in Tennessee is associated with specific patient and provider characteristics that can be used to design stewardship interventions targeted to the highest prescribing providers in specific counties and specialties.
Participants:
Prescribers who wrote at least 1 antibiotic prescription filled at a retail pharmacy in Tennessee in 2016.
Methods:
Multivariable logistic regression, including prescriber gender, birth decade, specialty, and practice location, as well as patient gender and age group, was used to determine associations with high prescribing.
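A sketch of the kind of multivariable logistic regression described, using statsmodels. The file name and column names are hypothetical placeholders; the real IQVIA Xponent variables differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical prescriber-level file (one row per prescriber)
df = pd.read_csv("prescribers_2016.csv")
model = smf.logit(
    "high_prescriber ~ C(gender) + C(birth_decade) + C(specialty) + C(location)",
    data=df,
).fit()
odds_ratios = np.exp(model.params)   # exponentiate coefficients to odds ratios
print(odds_ratios)
```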
Results:
In 2016, 7,949,816 outpatient oral antibiotic prescriptions were filled in Tennessee: 1,195 prescriptions per 1,000 total population. Moreover, 50% of Tennessee’s outpatient oral antibiotic prescriptions were written by 9.3% of prescribers. Specific specialties and prescriber types were associated with high prescribing: urology (odds ratio [OR], 3.249; 95% confidence interval [CI], 3.208–3.289), nurse practitioners (OR, 2.675; 95% CI, 2.658–2.692), dermatologists (OR, 2.396; 95% CI, 2.365–2.428), physician assistants (OR, 2.382; 95% CI, 2.364–2.400), and pediatric physicians (OR, 2.340; 95% CI, 2.320–2.361). Prescribers born in the 1960s were most likely to be high prescribers (OR, 2.574; 95% CI, 2.532–2.618). Prescribers in rural areas were more likely than prescribers in all other practice locations to be high prescribers. High prescribers were more likely to prescribe broader-spectrum antibiotics (P < .001).
Conclusions:
Targeting high prescribers, independent of specialty, degree, practice location, age, or gender, may be the best strategy for implementing cost-conscious, effective outpatient antimicrobial stewardship interventions. More information about high prescribers, such as patient volumes, clinical scope, and specific barriers to intervention, is needed.
To compare the feasibility of ovine and synthetic temporal bones for simulating endoscopic ear surgery against the ‘gold standard’ of human cadaveric tissue.
Methods
A total of 10 candidates (5 trainees and 5 experts) performed endoscopic tympanoplasty on 3 models: Pettigrew temporal bones, ovine temporal bones and cadaveric temporal bones. Candidates completed a questionnaire assessing the face validity, global content validity and task-specific content validity of each model.
Results
Regarding ovine temporal bone validity, the median values were 4 (interquartile range = 4–4) for face validity, 4 (interquartile range = 4–4) for global content validity and 4 (interquartile range = 4–4) for task-specific content validity. For the Pettigrew temporal bone, the median values were 3.5 (interquartile range = 2.25–4) for face validity, 3 (interquartile range = 2.75–3) for global content validity and 3 (interquartile range = 2.5–3) for task-specific content validity. The ovine temporal bone was considered significantly superior to the Pettigrew temporal bone for the majority of validity categories assessed.
Conclusion
Tympanoplasty is feasible in both the ovine temporal bone and the Pettigrew temporal bone. However, the ovine model was a significantly more realistic simulation tool.
The completion of a laser safety course remains a core surgical curriculum requirement for otolaryngologists training in the UK. This project aimed to develop a comprehensive laser safety course utilising both technical and non-technical skills simulation.
Methods
Otolaryngology trainees and consultants from the West of Scotland Deanery attended a 1-day course comprising lectures, two high-fidelity simulation scenarios and a technical simulation of safe laser use in practice.
Results
The course, and in particular the use of simulation training, received excellent feedback from otolaryngology trainees and consultants who participated. Both simulation scenarios were validated for future use in laser simulation.
Conclusion
The course has been recognised as a laser safety course sufficient for the otolaryngology Certificate of Completion of Training. To the authors’ knowledge, this article represents the first description of using in situ non-technical skills simulation training for teaching laser use in otolaryngology.
There is evidence that some countries negotiate trade agreements during economic downturns. Why would a leader do this? We argue that political leaders can gain from such agreements because of the signals they send to their public. The public are less likely to blame leaders for adverse economic conditions when they have implemented sound economic policies, such as signing agreements designed to liberalize trade and prevent a slide into protectionism. In hard economic times, leaders – especially those in democratic environments – may find that trade agreements are a useful way to reassure the public. Since majorities in many countries around the world view trade favorably, leaders may see agreements that prevent them from adopting protectionism as a way to maintain support. We evaluate this argument by analyzing preferential trade agreements (PTAs) formed since 1962. We find that, on average, democratic countries are more likely to form PTAs during hard economic times. We also find that democratic leaders who sign PTAs during downturns enjoy a longer tenure than their counterparts who do not sign such agreements.
Dietary pattern analysis is an emerging area of research. Identifying distinct patterns within a large dietary survey can give a more accurate representation of what people are eating. Furthermore, it allows researchers to analyse relationships between non-communicable diseases (NCD) and complete diets rather than individual food items or nutrients. However, few such studies have been conducted in developing countries, including India, where the population carries a high burden of diabetes and CVD. We undertook a systematic review of published and grey literature exploring dietary patterns and relationships with diet-related NCD in India. We identified eight studies, including eleven separate models of dietary patterns. Most dietary patterns were vegetarian with a predominance of fruit, vegetables and pulses, as well as cereals; dietary patterns based on high-fat, high-sugar foods and more meat were also identified. There was large variability between regions in dietary patterns, and there was some evidence of change in diets over time, although no evidence of different diets by sex or age was found. Consumers of high-fat dietary patterns were more likely to have greater BMI, and a dietary pattern high in sweets and snacks was associated with greater risk of diabetes compared with a traditional diet high in rice and pulses, but other relationships with NCD risk factors were less clear. This review shows that dietary pattern analyses can be highly valuable in assessing variability in national diets and diet–disease relationships. However, to date, most studies in India are limited by data and methodological shortcomings.
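The reviewed studies derived patterns with data-driven methods such as factor and cluster analysis. As a generic illustration of the idea (not any particular study's method), principal component analysis on a standardised food-group intake matrix looks like this; the data here are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.poisson(lam=3.0, size=(500, 12))   # hypothetical: 500 respondents x 12 food groups
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)                  # each respondent's score on each pattern
print(pca.components_)                     # food groups with large weights define a pattern
```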
Previous research has shown that those employed in certain occupations, such as doctors and farmers, have an elevated risk of suicide, yet little research has sought to synthesise these findings across working-age populations.
Aims
To summarise published research in this area through systematic review and meta-analysis.
Method
Random effects meta-analyses were used to calculate a pooled risk of suicide across occupational skill-level groups.
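A minimal sketch of DerSimonian–Laird random-effects pooling on the log rate-ratio scale, the standard approach for this kind of meta-analysis. The study-level inputs below are hypothetical, and the review's actual estimation details may differ.

```python
import numpy as np

def pool_random_effects(log_rr, se):
    """DerSimonian-Laird random-effects pooling of log rate ratios."""
    w = 1.0 / se**2
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)   # between-study variance
    w_re = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se_p = np.sqrt(1.0 / np.sum(w_re))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * se_p), np.exp(pooled + 1.96 * se_p))

# Hypothetical study-level rate ratios and standard errors
rr, lo, hi = pool_random_effects(np.log([1.9, 1.6, 2.1]),
                                 np.array([0.20, 0.15, 0.30]))
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```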
Results
Thirty-four studies were included in the meta-analysis. Elementary professions (e.g. labourers and cleaners) were at elevated risk compared with the working-age population (rate ratio (RR) = 1.84, 95% CI 1.46–2.33), followed by machine operators and deck crew (RR = 1.78, 95% CI 1.22–2.60) and agricultural workers (RR = 1.64, 95% CI 1.19–2.28). Results suggested a stepwise gradient in risk, with the lowest skilled occupations being at greater risk of suicide than the highest skill-level group.
Conclusions
This is the first comprehensive meta-analytical review of suicide and occupation. There is a need for future studies to investigate explanations for the observed skill-level differences, particularly in people employed in lower skill-level groups.
There are ongoing questions about whether unemployment has causal effects on suicide as this relationship may be confounded by past experiences of mental illness. The present review quantified the effects of adjustment for mental health on the relationship between unemployment and suicide. Findings were used to develop and interpret likely causal models of unemployment, mental health and suicide.
Method
A random-effects meta-analysis was conducted on five population-based cohort studies where temporal relationships could be clearly ascertained.
Results
Results of the meta-analysis showed that unemployment was associated with a significantly higher relative risk (RR) of suicide before adjustment for prior mental health [RR 1.58, 95% confidence interval (CI) 1.33–1.83]. After controlling for mental health, the RR of suicide following unemployment was reduced by approximately 37% (RR 1.15, 95% CI 1.00–1.30). Greater exposure to unemployment was associated with higher RR of suicide, and the pooled RR was higher for males than for females.
Conclusions
Plausible interpretations of likely pathways between unemployment and suicide are complex and difficult to validate, given the poor delineation of associations over time and of the analytic rationale for confounder adjustment evident in the reviewed literature. Future research would be strengthened by explicit articulation of temporal relationships and causal assumptions. This would be complemented by longitudinal study designs suitable for assessing potential confounders, mediators and effect modifiers influencing the relationship between unemployment and suicide.
A year-long intervention trial was conducted to characterise the responses of multiple biomarkers of Se status in healthy American adults to supplemental selenomethionine (SeMet) and to identify factors affecting those responses. A total of 261 men and women were randomised to four doses of Se (0, 50, 100 or 200 μg/d as l-SeMet) for 12 months. Responses of several biomarkers of Se status (plasma Se, serum selenoprotein P (SEPP1), plasma glutathione peroxidase activity (GPX3), buccal cell Se, urinary Se) were determined relative to genotype of four selenoproteins (GPX1, GPX3, SEPP1, selenoprotein 15), dietary Se intake and parameters of single-carbon metabolism. Results showed that supplemental SeMet did not affect GPX3 activity or SEPP1 concentration, but produced significant, dose-dependent increases in the Se contents of plasma, urine and buccal cells, each of which plateaued by 9–12 months and was linearly related to effective Se dose (μg/d per kg^0·75). The increase in urinary Se excretion was greater for women than men, and for individuals of the GPX1 679 T/T genotype than for those of the GPX1 679 C/C genotype. It is concluded that the most responsive Se-biomarkers in this non-deficient cohort were those related to body Se pools: plasma, buccal cell and urinary Se concentrations. Changes in plasma Se resulted from increases in its non-specific component and were affected by both sex and GPX1 genotype. In a cohort of relatively high Se status, the Se intake (as SeMet) required to support plasma Se concentration at a target level (Se_pl-target) is: .
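The "effective Se dose" normalisation used above can be made concrete with a toy calculation; the body weight and dose below are hypothetical inputs, not study values.

```python
# Effective dose scales supplemental Se by metabolic body weight (kg^0.75)
dose_ug_per_day = 200.0        # hypothetical supplemental SeMet dose
body_weight_kg = 75.0          # hypothetical subject
effective_dose = dose_ug_per_day / body_weight_kg**0.75
print(f"effective dose = {effective_dose:.1f} ug/d per kg^0.75")
```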
International standards require the use of a weighted least-squares approach to onboard Receiver Autonomous Integrity Monitoring (RAIM). However, the protection levels developed to determine if the conditions exist to perform a measurement check (i.e. failure detection) are not specified. Various methods for the computation of protection levels exist. However, they are essentially approximations to the complex problem of computing the worst-case missed detection probability under a weighted system. In this paper, a novel approach to determine this probability at the worst-case measurement bias is presented. The missed detection probabilities are then iteratively solved against the integrity risk requirement in order to derive an optimal protection level for the operation. It is shown that the new method improves availability by more than 30% compared to the baseline weighted RAIM algorithm.
A version of this paper was first presented at the US Institute of Navigation (ION) GNSS 2009 Conference in Savannah, Georgia.
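For context, the sketch below computes the baseline slope-based protection level for weighted RAIM that the paper improves on. It is a textbook simplification under stated assumptions (east/north are the first two state columns, the test statistic is the weighted sum of squared residuals, and the nominal noise term is omitted), not the authors' worst-case algorithm.

```python
import numpy as np
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def weighted_raim_hpl(G, W, p_fa=1e-5, p_md=1e-3):
    """Textbook slope-based horizontal protection level for weighted RAIM.

    G : (n, 4) linearized geometry matrix (east, north, up, clock columns)
    W : (n, n) measurement weight matrix (inverse measurement covariance)
    """
    n, m = G.shape
    dof = n - m
    T = chi2.ppf(1 - p_fa, dof)                  # detection threshold on r' W r
    S = np.linalg.solve(G.T @ W @ G, G.T @ W)    # weighted least-squares estimator
    M = W @ (np.eye(n) - G @ S)                  # maps bias^2 to test noncentrality
    # Horizontal error per unit sqrt(noncentrality) for a bias on each satellite
    slope = np.sqrt(S[0] ** 2 + S[1] ** 2) / np.sqrt(np.diag(M))
    # Worst-case noncentrality still missed with probability p_md
    lam = brentq(lambda l: ncx2.cdf(T, dof, l) - p_md, 1e-6, 1e4)
    return slope.max() * np.sqrt(lam)

# Example: 8 satellites with random unit line-of-sight vectors, 5 m sigma
rng = np.random.default_rng(1)
u = rng.normal(size=(8, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
G = np.hstack([u, np.ones((8, 1))])
W = np.eye(8) / 5.0**2
print(f"HPL ~ {weighted_raim_hpl(G, W):.1f} m")
```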
What type of implement was used to cut and move earth in prehistory? In the Mississippian culture at least, the key tool was the stone hoe – formed from a chert blade strapped to a handle. These blades were hoarded and depicted in use, leaving little doubt that they were for digging, in the service of agriculture and extracting earth for building. Drawing on a series of controlled experiments, the authors deduce the capabilities and biographies of the stone hoes, evoking the admirable efforts of the people who constructed the massive mounds of Cahokia.