This paper describes the implementation of curricula for Liberia's first-ever psychiatry training programme in 2019 and the actions of the only two Liberian psychiatrists in the country at the time in developing and executing a first-year postgraduate psychiatry training programme (i.e. residency) with support from international collaborators. It explores cultural differences in training models among collaborators and strategies to best synergise them. It highlights the assessment of trainees’ (residents’) basic knowledge on entry into the programme and how it guided immediate and short-term priority teaching objectives, including integrated training in neuroscience and neurology. The paper describes the strengths and challenges of this approach as well as opportunities for continued growth.
Recent evidence from case reports suggests that a ketogenic diet may be effective for bipolar disorder. However, no clinical trials have been conducted to date.
Aims
To assess the recruitment and feasibility of a ketogenic diet intervention in bipolar disorder.
Method
Euthymic individuals with bipolar disorder were recruited to a 6–8 week trial of a modified ketogenic diet, and a range of clinical, economic and functional outcome measures were assessed. Study registration number: ISRCTN61613198.
Results
Of 27 recruited participants, 26 commenced and 20 completed the modified ketogenic diet for 6–8 weeks. The outcomes data-set was 95% complete for daily ketone measures, daily glucose measures and daily ecological momentary assessment of symptoms during the intervention period. Mean daily blood ketone readings were 1.3 mmol/L (s.d. = 0.77, median = 1.1) during the intervention period, and 91% of all readings indicated ketosis, suggesting a high degree of adherence to the diet. Over 91% of daily blood glucose readings were within the normal range, with 9% indicating mild hypoglycaemia. Eleven minor adverse events were recorded, including fatigue, constipation, drowsiness and hunger. One serious adverse event was reported (euglycaemic ketoacidosis in a participant taking SGLT2-inhibitor medication).
Conclusions
The recruitment and retention of euthymic individuals with bipolar disorder to a 6–8 week ketogenic diet intervention was feasible, with high completion rates for outcome measures. The majority of participants reached and maintained ketosis, and adverse events were generally mild and modifiable. A future randomised controlled trial is now warranted.
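The adherence figures above (mean, s.d., median, and the share of readings indicating ketosis) reduce to a simple daily-reading summary. The sketch below shows one way to compute them; the readings and the 0.5 mmol/L nutritional-ketosis threshold are illustrative assumptions, not the trial's data or protocol.

```python
# Minimal sketch: summarising daily blood ketone readings as the abstract does.
# The readings list and the 0.5 mmol/L ketosis threshold are hypothetical.
from statistics import mean, median, stdev


def summarise_ketones(readings, ketosis_threshold=0.5):
    """Return mean, s.d., median, and % of readings at/above the threshold."""
    in_ketosis = sum(1 for r in readings if r >= ketosis_threshold)
    return {
        "mean": mean(readings),
        "sd": stdev(readings),
        "median": median(readings),
        "pct_in_ketosis": 100 * in_ketosis / len(readings),
    }


readings = [0.4, 1.1, 1.3, 0.9, 2.2, 1.0]  # hypothetical mmol/L values
s = summarise_ketones(readings)
print(f"mean={s['mean']:.2f}, median={s['median']:.2f}, "
      f"{s['pct_in_ketosis']:.0f}% of readings in ketosis")
```

In practice such a summary would be computed per participant and then pooled, which is consistent with the per-day completeness percentages the abstract reports.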
We analyzed the efficacy of a centralized surveillance infection prevention (CSIP) program in a healthcare system on healthcare-associated infection (HAI) rates amid the coronavirus disease 2019 (COVID-19) pandemic. HAI rates were variable in CSIP and non-CSIP facilities. Central-line–associated bloodstream infection (CLABSI), C. difficile infection (CDI), and surgical-site infection (SSI) rates were negatively correlated with COVID-19 intensity in CSIP facilities.
To develop, implement, and evaluate the effectiveness of a unique centralized surveillance infection prevention (CSIP) program.
Design:
Observational quality improvement project.
Setting:
An integrated academic healthcare system.
Intervention:
The CSIP program comprises senior infection preventionists who are responsible for healthcare-associated infection (HAI) surveillance and reporting, allowing local infection preventionists (LIPs) to devote a greater portion of their time to non-surveillance patient safety activities. Four CSIP team members accrued HAI surveillance responsibilities at 8 facilities.
Methods:
We evaluated the effectiveness of the CSIP program using 4 measures: recovery of LIP time, efficiency of surveillance activities by LIPs and CSIP staff, surveys characterizing LIP perception of their effectiveness in HAI reduction, and nursing leaders’ perception of LIP effectiveness.
Results:
The amount of time spent by LIP teams on HAI surveillance was highly variable, while CSIP time commitment and efficiency were steady. Post-CSIP implementation, 76.9% of LIPs agreed that they spend adequate time on inpatient units, compared to 15.4% pre-CSIP; LIPs also reported having more time to allot to non-surveillance activities. Nursing leaders reported greater satisfaction with LIP involvement in HAI reduction practices.
Conclusion:
CSIP programs are a little-reported strategy for easing the burden on LIPs by reallocating HAI surveillance. The analyses presented here will aid health systems in anticipating the benefits of CSIP programs.
Flight crews’ capacity to conduct take-off and landing in near-zero-visibility conditions has been partially addressed by advanced surveillance and cockpit display technology. This capability is yet to be realised within the context of manoeuvring aircraft within airport terminal areas. In this paper, the performance and workload benefits of user-centred visual and haptic taxi navigational cues, presented via a head-up display (HUD) and active sidestick, respectively, were evaluated in simulated taxiing trials by 12 professional pilots. In addition, the trials sought to examine pilot acceptance of sidestick nose wheel steering. The HUD navigational cues demonstrated a significant task-specific benefit by reducing centreline deviation during turns and the frequency of major taxiway deviations. In parallel, the visual cues reduced self-reported workload. Pilots’ appraisal of nose wheel steering by sidestick was positive, and active sidestick cues increased confidence in the multimodal guidance construct. The study presents the first examination of how a multimodal display, combining visual and haptic cues, could support the safety and efficiency with which pilots conduct a taxi navigation task in low-visibility conditions.
The effect of transport on core and peripheral body temperatures and heart rate was assessed in ten 18-month-old Coopworth ewes (Ovis aries). Manual recordings of core (rectal) temperatures were obtained, and automated logging of peripheral (external auditory canal and pinna) temperatures and heart rate was carried out on the day prior to (day 1) and during (day 2) a standardised transport procedure. Transport produced a significant increase in the rectal temperature, which declined following unloading. Peripheral measures of body temperature also exhibited changes with transport. However, both ear-canal and pinna temperatures declined during actual transport, reflecting to some extent the decline in ambient temperatures recorded externally by sensors on the ear tags of the animals. Peripheral measurement of temperature, particularly at the readily accessible ear canal, may offer potential as a technique for the long-term monitoring of thermal responses to stress. However, further research is required into the potentially confounding effects of ambient temperature and wind chill factors.
In Britain large numbers of animals are taken into captivity for treatment or care and then subsequently returned to the wild, but there are few data on the effectiveness of these rehabilitation programmes. In this study, over a period of four years 251 fox cubs that had been captive-reared were tagged and released; 90 were recovered. Survival rates were low, and road traffic accidents were found to be a major cause of mortality immediately following release. Recovery distances were lower than expected. The stress associated with captive-rearing meant that released foxes weighed less than wild-reared foxes, and they suffered further weight loss in the period immediately following release, even though an analysis of the stomach contents of animals recovered dead showed that released foxes rapidly learnt to hunt successfully.
It was concluded that captive-rearing is a problematic process for foxes, and contrary to predictions they face severe problems in adapting following release. Suggestions are made for the improvement of fox captive-rearing and release programmes, and the need for similar studies on other species is highlighted.
The release of animals from captivity frequently leads to a period of erratic movement behaviour which is thought to expose the animal to a high risk of mortality. Twenty-six foxes, which had been reared at a wildlife hospital or captive-bred, were radio-collared when nearly full-grown and released without site acclimation. Immediately after release there was an erratic phase of behaviour, during which the foxes travelled widely and movement parameters were markedly elevated. For those foxes which survived, a second phase was entered after an average of 17.2 days, during which only one small area was used and movement parameters were much reduced. In a second study, nine foxes were released following site acclimation in a pre-release pen; this process postponed but did not eliminate the phase of high movement activity.
This pattern of movement was compared with the dispersal behaviour of wild-reared foxes. It was concluded that released foxes, despite being proficient in other aspects of behaviour, were moving and behaving in a markedly abnormal manner and this resulted in a high death rate. The results are used to discuss methods of improving rehabilitation techniques.
Childhood adversities (CAs) predict heightened risks of posttraumatic stress disorder (PTSD) and major depressive episode (MDE) among people exposed to adult traumatic events. Identifying which CAs put individuals at greatest risk for these adverse posttraumatic neuropsychiatric sequelae (APNS) is important for targeting prevention interventions.
Methods
Data came from n = 999 patients aged 18–75 presenting to 29 U.S. emergency departments after a motor vehicle collision (MVC) and followed for 3 months, the amount of time traditionally used to define chronic PTSD, in the Advancing Understanding of Recovery After Trauma (AURORA) study. Six CA types were self-reported at baseline: physical abuse, sexual abuse, emotional abuse, physical neglect, emotional neglect and bullying. Both dichotomous measures of ever experiencing each CA type and numeric measures of exposure frequency were included in the analysis. Risk ratios (RRs) of these CA measures as well as complex interactions among these measures were examined as predictors of APNS 3 months post-MVC. APNS was defined as meeting self-reported criteria for either PTSD based on the PTSD Checklist for DSM-5 and/or MDE based on the PROMIS Depression Short-Form 8b. We controlled for pre-MVC lifetime histories of PTSD and MDE. We also examined mediating effects through peritraumatic symptoms assessed in the emergency department and PTSD and MDE assessed in 2-week and 8-week follow-up surveys. Analyses were carried out with robust Poisson regression models.
Results
Most participants (90.9%) reported at least rarely having experienced some CA. Ever experiencing each CA other than emotional neglect was univariably associated with 3-month APNS (RRs = 1.31–1.60). Each CA frequency was also univariably associated with 3-month APNS (RRs = 1.65–2.45). In multivariable models, joint associations of CAs with 3-month APNS were additive, with frequency of emotional abuse (RR = 2.03; 95% CI = 1.43–2.87) and bullying (RR = 1.44; 95% CI = 0.99–2.10) being the strongest predictors. Control variable analyses found that these associations were largely explained by pre-MVC histories of PTSD and MDE.
Conclusions
Although individuals who experience frequent emotional abuse and bullying in childhood have a heightened risk of experiencing APNS after an adult MVC, these associations are largely mediated by prior histories of PTSD and MDE.
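The univariable risk ratios this abstract reports come from comparing the incidence of APNS between exposed and unexposed groups. A minimal sketch of that building block, using the standard log-RR normal approximation on a 2×2 table, is below; the counts are invented for illustration and the full analysis in the study uses robust Poisson regression, not this shortcut.

```python
# Minimal sketch: risk ratio (RR) and 95% CI from a 2x2 table via the
# log-RR normal approximation. Counts are hypothetical, not study data.
import math


def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Return (RR, lower 95% CI, upper 95% CI)."""
    r1 = exposed_cases / exposed_total
    r0 = unexposed_cases / unexposed_total
    rr = r1 / r0
    # Standard error of log(RR) for cumulative-incidence data
    se = math.sqrt(1 / exposed_cases - 1 / exposed_total
                   + 1 / unexposed_cases - 1 / unexposed_total)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi


rr, lo, hi = risk_ratio(60, 200, 30, 200)  # hypothetical exposed vs. unexposed
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A regression model is preferred over this 2×2 calculation when, as here, the analysis must adjust for covariates such as pre-MVC histories of PTSD and MDE.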
Advanced malignant neoplasms of the larynx and hypopharynx pose many therapeutic challenges. Total pharyngolaryngectomy and total laryngectomy provide an opportunity to cure these tumours but are associated with significant morbidity. Reconstruction of the pharyngeal defect following total pharyngolaryngectomy demands careful consideration and remains an area of debate.
Methods
This paper describes a systematic analysis of pharyngeal reconstruction following total pharyngolaryngectomy and total laryngectomy, leveraging data collected over a 20-year period at a large tertiary referral centre.
Results
In an analysis of 155 patients, the results show that circumferential pharyngeal defects and prior radiotherapy have a significant impact on surgical complications. In addition, free tissue transfer in larger pharyngeal defects showed lower rates of post-operative anastomotic leak and stricture.
Conclusion
Pharyngeal resection carries a substantial risk of post-operative complications, and free tissue transfer appears to be an effective means of reconstruction for circumferential defects.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
This SHEA white paper identifies knowledge gaps and challenges in healthcare epidemiology research related to coronavirus disease 2019 (COVID-19) with a focus on core principles of healthcare epidemiology. These gaps, revealed during the worst phases of the COVID-19 pandemic, are described in 10 sections: epidemiology, outbreak investigation, surveillance, isolation precaution practices, personal protective equipment (PPE), environmental contamination and disinfection, drug and supply shortages, antimicrobial stewardship, healthcare personnel (HCP) occupational safety, and return-to-work policies. Each section highlights three critical healthcare epidemiology research questions, with detailed descriptions provided in supplementary materials. This research agenda calls for translational studies ranging from laboratory-based basic science research to well-designed, large-scale studies and health outcomes research. Research gaps and challenges related to nursing homes and social disparities are included. Collaborations across disciplines, areas of expertise and geographic locations will be critical.
The Clinical and Translational Science Awards (CTSA) Consortium, a network of academic health care institutions with CTSA hubs, is charged with improving the national clinical and translational research enterprise. The CTSA Consortium and the NIH National Center for Advancing Translational Sciences implemented the Common Metrics Initiative, comprising standardized metrics and a shared performance improvement framework. This article summarizes hubs’ perspectives on its value during the initial implementation.
Methods:
The value was assessed across 58 hubs. Survey items assessed change in perceived ability to manage performance and advance clinical and translational science. Semi-structured interviews elicited hubs’ perspectives on meaningfulness and value-added of the Common Metrics Initiative and hubs’ recommendations.
Results:
Hubs considered their ability to manage performance to have improved, but there was no change in perceived ability to advance clinical and translational science. The initiative added value by providing a formal structured process, enabling strategic conversations, facilitating improvements in processes, providing an external impetus for improvement, and providing justification for funds invested. Hubs were concerned about the usefulness of the metrics chosen and whether the value added was sufficient relative to the effort required. Hubs recommended useful benchmarking, disseminating best practices and promoting peer-to-peer learning, and expanding the use of data to inform the initiative.
Conclusions:
Implementing Common Metrics and a performance improvement framework yielded concrete short-term benefits, but concerns about usefulness remained, particularly considering the effort required. The Common Metrics Initiative should focus on facilitating cross-hub collaboration around metrics that address high-priority impact areas for individual hubs and the Consortium.
The Common Metrics Initiative aims to develop and field metrics to improve research processes within the national Clinical and Translational Science Award (CTSA) Consortium. A Median Accrual Ratio (MAR) common metric was developed to assess the results of efforts to increase subject accrual into a set of clinical trials within the expected time period. A pilot test of the MAR was undertaken at Tufts Clinical and Translational Science Institute (CTSI) with eight CTSA Consortium hubs. Post-pilot interviews were conducted with 9 CTSA Principal Investigators (PIs) and 23 pilot team members. Over three-quarters (78%) of respondents reported that the MAR could be useful for performance improvement, but they also described limitations or concerns. The most commonly cited barrier to MAR use for performance improvement was difficulty in interpreting the single value that is produced. Most respondents were interested in using the MAR to assess recruitment at an individual trial level. A majority of respondents (63%) had mixed opinions about aggregating metric results across the CTSA Consortium for comparison or benchmarking. Collecting data about additional contextual factors, and comparing accrual between subgroups, were cited as potentially helping address concerns about aggregation. Significant challenges remain in ensuring that the MAR can be sufficiently useful for collaborative process improvement. We offer recommendations to potentially improve metric usefulness.
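The core MAR computation described above, per-trial actual versus expected accrual summarised as a single median, can be sketched as follows. The trial figures are invented for illustration, and the exact definition of "expected accrual" in the pilot may differ from this simplification.

```python
# Minimal sketch of a Median Accrual Ratio (MAR)-style computation:
# the median, across trials, of actual / expected accrual.
# The trial portfolio below is hypothetical.
from statistics import median


def median_accrual_ratio(trials):
    """trials: list of (actual_enrolled, expected_enrolled) pairs."""
    ratios = [actual / expected for actual, expected in trials]
    return median(ratios)


trials = [(40, 50), (30, 30), (10, 40), (55, 50)]  # hypothetical hub portfolio
print(f"MAR = {median_accrual_ratio(trials):.2f}")
```

This toy example also illustrates the interpretability concern respondents raised: the single median masks the wide spread of per-trial ratios (here 0.25 to 1.1), which is why trial-level views and contextual factors were of interest.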
Epidemiological studies indicate that individuals with one type of mental disorder have an increased risk of subsequently developing other types of mental disorders. This study aimed to undertake a comprehensive analysis of pair-wise lifetime comorbidity across a range of common mental disorders based on a diverse range of population-based surveys.
Methods
The WHO World Mental Health (WMH) surveys assessed 145 990 adult respondents from 27 countries. Based on retrospectively-reported age-of-onset for 24 DSM-IV mental disorders, associations were examined between all 548 logically possible temporally-ordered disorder pairs. Overall and time-dependent hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated using Cox proportional hazards models. Absolute risks were estimated using the product-limit method. Estimates were generated separately for men and women.
Results
Each prior lifetime mental disorder was associated with an increased risk of subsequent first onset of each other disorder. The median HR was 12.1 (mean = 14.4; range 5.2–110.8, interquartile range = 6.0–19.4). The HRs were most prominent between closely-related mental disorder types and in the first 1–2 years after the onset of the prior disorder. Although HRs declined with time since prior disorder, significantly elevated risk of subsequent comorbidity persisted for at least 15 years. Appreciable absolute risks of secondary disorders were found over time for many pairs.
Conclusions
Survey data from a range of sites confirm that comorbidity between mental disorders is common. Understanding the risks of temporally secondary disorders may help design practical programs for primary prevention of secondary disorders.
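The product-limit method cited in the Methods above is the Kaplan-Meier estimator: at each onset time, the survival probability is multiplied by the fraction of the at-risk group that did not experience the event. A minimal sketch with invented event and censoring times (no ties, for simplicity) is below; the absolute risk of a secondary disorder by time t is then 1 − S(t).

```python
# Minimal sketch of the product-limit (Kaplan-Meier) estimator with
# right-censoring. Event/censoring times are hypothetical and assumed distinct.
def kaplan_meier(times):
    """times: list of (time, event) pairs, event=1 for onset, 0 for censoring.
    Returns [(t, S(t))] survival estimates at each event time."""
    at_risk = len(times)
    surv = 1.0
    out = []
    for t, event in sorted(times):
        if event:
            surv *= (at_risk - 1) / at_risk  # fraction surviving this event time
            out.append((t, surv))
        at_risk -= 1  # this subject leaves the risk set either way
    return out


data = [(1, 1), (2, 0), (3, 1), (4, 1), (5, 0), (6, 1)]  # hypothetical years
for t, s in kaplan_meier(data):
    print(f"t={t}: S(t)={s:.3f}, absolute risk={1 - s:.3f}")
```

A production analysis would additionally handle tied event times and, as in the study, pair these absolute risks with Cox-model hazard ratios for the relative comparisons.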
The Clinical and Translational Science Awards (CTSA) Consortium, about 60 National Institutes of Health (NIH)-supported CTSA hubs at academic health care institutions nationwide, is charged with improving the clinical and translational research enterprise. Together with the NIH National Center for Advancing Translational Sciences (NCATS), the Consortium implemented Common Metrics and a shared performance improvement framework.
Methods:
Initial implementation across hubs was assessed using quantitative and qualitative methods over a 19-month period. The primary outcome was implementation of three Common Metrics and the performance improvement framework. Challenges and facilitators were elicited.
Results:
Among 59 hubs with data, all began implementing Common Metrics, but about one-third had completed all activities for three metrics within the study period. The vast majority of hubs computed metric results and undertook activities to understand performance. Differences in completion appeared in developing and carrying out performance improvement plans. Seven key factors affected progress: hub size and resources, hub prior experience with performance management, alignment of local context with needs of the Common Metrics implementation, hub authority in the local institutional structure, hub engagement (including CTSA Principal Investigator involvement), stakeholder engagement, and attending training and coaching.
Conclusions:
Implementing Common Metrics and performance improvement in a large network of research-focused organizations proved feasible but required substantial time and resources. Considerable heterogeneity across hubs in data systems, existing processes and personnel, organizational structures, and local priorities of home institutions created disparate experiences across hubs. Future metric-based performance management initiatives across heterogeneous local contexts should anticipate and account for these types of differences.
Shared patient–clinician decision-making is central to choosing between medical treatments. Decision support tools can have an important role to play in these decisions. We developed a decision support tool for deciding between nonsurgical treatment and surgical total knee replacement for patients with severe knee osteoarthritis. The tool aims to provide likely outcomes of alternative treatments based on predictive models using patient-specific characteristics. To make those models relevant to patients with knee osteoarthritis and their clinicians, we involved patients, family members, patient advocates, clinicians, and researchers as stakeholders in creating the models.
Methods:
Stakeholders were recruited through local arthritis research, advocacy, and clinical organizations. After brief methodological education sessions, stakeholders’ views were solicited through quarterly patient or clinician stakeholder panel meetings and incorporated into all aspects of the project.
Results:
Stakeholders participated in each aspect of the research, from determining the outcomes of interest to providing input on the design of the user interface displaying outcome predictions, and 86% (12/14) remained engaged throughout the project. Stakeholder engagement ensured that the prediction models that form the basis of the Knee Osteoarthritis Mathematical Equipoise Tool and its user interface were relevant for patient–clinician shared decision-making.
Conclusions:
Methodological research has the opportunity to benefit from stakeholder engagement by ensuring that the perspectives of those most impacted by the results are involved in study design and conduct. While additional planning and investments in maintaining stakeholder knowledge and trust may be needed, they are offset by the valuable insights gained.
To determine sociodemographic factors associated with occupational, recreational and firearm-related noise exposure.
Methods
This nationally representative, multistage, stratified, cluster cross-sectional study surveyed eligible National Health and Nutrition Examination Survey participants aged 20–69 years (n = 4675) about exposure to occupational and recreational noise and recurrent firearm usage; associations were assessed using weighted multivariate logistic regression analysis.
Results
Thirty-four per cent of participants had exposure to occupational noise and 12 per cent to recreational noise, and 13 per cent repeatedly used firearms. Males were more likely than females to have exposure to all three noise types (adjusted odds ratio range = 2.63–14.09). Hispanics and Asians were less likely to have exposure to the three noise types than Whites. Blacks were less likely than Whites to have occupational and recurrent firearm noise exposure. Those with insurance were 26 per cent less likely to have exposure to occupational noise than those without insurance (adjusted odds ratio = 0.74, 95 per cent confidence interval = 0.60–0.93).
Conclusion
Whites, males and uninsured people are more likely to have exposure to potentially hazardous loud noise.
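The prevalence percentages this abstract reports come from a design-weighted survey, where each respondent's answer is scaled by a survey weight rather than counted equally. A minimal sketch of that weighted-prevalence calculation is below; the respondents and weights are invented, and a real NHANES analysis would also account for strata and clusters when computing variances.

```python
# Minimal sketch of a design-weighted prevalence estimate: the weighted share
# of respondents reporting an exposure. Flags and weights are hypothetical.
def weighted_prevalence(responses):
    """responses: list of (exposed_flag, survey_weight) pairs."""
    total_weight = sum(w for _, w in responses)
    exposed_weight = sum(w for flag, w in responses if flag)
    return exposed_weight / total_weight


sample = [(1, 1.5), (0, 0.8), (1, 2.0), (0, 1.2), (0, 2.5)]  # invented data
print(f"weighted prevalence = {weighted_prevalence(sample):.1%}")
```

The adjusted odds ratios in the Results would then come from a logistic regression fitted with these same weights, so that estimates generalise to the national population rather than the raw sample.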