Managing clinical trials is a complex process requiring careful integration of human, technological, compliance, and operational factors for success. We collaborated with experts to develop a multi-axial Clinical Trials Management Ecosystem (CTME) maturity model (MM) to help institutions identify best practices for CTME capabilities.
Methods:
A working group of research informaticists was established. An online session on maturity models was hosted, followed by a review of the candidate domain axes and finalization of the axes. Next, maturity level attributes were defined for min/max levels (level 1 and level 5) for each axis of the CTME MM, followed by the intermediate levels. A REDCap survey comprising the model’s statements was then created, and a subset of working group members tested the model by completing it at their respective institutions. The finalized survey was distributed to all working group members.
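To illustrate how such a multi-axial self-assessment might be tallied, the following is a minimal sketch in Python. The axis column names, the 1–5 coding, and the CSV layout are illustrative assumptions, not the actual REDCap instrument.

```python
# Hypothetical sketch: summarising per-axis CTME maturity levels from a
# REDCap-style CSV export. Column names and the 1-5 coding are assumptions.
import pandas as pd

AXES = [
    "study_management", "regulatory_audit", "financial_management",
    "investigational_product", "subject_identification", "subject_management",
    "data", "reporting_analytics", "system_integration",
    "staff_training", "organizational_maturity",
]

def summarise_maturity(csv_path: str) -> pd.DataFrame:
    """Return median and range of self-assessed maturity (1-5) per axis."""
    responses = pd.read_csv(csv_path)  # one row per responding institution
    summary = responses[AXES].agg(["median", "min", "max"]).T
    summary.columns = ["median_level", "min_level", "max_level"]
    return summary.sort_values("median_level", ascending=False)

if __name__ == "__main__":
    print(summarise_maturity("ctme_survey_export.csv"))
```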
Results:
We developed a CTME MM comprising five maturity levels across 11 axes: study management, regulatory and audit management, financial management, investigational product management, subject identification and recruitment, subject management, data, reporting analytics & dashboard, system integration and interfaces, staff training & personnel management, and organizational maturity and culture. Informaticists at 22 Clinical and Translational Science Award hubs and one other organization self-assessed their institutional CTME maturity. Respondents reported relatively high maturity for study management and investigational product management. The reporting analytics & dashboard axis was the least mature.
Conclusion:
The CTME MM provides a framework for research organizations to evaluate their current clinical trials management maturity across 11 axes and identify areas for future growth.
While past research suggested that living arrangements are associated with suicide death, no study has examined the impact of sustained living arrangements and the change in living arrangements. Also, previous survival analysis studies only reported a single hazard ratio (HR), whereas the actual HR may change over time. We aimed to address these limitations using causal inference approaches.
Methods
Multi-point data from a general Japanese population sample were used. Participants reported their living arrangements twice within a 5-year time interval. After that, suicide death, non-suicide death and all-cause mortality were evaluated over 14 years. We used inverse probability weighted pooled logistic regression and cumulative incidence curve, evaluating the association of time-varying living arrangements with suicide death. We also studied non-suicide death and all-cause mortality to contextualize the association. Missing data for covariates were handled using random forest imputation.
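The core of this design is a weighted pooled logistic regression on person-period data. The sketch below conveys the idea under simplified assumptions: stabilised inverse probability weights are estimated from a single exposure model, and the variable names ('lives_alone', 'age', 'sex', 'event', 'interval') are illustrative rather than the study's actual covariate set.

```python
# Minimal sketch of inverse-probability-weighted pooled logistic regression
# for a time-varying exposure (living alone) and suicide death.
# Variable names and covariates are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ipw_pooled_logistic(person_period: pd.DataFrame):
    """person_period: one row per participant per follow-up interval, with
    columns 'lives_alone', 'age', 'sex', 'event' (death this interval)
    and 'interval' (time index)."""
    # 1. Model the probability of the observed exposure given covariates.
    ps_model = smf.logit("lives_alone ~ age + C(sex)", data=person_period).fit(disp=0)
    p_exposed = ps_model.predict(person_period)
    p_marginal = person_period["lives_alone"].mean()
    # 2. Stabilised inverse probability weights.
    exposed = person_period["lives_alone"] == 1
    weights = np.where(exposed, p_marginal / p_exposed,
                       (1 - p_marginal) / (1 - p_exposed))
    # 3. Weighted pooled logistic regression of the event on exposure and time.
    outcome = smf.glm(
        "event ~ lives_alone + C(interval)",
        data=person_period,
        family=sm.families.Binomial(),
        freq_weights=weights,
    ).fit()
    return outcome

# Cumulative incidence under each exposure level can then be obtained by
# predicting interval-specific hazards and multiplying survival terms.
```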
Results
A total of 86,749 participants were analysed, with a mean age (standard deviation) of 51.7 (7.90) years at baseline. Of these, 306 died by suicide during the 14-year follow-up. Persistently living alone was associated with an increased risk of suicide death (risk difference [RD]: 1.1%, 95% confidence interval [CI]: 0.3–2.5%; risk ratio [RR]: 4.00, 95% CI: 1.83–7.41), non-suicide death (RD: 7.8%, 95% CI: 5.2–10.5%; RR: 1.56, 95% CI: 1.38–1.74) and all-cause mortality (RD: 8.7%, 95% CI: 6.2–11.3%; RR: 1.60, 95% CI: 1.42–1.79) at the end of the follow-up. The cumulative incidence curve showed that these associations were consistent throughout the follow-up. Across all types of mortality, the increased risk was smaller for those who started to live with someone and those who transitioned to living alone. The results remained robust in sensitivity analyses.
Conclusions
Individuals who persistently live alone have an increased risk of suicide death as well as non-suicide death and all-cause mortality, whereas this impact is weaker for those who change their living arrangements.
This study aimed to parse between-person heterogeneity in growth of impulsivity across childhood and adolescence among participants enrolled in five childhood preventive intervention trials targeting conduct problems. In addition, we aimed to test profile membership in relation to adult psychopathologies. Measurement items representing impulsive behavior across grades 2, 4, 5, 7, 8, and 10, and aggression, substance use, suicidal ideation/attempts, and anxiety/depression in adulthood were integrated from the five trials (N = 4,975). We applied latent class growth analysis to this sample, as well as to samples separated into nonintervention (n = 2,492) and intervention (n = 2,483) participants. Across all samples, profiles were characterized by high, moderate, low, and low-increasing impulsive levels. Regarding adult outcomes, in all samples, the high, moderate, and low profiles endorsed greater levels of aggression compared to the low-increasing profile. There were nuanced differences across samples and profiles on suicidal ideation/attempts and anxiety/depression. Across samples, there were no significant differences between profiles on substance use. Overall, our study helps to inform understanding of the developmental course and prognosis of impulsivity, and adds to collaborative efforts linking data across multiple studies to better understand developmental processes.
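Latent class growth analysis of this kind is usually fit in dedicated software. As a rough, hedged approximation of the idea, repeated impulsivity measures can be clustered as trajectories with a Gaussian mixture, selecting the number of classes by BIC. The grade-specific column names below are assumptions, and this sketch is not the published analysis pipeline.

```python
# Rough approximation of latent class growth analysis: cluster repeated
# impulsivity measures (grades 2, 4, 5, 7, 8, 10) with a Gaussian mixture
# and pick the number of classes by BIC. Column names are assumptions.
import pandas as pd
from sklearn.mixture import GaussianMixture

GRADE_COLS = ["imp_g2", "imp_g4", "imp_g5", "imp_g7", "imp_g8", "imp_g10"]

def fit_trajectory_classes(df: pd.DataFrame, max_classes: int = 6):
    X = df[GRADE_COLS].dropna().to_numpy()
    fits = [
        GaussianMixture(n_components=k, covariance_type="diag",
                        random_state=0).fit(X)
        for k in range(1, max_classes + 1)
    ]
    best = min(fits, key=lambda m: m.bic(X))
    # Class-specific mean trajectories are available in best.means_.
    return best, best.predict(X)
```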
Integrating social and environmental determinants of health (SEDoH) into enterprise-wide clinical workflows and decision-making is one of the most important and challenging aspects of improving health equity. We engaged domain experts to develop a SEDoH informatics maturity model (SIMM) to help guide organizations to address technical, operational, and policy gaps.
Methods:
We established a core expert group consisting of developers, informaticists, and subject matter experts to identify different SIMM domains and define maturity levels. The candidate model (v0.9) was evaluated by 15 informaticists at a Center for Data to Health community meeting. After incorporating feedback, a second evaluation round for v1.0 collected feedback and self-assessments from 35 respondents from the National COVID Cohort Collaborative, the Center for Leading Innovation and Collaboration’s Informatics Enterprise Committee, and a publicly available online self-assessment tool.
Results:
We developed a SIMM comprising seven maturity levels across five domains: data collection policies, data collection methods and technologies, technology platforms for analysis and visualization, analytics capacity, and operational and strategic impact. The evaluation demonstrated relatively high maturity in analytics and technological capacity, but more moderate maturity in operational and strategic impact among academic medical centers. Changes made to the tool in between rounds improved its ability to discriminate between intermediate maturity levels.
Conclusion:
The SIMM can help organizations identify current gaps and next steps in improving SEDoH informatics. Improving the collection and use of SEDoH data is one important component of addressing health inequities.
White kidney bean extract (WKBE) is a nutraceutical often advocated as an anti-obesity agent. The main proposed mechanism for these effects is alpha-amylase inhibition, thereby slowing carbohydrate digestion and absorption. Thus, it is possible that WKBE could impact the gut microbiota and modulate gut health. We investigated the effects of supplementing 20 healthy adults with WKBE for 1 week in a randomised, placebo-controlled crossover trial on the composition of the gut microbiota, gastrointestinal (GI) inflammation (faecal calprotectin), GI symptoms, and stool habits. We conducted in vitro experiments and used a gut model system to explore potential inhibition of alpha-amylase. We gained qualitative insight into participant experiences of using WKBE via focus groups. WKBE supplementation decreased the relative abundance of Bacteroidetes and increased that of Firmicutes; however, there were no significant differences in post-intervention gut microbiota measurements between the WKBE and control arms. There were no significant effects on GI inflammation, constipation-related symptoms, or stool consistency or frequency. Our in vitro and gut model system analyses showed no effects of WKBE on alpha-amylase activity. Our findings suggest that WKBE may modulate the gut microbiota in healthy adults; however, the underlying mechanism is unlikely to be active-site inhibition of alpha-amylase.
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic of 2020-2021 created unprecedented challenges for clinicians in critical care transport (CCT). These CCT services had to rapidly adjust their clinical approaches to evolving patient demographics, a preponderance of respiratory failure, and changing transport utilization strategies. Organizations had to develop and implement new protocols and guidelines in rapid succession, often without the education and training that would have been involved pre-coronavirus disease 2019 (COVID-19). These changes were complicated by the need to protect crew members as well as to optimize patient care. Clinical initiatives included developing an awake proning transport protocol and a protocol to transport intubated proned patients. One service developed a protocol for helmet ventilation to minimize aerosolization risks for patients on noninvasive positive pressure ventilation (NIPPV). While these clinical protocols were developed specifically for COVID-19, the growth in practice will enhance the care of patients with other causes of respiratory failure. Additionally, these processes will apply to future respiratory epidemics and pandemics.
Some psychiatric disorders have been associated with increased risk of miscarriage. However, there is a lack of studies considering a broader spectrum of psychiatric disorders to clarify the role of common as opposed to independent mechanisms.
Aims
To examine the risk of miscarriage among women diagnosed with psychiatric conditions.
Method
We studied registered pregnancies in Norway between 2010 and 2016 (n = 593 009). The birth registry captures pregnancies ending in gestational week 12 or later, and the patient and general practitioner databases were used to identify miscarriages and induced abortions before 12 gestational weeks. Odds ratios of miscarriage according to 12 psychiatric diagnoses were calculated by logistic regression.
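The adjusted odds ratios reported below come from logistic regression models of this kind. The following is a minimal sketch of the approach; the diagnosis indicators, covariates, and column names are illustrative assumptions, not the registry's actual fields or the study's full adjustment set.

```python
# Minimal sketch: adjusted odds ratios of miscarriage by psychiatric diagnosis
# via logistic regression. Variable and column names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_odds_ratios(pregnancies: pd.DataFrame) -> pd.DataFrame:
    """pregnancies: one row per registered pregnancy, with a binary
    'miscarriage' outcome, binary diagnosis indicators and covariates."""
    model = smf.logit(
        "miscarriage ~ anxiety + depression + bipolar + adhd"
        " + maternal_age + parity",
        data=pregnancies,
    ).fit(disp=0)
    ci = model.conf_int()
    return pd.DataFrame({
        "adjusted_OR": np.exp(model.params),
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
    }).drop(index="Intercept")
```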
Results
Miscarriage risk was increased among women with bipolar disorders (adjusted odds ratio 1.35, 95% CI 1.26–1.44), personality disorders (adjusted odds ratio 1.32, 95% CI 1.12–1.55), attention-deficit hyperactivity disorder (adjusted odds ratio 1.27, 95% CI 1.21–1.33), conduct disorders (adjusted odds ratio 1.21, 95% CI 1.01–1.46), anxiety disorders (adjusted odds ratio 1.25, 95% CI 1.23–1.28), depressive disorders (adjusted odds ratio 1.25, 95% CI 1.23–1.27), somatoform disorders (adjusted odds ratio 1.18, 95% CI 1.07–1.31) and eating disorders (adjusted odds ratio 1.14, 95% CI 1.08–1.22). The miscarriage risk was further increased among women with more than one psychiatric diagnosis. Our findings were robust to adjustment for other psychiatric diagnoses, chronic somatic disorders and substance use disorders. After mutual adjustment for co-occurring psychiatric disorders, we also observed a modest increased risk among women with schizophrenia spectrum disorders (adjusted odds ratio 1.22, 95% CI 1.03–1.44).
Conclusions
A wide range of psychiatric disorders were associated with increased risk of miscarriage. The heightened risk of miscarriage among women diagnosed with psychiatric disorders highlights the need for awareness and surveillance of this risk group in antenatal care.
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible to laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves carries most of its information in the 2–4 kHz frequency band, which lies outside the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to that of full third-generation detectors at a fraction of the cost. Such sensitivity changes the expected event rate for detection of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year, and would potentially allow for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.
Clinicians need guidance to address the heterogeneity of treatment responses of patients with major depressive disorder (MDD). While prediction schemes based on symptom clustering and biomarkers have so far not yielded results of sufficient strength to inform clinical decision-making, prediction schemes based on big data predictive analytic models might be more practically useful.
Method.
We review evidence suggesting that prediction equations based on symptoms and other easily-assessed clinical features found in previous research to predict MDD treatment outcomes might provide a foundation for developing predictive analytic clinical decision support models that could help clinicians select optimal (personalised) MDD treatments. These methods could also be useful in targeting patient subsamples for more expensive biomarker assessments.
Results.
Approximately two dozen baseline variables obtained from medical records or patient reports have been found repeatedly in MDD treatment trials to predict overall treatment outcomes (i.e., intervention v. control) or differential treatment outcomes (i.e., intervention A v. intervention B). Similar evidence has been found in observational studies of MDD persistence-severity. However, no treatment studies have yet attempted to develop treatment outcome equations using the full set of these predictors. Promising preliminary empirical results coupled with recent developments in statistical methodology suggest that models could be developed to provide useful clinical decision support in personalised treatment selection. These tools could also provide a strong foundation to increase statistical power in focused studies of biomarkers and MDD heterogeneity of treatment response in subsequent controlled trials.
Conclusions.
Coordinated efforts are needed to develop a protocol for systematically collecting information about established predictors of heterogeneity of MDD treatment response in large observational treatment studies, applying and refining these models in subsequent pragmatic trials, carrying out pooled secondary analyses to extract the maximum amount of information from these coordinated studies, and using this information to focus future discovery efforts in the segment of the patient population in which continued uncertainty about treatment response exists.
In a 1-year survey at a university hospital, we found that 20·6% (81/392) of patients with antibiotic-associated diarrhoea were positive for C. difficile. The most common PCR ribotypes were 012 (14·8%), 027 (12·3%), 046 (12·3%) and 014/020 (9·9%). The incidence rate was 2·6 cases of C. difficile infection per 1000 outpatients.
To develop latent classes of exposure to traumatic experiences before the age of 13 years in an urban community sample and to use these latent classes to predict the development of negative behavioral outcomes in adolescence and young adulthood.
Method
A total of 1815 participants in an epidemiologically based, randomized field trial as children completed comprehensive psychiatric assessments as young adults. Reports of nine types of traumatic experience before age 13 years were used in a latent class analysis to create latent profiles of traumatic experiences. Latent classes were used to predict psychiatric outcomes at age ⩾13 years, criminal convictions, physical health problems and traumatic experiences reported in young adulthood.
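Latent class analysis of binary exposure indicators is conventionally run in dedicated software; as a hedged illustration of the underlying model, a latent class model over 0/1 trauma indicators is simply a Bernoulli mixture, which can be fit with a compact EM loop. The sketch below assumes an (n subjects × 9 indicators) binary array and is not the study's actual estimation code.

```python
# Compact EM for a latent class (Bernoulli mixture) model over binary
# trauma indicators. Illustrative only; X is an (n_subjects x 9) 0/1 array.
import numpy as np

def fit_lca(X: np.ndarray, n_classes: int = 3, n_iter: int = 200, seed: int = 0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class proportions
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # item endorsement probs
    for _ in range(n_iter):
        # E-step: posterior class membership for each subject
        log_lik = (X @ np.log(theta).T) + ((1 - X) @ np.log(1 - theta).T) + np.log(pi)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class proportions and item probabilities
        pi = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)
    return pi, theta, resp.argmax(axis=1)
```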
Results
Three latent classes of childhood traumatic experiences were supported by the data. One class (8% of sample), primarily female, was characterized by experiences of sexual assault and reported significantly higher rates of a range of psychiatric outcomes by young adulthood. Another class (8%), primarily male, was characterized by experiences of violence exposure and reported higher levels of antisocial personality disorder and post-traumatic stress. The final class (84%) reported low levels of childhood traumatic experiences. Parental psychopathology was related to membership in the sexual assault group.
Conclusions
Classes of childhood traumatic experiences predict specific psychiatric and behavioral outcomes in adolescence and young adulthood. The long-term adverse effects of childhood traumas are primarily concentrated in victims of sexual and non-sexual violence. Gender emerged as a key covariate in the classes of trauma exposure and outcomes.
The first aim was to use confirmatory factor analysis (CFA) to test a hypothesis that two factors (internalizing and externalizing) account for lifetime co-morbid DSM-IV diagnoses among adults with bipolar I (BPI) disorder. The second aim was to use confirmatory latent class analysis (CLCA) to test the hypothesis that four clinical subtypes are detectible: pure BPI; BPI plus internalizing disorders only; BPI plus externalizing disorders only; and BPI plus internalizing and externalizing disorders.
Method
A cohort of 699 multiplex BPI families was studied, ascertained and assessed (1998–2003) by the National Institute of Mental Health Genetics Initiative Bipolar Consortium: 1156 with BPI disorder (504 adult probands; 594 first-degree relatives; and 58 more distant relatives) and 563 first-degree relatives without BPI. Best-estimate consensus DSM-IV diagnoses were based on structured interviews, family history and medical records. MPLUS software was used for CFA and CLCA.
Results
The two-factor CFA model fit the data very well, and could not be improved by adding or removing paths. The four-class CLCA model fit better than exploratory LCA models or post-hoc-modified CLCA models. The two factors and four classes were associated with distinctive clinical course and severity variables, adjusted for proband gender. Co-morbidity, especially more than one internalizing and/or externalizing disorder, was associated with a more severe and complicated course of illness. The four classes demonstrated significant familial aggregation, adjusted for gender and age of relatives.
Conclusions
The BPI two-factor and four-cluster hypotheses demonstrated substantial confirmatory support. These models may be useful for subtyping BPI disorders, predicting course of illness and refining the phenotype in genetic studies.
To examine cross-national patterns and correlates of lifetime and 12-month comorbid DSM-IV anxiety disorders among people with lifetime and 12-month DSM-IV major depressive disorder (MDD).
Method.
Nationally or regionally representative epidemiological interviews were administered to 74 045 adults in 27 surveys across 24 countries in the WHO World Mental Health (WMH) Surveys. DSM-IV MDD, a wide range of comorbid DSM-IV anxiety disorders, and a number of correlates were assessed with the WHO Composite International Diagnostic Interview (CIDI).
Results.
45.7% of respondents with lifetime MDD (32.0–46.5% inter-quartile range (IQR) across surveys) had one or more lifetime anxiety disorders. A slightly higher proportion of respondents with 12-month MDD had lifetime anxiety disorders (51.7%, 37.8–54.0% IQR) and only slightly lower proportions of respondents with 12-month MDD had 12-month anxiety disorders (41.6%, 29.9–47.2% IQR). Two-thirds (68%) of respondents with lifetime comorbid anxiety disorders and MDD reported an earlier age-of-onset (AOO) of their first anxiety disorder than of their MDD, while 13.5% reported an earlier AOO of MDD and the remaining 18.5% reported the same AOO for both disorders. Women and previously married people had consistently elevated rates of lifetime and 12-month MDD as well as comorbid anxiety disorders. Consistently higher proportions of respondents with 12-month anxious than non-anxious MDD reported severe role impairment (64.4 v. 46.0%; χ²₁ = 187.0, p < 0.001) and suicide ideation (19.5 v. 8.9%; χ²₁ = 71.6, p < 0.001). Significantly more respondents with 12-month anxious than non-anxious MDD received treatment for their depression in the 12 months before interview, but this difference was more pronounced in high-income countries (68.8 v. 45.4%; χ²₁ = 108.8, p < 0.001) than in low/middle-income countries (30.3 v. 20.6%; χ²₁ = 11.7, p < 0.001).
Conclusions.
Patterns and correlates of comorbid DSM-IV anxiety disorders among people with DSM-IV MDD are similar across WMH countries. The narrow IQR of the proportion of respondents with temporally prior AOO of anxiety disorders than comorbid MDD (69.6–74.7%) is especially noteworthy. However, the fact that these proportions are not higher among respondents with 12-month than lifetime comorbidity means that temporal priority between lifetime anxiety disorders and MDD is not related to MDD persistence among people with anxious MDD. This, in turn, raises complex questions about the relative importance of temporally primary anxiety disorders as risk markers v. causal risk factors for subsequent MDD onset and persistence, including the possibility that anxiety disorders might primarily be risk markers for MDD onset and causal risk factors for MDD persistence.
Although variation in the long-term course of major depressive disorder (MDD) is not strongly predicted by existing symptom subtype distinctions, recent research suggests that prediction can be improved by using machine learning methods. However, it is not known whether these distinctions can be refined by added information about co-morbid conditions. The current report presents results on this question.
Method.
Data came from 8261 respondents with lifetime DSM-IV MDD in the World Health Organization (WHO) World Mental Health (WMH) Surveys. Outcomes included four retrospectively reported measures of persistence/severity of course (years in episode; years in chronic episodes; hospitalization for MDD; disability due to MDD). Machine learning methods (regression tree analysis; lasso, ridge and elastic net penalized regression) followed by k-means cluster analysis were used to augment previously detected subtypes with information about prior co-morbidity to predict these outcomes.
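The following is a hedged sketch of the described two-stage pipeline for a single outcome: penalised (elastic net) regression produces out-of-fold predictions of a persistence/severity measure, and k-means then groups those predictions into risk clusters. The feature matrix and outcome are placeholders; the published analysis used several penalised methods, regression trees, and four outcomes jointly.

```python
# Hedged sketch of the described pipeline: penalised regression to predict
# a persistence/severity outcome, then k-means on the cross-validated
# predictions to form risk clusters. Features are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def risk_clusters(X: np.ndarray, y: np.ndarray, n_clusters: int = 3):
    """X: baseline subtype and co-morbidity indicators; y: e.g. years in episode."""
    model = make_pipeline(StandardScaler(), ElasticNetCV(cv=5, random_state=0))
    # Out-of-fold predictions avoid clustering on over-fitted values.
    y_hat = cross_val_predict(model, X, y, cv=10)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        y_hat.reshape(-1, 1)
    )
    return y_hat, labels
```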
Results.
Predicted values were strongly correlated across outcomes. Cluster analysis of predicted values found three clusters with consistently high, intermediate or low values. The high-risk cluster (32.4% of cases) accounted for 56.6–72.9% of high persistence, high chronicity, hospitalization and disability. This high-risk cluster had both higher sensitivity and a higher positive likelihood ratio (LR+; relative proportions of cases in the high-risk cluster versus other clusters having the adverse outcomes) than did the corresponding cluster in a parallel analysis that excluded measures of co-morbidity as predictors.
Conclusions.
Although the results using the retrospective data reported here suggest that useful MDD subtyping distinctions can be made with machine learning and clustering across multiple indicators of illness persistence/severity, replication with prospective data is needed to confirm this preliminary conclusion.
Methane (CH4) emissions by dairy cows vary with feed intake and diet composition. Even when fed on the same diet at the same intake, however, variation between cows in CH4 emissions can be substantial. The extent of variation in CH4 emissions among dairy cows on commercial farms is unknown, but developments in methodology now permit quantification of CH4 emissions by individual cows under commercial conditions. The aim of this research was to assess variation among cows in emissions of eructed CH4 during milking on commercial dairy farms. Enteric CH4 emissions from 1964 individual cows across 21 farms were measured for at least 7 days/cow using CH4 analysers at robotic milking stations. Cows were predominantly of Holstein Friesian breed and remained on the same feeding systems during sampling. Effects of explanatory variables on average CH4 emissions per individual cow were assessed by fitting a linear mixed model. Significant effects were found for week of lactation, daily milk yield and farm. The effect of milk yield on CH4 emissions varied among farms. Considerable variation in CH4 emissions was observed among cows after adjusting for fixed and random effects, with the coefficient of variation ranging from 22% to 67% within farms. This study confirms that enteric CH4 emissions vary among cows on commercial farms, suggesting that there is considerable scope for selecting individual cows and management systems with reduced emissions.
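As a minimal sketch of the linear mixed model described above, per-cow average emissions can be regressed on week of lactation and milk yield with farm as a random effect, allowing the milk-yield slope to vary by farm. Column names and units here are assumptions, not the study's actual dataset.

```python
# Hedged sketch: linear mixed model for per-cow CH4 emissions with farm as a
# random effect. Column names and units are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def fit_ch4_model(cows: pd.DataFrame):
    """cows: one row per cow with average CH4, week of lactation, milk yield, farm."""
    model = smf.mixedlm(
        "ch4_g_per_day ~ week_of_lactation + milk_yield",
        data=cows,
        groups=cows["farm"],
        re_formula="~milk_yield",  # lets the milk-yield effect vary by farm
    ).fit()
    return model

# model.summary() reports the fixed effects; residual spread within farms
# indicates between-cow variation after adjustment.
```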
This chapter discusses the diagnosis, evaluation and management of upper airway emergencies. It presents special circumstances with regard to foreign bodies in the airway. The first step in the evaluation of the patient with suspected upper airway emergency is to determine the need for emergent intubation or surgical airway. If possible, a brief history should be obtained focusing on history of cancer, allergies, exposure to medications including ACE inhibitors, family history of C1 esterase inhibitor deficiency, trauma, and recent surgery. A targeted physical examination should include assessment for stridor, hoarseness, urticaria, edema of skin, lips, mouth, and throat. Given the high-risk, time-sensitive nature of these presentations, all practitioners should be familiar with their local resources, algorithms, and airway management options prior to seeing patients. In patients with a rapidly evolving upper airway obstruction, awake evaluation can provide invaluable information about potential complications before paralytics are administered.
Data on antibiotic susceptibilities of large cohorts of Enterobacteriaceae isolated from urine collected in the community are scarce. We report the susceptibilities of Enterobacteriaceae isolated from urine of non-selected community populations in a metropolitan area (Leeds and Bradford, UK) over 2 years. Isolates (n = 6614) were identified as follows: Escherichia coli (n = 5436), Klebsiella spp. (n = 525), Proteus mirabilis (n = 305), and 15 other species (n = 290); 58 isolates were unidentified. Ampicillin resistance was observed in 53% of E. coli and 28% of P. mirabilis; ⩾34% of E. coli and P. mirabilis were non-susceptible to trimethoprim compared to 20% of Klebsiella spp.; nitrofurantoin resistance was observed in 3% of E. coli and 15% of Klebsiella spp. The occurrence of extended-spectrum β-lactamases (ESBL) was low (6%), as was non-susceptibility to carbapenems, cefepime and tigecycline (<2%). Further surveillance is required to monitor this level of resistance, and additional clinical studies are needed to understand the impact on the outcome of current empirical prescribing decisions.