It is difficult to understand the safety profile of a drug from a single clinical trial, since clinical trials are typically designed to demonstrate efficacy and their sample sizes are not powered for safety assessment. Meta-analysis is therefore a valuable tool for inferring safety profiles from multiple studies. Individual clinical trials usually report the incidence proportions of adverse events (AEs) observed in the study. The follow-up duration may be study-specific and, moreover, may differ between the treatment groups within a single study. This often occurs in oncology clinical trials, and when it does, the aggregated relative risk of AEs is hard to interpret and the risk of AEs cannot readily be compared between treatment groups with standard meta-analysis techniques. Progression-free survival or overall survival is often used as the primary endpoint in oncology clinical trials, and the Kaplan–Meier estimates of the survival functions for the primary endpoint are typically presented graphically, providing information about the follow-up duration for the AEs. We propose novel meta-analysis methods for AEs that address differences in follow-up durations by efficiently utilizing the Kaplan–Meier estimates of the primary endpoint. We apply our approach to both simulated data and real data from a meta-analysis of bevacizumab. Simulation studies demonstrate that the proposed methods perform well when follow-up time differs between trials and groups.
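One way to make the core idea concrete: approximate each arm's person-time as the area under the (digitized) Kaplan–Meier step curve of the primary endpoint, then compare arms on incidence rates rather than incidence proportions. The sketch below uses made-up step times, survival values, and event counts; it is an illustration of the idea, not the estimator proposed in the paper.

```python
# Hypothetical sketch: approximate per-arm person-time from digitized
# Kaplan-Meier survival estimates, then compare adverse-event rates.
# All inputs below are invented for illustration.

def person_time(step_times, surv, n_patients):
    """Area under a right-continuous KM step function, times n:
    the expected total follow-up (person-time) in one arm, restricted
    to the last observed step time."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(step_times, surv):
        area += prev_s * (t - prev_t)   # survival is flat between steps
        prev_t, prev_s = t, s
    return n_patients * area

def rate_ratio(events_trt, pt_trt, events_ctl, pt_ctl):
    """Incidence-rate ratio of adverse events, treatment vs control."""
    return (events_trt / pt_trt) / (events_ctl / pt_ctl)

# Example: control arm followed longer than treatment arm.
pt_t = person_time([6, 12, 18], [0.8, 0.5, 0.3], n_patients=100)       # 1380
pt_c = person_time([6, 12, 18, 24], [0.9, 0.7, 0.5, 0.4], n_patients=100)
rr = rate_ratio(events_trt=30, pt_trt=pt_t, events_ctl=25, pt_ctl=pt_c)
```

A rate-based comparison like this automatically adjusts for the differential follow-up that makes raw incidence proportions misleading.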
This study aims to provide an updated systematic review of the clinicopathological features, treatment modalities and survival outcomes of SMARCB1-deficient sinonasal carcinoma (SDSC).
Methods
Five databases were searched: PubMed, Cochrane, Embase, Web of Science and Scopus. Extracted information included demographic data, clinicopathological characteristics and survival outcomes.
Results
A total of 70 studies comprising 372 patients were included. Univariable and multivariable analyses of prognostic factors for SDSC showed that N+ disease was a statistically significant poor prognostic factor for overall survival, while surgical treatment was a favourable prognostic factor. Treatment with surgery combined with chemotherapy and/or radiotherapy conferred a significantly better prognosis than chemotherapy or radiotherapy alone.
Conclusion
Our study suggests that N+ disease and isolated treatment with radiotherapy or chemotherapy alone are significant poor prognostic factors for SMARCB1-deficient sinonasal carcinoma, while patients who undergo surgical treatment have significantly better prognosis.
This paper introduces a novel expectation-maximization (EM) algorithm for estimating general phase-type (PH) distributions from left-truncated and right-censored (LTRC) data, a common challenge in survival analysis. The proposed algorithm is highly efficient with computational complexity that scales with the number of nonzero elements in the generator matrix. This feature makes the estimation of high-dimensional, sparse PH models computationally tractable and enables the practical use of the computationally intensive extended information criterion for model selection. Numerical experiments demonstrate its significant speed advantage over a modern benchmark and the applicability of PH models to complex lifetime data.
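The EM idea can be sketched in one of the simplest phase-type settings, a two-phase hyperexponential, with right censoring handled in the E-step via the exponential's memoryless mean residual life. This toy is an illustration of EM for censored PH-type data only, not the paper's sparse general-PH algorithm (left truncation is omitted, and all names and inputs are invented).

```python
import math

# Illustrative EM for a two-phase hyperexponential (a simple PH
# distribution) fitted to right-censored data. Not the paper's algorithm.

def em_hyperexp(times, censored, n_iter=200):
    """times: observation times; censored[i] is True if right-censored."""
    pi = [0.5, 0.5]
    lam = [0.5, 2.0]                      # distinct initial rates per phase
    for _ in range(n_iter):
        resp_sum = [0.0, 0.0]             # sum of responsibilities per phase
        time_sum = [0.0, 0.0]             # expected total lifetime per phase
        for t, c in zip(times, censored):
            # E-step: responsibility of each phase for this observation
            # (survival function if censored, density if observed)
            w = [pi[k] * (math.exp(-lam[k] * t) if c
                          else lam[k] * math.exp(-lam[k] * t))
                 for k in (0, 1)]
            tot = w[0] + w[1]
            for k in (0, 1):
                g = w[k] / tot
                resp_sum[k] += g
                # censored: expected lifetime = t + mean residual 1/lam_k
                time_sum[k] += g * (t + 1.0 / lam[k] if c else t)
        # M-step: closed-form updates
        n = len(times)
        pi = [resp_sum[k] / n for k in (0, 1)]
        lam = [resp_sum[k] / time_sum[k] for k in (0, 1)]
    return pi, lam
```

The E-step/M-step structure is the same in the general case; what the paper adds is an efficient treatment of large sparse generator matrices and of left truncation.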
Late-life depression (LLD) is associated with cognitive impairment and an elevated risk of dementia, yet the influence of age at depression onset on cognitive prognosis remains unclear. Emerging evidence suggests that late-onset depression, defined as a first depressive episode in later adulthood, may reflect distinct neuropathological mechanisms and predict more severe cognitive decline and greater dementia risk than early-onset depression.
Aims
This study aimed to investigate whether late-onset depression is linked to domain-specific cognitive impairment and higher risk of incident dementia among older adults with major depressive disorder.
Method
We analysed UK Biobank data from older adults (aged ≥60 years) with primary care linkage, classifying participants into depression-free controls, early-life depression, late-life depression with early onset (LLD-EO) and late-life depression with late onset (LLD-LO). Cognitive performance across five domains was assessed cross-sectionally at baseline using touchscreen tasks. Incident dementia was evaluated prospectively using clinical records up to 2022. Multi-level models with inverse-probability weighting and survey-adjusted mixed modelling were applied to assess group differences in cognitive function, controlling for demographic covariates, lifestyle factors and physical and mental health conditions. A Cox regression model was employed to estimate dementia risk across groups.
Results
Among 75 064 participants aged ≥60 years, the LLD-LO group (n = 4858) showed significantly worse cognitive performance than healthy controls, particularly on fluid intelligence and visuospatial memory. The LLD-LO group performed worse than LLD-EO on fluid intelligence. During follow-up, LLD-LO was associated with a higher risk of incident dementia (hazard ratio 1.42–1.52) across all adjusted models. Deficits in fluid intelligence and visuospatial memory partially mediated the link between LLD-LO and subsequent dementia.
Conclusions
Late-onset depression was associated with more severe impairment in fluid intelligence than LLD-EO, and with a higher incidence of dementia than in depression-free individuals.
Although often associated with ageing, disability is becoming increasingly prevalent among young adults. While disability can pose a substantial psychological burden for young adults on critical pathways to establish the foundations for their future, the mental health risks faced by this population remain underexplored.
Aims
This study aimed to (1) assess the association between disability – including its presence, severity and type – and the risk of depressive and anxiety disorders, and (2) examine whether this association varies across sociodemographic factors, health behaviours and comorbidities in a young adult population.
Methods
We conducted a population-based cohort study using linked data from the National Disability Registry and the National Health Insurance Database of South Korea. A total of 6,058,290 individuals aged 20–39 years who underwent health check-ups between 2009 and 2012 were followed through 2022. Cox proportional hazards models were used to estimate adjusted hazard ratios (aHRs) for depressive and anxiety disorders.
Results
Individuals with disabilities had significantly higher risks of depressive (aHR: 1.58, 95% CI: 1.55–1.60) and anxiety disorders (aHR: 1.50, 95% CI: 1.42–1.59). Increased risks were consistently observed across various disability types with the highest risk observed for mental health-related disabilities in depression (aHR: 4.98, 95% CI 4.62–5.37) and epilepsy-related disabilities in anxiety disorders (aHR: 12.05, 95% CI 8.73–16.63). Subgroup analyses revealed stronger associations among individuals in their 20s, low-income groups, non-smokers and those abstaining from alcohol, compared to their respective counterparts.
Conclusions
Young adults with disabilities, a population that has been relatively overlooked in policy discussions, warrant greater policy attention in relation to their mental health.
In this article, the effects of regional autocratic linkage on the survival of autocratic regimes are analysed. Scholars have suggested that regional factors shape regime survival through processes of diffusion. However, in most accounts, diffusion is simply derived from characteristics of the region, such as the number or proportion of regional autocracies. In contrast, it is argued here that it is the actual linkages between countries that must be examined. Regional political, economic and social ties between autocratic regimes create domestic and external stakes in the regime, counterbalance democratisation pressure and facilitate autocratic learning. The study employs the average volume of trade, migration and diplomatic exchanges between autocratic regimes within a region as proxies for regional autocratic linkage, and shows that regional autocratic linkage is on the rise. Applying Cox survival models to a dataset of regional autocratic linkage and regime survival between 1946 and 2009, it is found that regional autocratic linkage significantly reduces the likelihood of autocratic regime breakdown. These effects hold when the proportion of autocratic regimes within a region is controlled for, suggesting that one must look beyond the characteristics of the countries within a region and focus on the ties and linkages between them.
The objective of this article is to contribute to an understanding of how a population of social economy enterprises evolves when faced with an economic crisis, drawing on the case of Montreal. We apply a two-step approach. First, we use an innovative discrete-time survival model that takes spatial heterogeneity into account. Second, this model is used to predict the survival of different forms of the social economy, according to various typologies proposed for identifying hybrid organizational forms. We find that certain organizational forms (the professional social economy) have fared better than others (the emerging social economy). Organizations combining several sources of financing and several forms of paid or volunteer work likewise have greater chances of survival.
Co-optation is one of the most important concepts that have been discussed in recent years to explain the persistence of autocratic regimes. Institutions like parties, legislatures, and elections have all been shown to fulfil a co-optation function. However, a broader assessment of co-optation beyond an institutional focus has yet to be undertaken. This article addresses this lacuna and presents a comprehensive concept and operationalisation of co-optation in autocratic regimes. The article argues that co-optation is constituted by the compensation of the regime’s vulnerability to threats posed by powerful societal pressure groups. The article highlights how compensation of vulnerability can be achieved through institutional inclusion or material benefits, and how this has to be tailored to the respective pressure group’s requirements. I collect twenty-six indicators of vulnerability and compensation related to six societal pressure groups and aggregate them into an index of co-optation that reflects the structure of the theoretical concept. The index is evaluated by examining distributions within and across categories of autocratic legislatures, and by testing its effect on autocratic regime breakdown in a set of Cox survival models.
Timely dissemination of clinical trial results is essential to advance knowledge, guide practice, and improve outcomes, yet many trials remain unpublished, limiting impact. We examine what drives publication and timelines across three major clinical domains.
Methods:
We analyzed study design and factors associated with dissemination of interventional trials, focusing on cardiovascular disease (CVD), cancer, and COVID-19. A total of 10,785 trials (CVD: 5929; cancer: 4210; COVID-19: 646) were linked to PubMed publications using National Clinical Trial identifiers. Study design, operational, and transparency-related features were assessed as predictors of time to publication, defined as the interval from study completion to first publication, using Cox proportional hazards models.
Results:
COVID-19 trials had the highest publication rate (49.6%), followed by CVD (42.3%) and cancer (32.9%), likely reflecting pandemic-related prioritization. Faster publication was associated with larger enrollment, more sites, result posting, randomization, data monitoring committee (DMC) presence, and higher blinding levels (all p < 0.05). Slower publication was linked to supportive care or diagnostic trials (CVD), basic science (cancer), and later COVID-19 trial completion. In subgroups, U.S. facility presence (CVD) and phase 3 design (cancer) predicted faster publication, while healthy volunteer inclusion (CVD) predicted slower publication. Among trials with a DMC, more secondary outcomes were linked to faster publication across all disease areas.
Conclusions:
Key study design and operational factors consistently predict whether and when trials are published. Strengthening methodological rigor, result reporting, and multi-site collaboration may accelerate timely dissemination into peer-reviewed literature.
In this study, we investigate the impact of the age of prime ministers and ministers on the stability of governments across 21 democracies. We examine this issue by using Cox survival analysis, leveraging an original dataset and adopting a comparative perspective. The findings of the study document that younger prime ministers face a lower risk of government discretionary termination compared to their older counterparts. This effect does not appear to be statistically significant for cabinet ministers. By shedding light on this uncharted relationship, we contribute to the flourishing literature on youth representation in politics and the established research agenda on the factors affecting the survival in office of democratic governments. We conclude the study by discussing the implications of the findings for democracy and suggesting avenues for future research.
Palliative care enhances quality of life, but rural Australia faces significant inequities in access, and psychosocial distress, an important yet often overlooked aspect of care, is under-recognized in these settings. This study examines how psychosocial distress evolves in rural palliative care patients using the Death and Dying Distress Scale (DADDS).
Methods
A longitudinal study was conducted with palliative care patients in rural hospitals on Australia’s east coast. Distress levels were measured using DADDS at multiple timepoints. Mixed-effects models assessed distress trajectories, while survival analyses (Weibull model) examined whether average distress changes predicted survival duration. For comparability, DADDS scores in mixed-effects models were standardized (0–100%), whereas survival analyses used raw total score changes.
Results
Adjusted mean total DADDS was 37.14 ± 22.67, with the highest distress in fear of suffering and pain (49.95 ± 26.56) and the lowest in fear of sudden death (30.26 ± 30.24). Distress followed a U-shaped trajectory: peaking early (52.68), declining in the mid (29.85) and late stages (28.26), then rising near death (53.05) (EMMs). Statistically significant changes included declines from early to mid-stage (β = −22.84, p = 0.007) and increases from late stage to near death (β = 24.79, p = 0.003). Distress increased most from late stage to near death in fear of suffering and death (β = 27.38, p = 0.006) and declined most from early to mid-stage in fear of dying (β = 28.01, p = 0.007). Higher distress correlated with shorter survival; each one-point increase in distress was linked to a 6.97% reduction in survival time (time ratio = 0.930, β = −0.070, p < 0.001).
Significance of results
Psychosocial distress peaks in early palliative care and near death and is associated with reduced survival. Support should prioritize fears of suffering and pain during these stages, address fear of the dying process earlier, and remain attentive to persistent concerns such as loss of time and opportunity.
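For readers less familiar with accelerated failure time models: in the Weibull model used here, the time ratio is exp(β), which is how a per-point percentage reduction in survival time is obtained from the coefficient. A quick check with the rounded coefficient (the reported 0.930 and 6.97% evidently come from an unrounded value):

```python
import math

# Weibull AFT interpretation: time ratio = exp(beta).
beta = -0.070                           # rounded coefficient reported above
time_ratio = math.exp(beta)             # about 0.932 with the rounded beta
pct_reduction = (1 - time_ratio) * 100  # about 6.8% per one-point increase
# The reported 0.930 / 6.97% correspond to the unrounded coefficient.
```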
Depression is the most common psychiatric disorder among patients with end-stage renal disease (ESRD), yet the risk factors for mortality in this population remain unclear.
Aims
To identify risk factors for mortality in ESRD patients with depression and assess the incidence of suicide attempts.
Method
We used Taiwan’s National Health Insurance Research Database to identify adult patients who initiated maintenance dialysis between 1997 and 2012. Two ESRD cohorts were established at a depression-to-non-depression ratio of 1:8, matched by age and gender (n = 3289 with depression; n = 26 312 without depression). Outcomes included all-cause mortality and suicide attempts, with additional subgroup analyses by baseline depression severity.
Results
ESRD patients with depression had a higher mortality risk (hazard ratio 1.15, 95% CI: 1.10–1.21) than those without. Risk factors for mortality included male gender, older age, diabetes and cardiovascular disease. Patients with depression also had a higher risk of suicide attempts (hazard ratio 3.02, 95% CI: 1.68–5.42). ESRD patients with severe depression had a significantly higher rate of hospital admissions for depression compared to those with non-severe depression (incidence rate ratio (IRR): 1.82, 95% CI: 1.14–2.93). Furthermore, severe depression was associated with a significantly higher mortality rate compared with no depression (IRR: 1.42, 95% CI: 1.15–1.76).
Conclusions
Depression is linked to poor survival in ESRD patients, with underlying comorbidities playing a key role in mortality. Given the increased risk of mortality, suicide attempts and hospital admissions, these high-risk patients require enhanced medical attention, particularly those with severe depression.
This study examines gender bias in the investigative work of medieval inquisitors, focusing on Albert of Castellario’s trial of the Waldensians in Giaveno, Italy, in 1335. Drawing upon advancements in sociological and criminological literature, we conceptualize an inquisitorial trial as a discretionary information-gathering endeavor contingent upon the inquisitor’s judgment in deciding which leads to pursue. Employing social network analysis and survival methods, we evaluate whether Albert demonstrated gender biases in his investigative decisions, particularly regarding the weight assigned to testimonies from men versus women. Our findings demonstrate that Albert was more inclined to investigate men and prioritize their testimonies, even where similar levels of incriminating evidence were present for both genders. These results highlight the influence of societal attitudes toward gender on inquisitorial practices, on the representativeness of historical records, and on prevailing understandings of heretical groups. Furthermore, this study underscores the broader utility of our methodological framework for addressing related historical inquiries, including the political motivations behind the medieval inquisition.
The Cox duration model serves as the basis for more complex duration models like competing risks, repeated events, and multistate models. These models make a number of assumptions, many of which can be assessed empirically, sometimes for substantive ends. We use Monte Carlo simulations to show the order in which practitioners assess these assumptions can impact the model’s final specification, and ultimately, can produce misleading inferences. We focus on three assumptions regarding model specification decisions: proportional hazards (PH), stratified baseline hazards, and stratum-specific covariate effects. Our results suggest checking the PH assumption before checking for stratum-specific covariate effects tends to produce the correct final specification most frequently. We reexamine a recent study of the timing of GATT/WTO applications to illustrate our points.
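To make the specification checks concrete, here is a minimal, self-contained sketch (not the authors' code): a one-covariate Cox partial-likelihood fit by Newton-Raphson, followed by an informal PH diagnostic that correlates Schoenfeld residuals with event time, in the spirit of the Grambsch–Therneau test. Tied event times are assumed away, and the data in the usage note are invented.

```python
import math

# Sketch: one-covariate Cox fit plus an informal PH check.

def cox_fit(times, events, x, n_iter=25):
    """Partial-likelihood estimate of beta via Newton-Raphson
    (single covariate, no tied event times)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    beta = 0.0
    for _ in range(n_iter):
        score, info = 0.0, 0.0
        for idx, i in enumerate(order):
            if not events[i]:
                continue
            risk = order[idx:]                      # risk set at this event
            w = [math.exp(beta * x[j]) for j in risk]
            tot = sum(w)
            xbar = sum(wj * x[j] for wj, j in zip(w, risk)) / tot
            x2bar = sum(wj * x[j] ** 2 for wj, j in zip(w, risk)) / tot
            score += x[i] - xbar                    # score contribution
            info += x2bar - xbar ** 2               # information contribution
        beta += score / info                        # Newton step
    return beta

def schoenfeld_time_corr(times, events, x, beta):
    """Pearson correlation of Schoenfeld residuals with event time;
    values far from 0 hint at a proportional-hazards violation."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts, rs = [], []
    for idx, i in enumerate(order):
        if not events[i]:
            continue
        risk = order[idx:]
        w = [math.exp(beta * x[j]) for j in risk]
        tot = sum(w)
        xbar = sum(wj * x[j] for wj, j in zip(w, risk)) / tot
        ts.append(times[i])
        rs.append(x[i] - xbar)                      # Schoenfeld residual
    n = len(ts)
    mt, mr = sum(ts) / n, sum(rs) / n
    cov = sum((t - mt) * (r - mr) for t, r in zip(ts, rs))
    vt = sum((t - mt) ** 2 for t in ts)
    vr = sum((r - mr) ** 2 for r in rs)
    return cov / math.sqrt(vt * vr)
```

The authors' point is about ordering: run a PH check like this first, before testing for stratified baselines or stratum-specific effects, since a PH violation can masquerade as either.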
To examine the potential indirect effect of meal frequency on mortality via obesity indices.
Design:
Prospective cohort study.
Setting:
Korean Genome and Epidemiology Study.
Participants:
This cohort study involved 148 438 South Korean adults aged 40 years and older.
Results:
Meal frequency at the baseline survey was assessed using a validated FFQ. Outcomes included all-cause mortality, cancer mortality and CVD mortality. Cox proportional hazards regression models were employed to examine the relationship between meal frequency and the risk of mortality. Mediation analyses were performed with changes in obesity indices (BMI and waist circumference (WC)) as mediators. In comparison to the three-times-per-day group, the once-per-day and four-times-per-day groups had a higher risk of all-cause mortality. The irregular-frequency group had a higher risk of CVD mortality. Both the once-per-day and four-times-per-day groups exhibited higher risks of cancer mortality. The effect of meal frequency on all-cause mortality was partially mediated by WC. Similar mediation effects were found for cause-specific mortality.
Conclusions:
The data suggest that eating three meals per day is associated with lower mortality and longer life expectancy compared with other meal frequencies. Increased waist circumference partially mediates this effect. These findings support a strategy that addresses meal frequency and weight reduction together.
Developmental studies of mental disorders based on epidemiological data often rely on cross-sectional retrospective surveys. Under such designs, observations are right-censored, causing underestimation of lifetime prevalences and correlations, and inducing bias in latent trait models on the observations. In this paper we propose a Partial Likelihood (PL) method to estimate unbiased IRT models of lifetime predisposition to develop a certain outcome. A two-step estimation procedure corrects the IRT likelihood of outcome appearance with a function depending on (a) projected outcome frequencies at the end of the risk period, and (b) outcome censoring status at the time of the observation. Simulation results showed that the PL method yielded good recovery of true frequencies and intercepts. Slopes were best estimated when events were sufficiently correlated. When PL was applied to lifetime mental health disorders (assessed in the ESEMeD project surveys), estimated univariate prevalences were, on average, 1.4 times above raw estimates, and 2.06 times higher in the case of bivariate prevalences.
Many large-scale standardized tests are intended to measure skills related to ability rather than the rate at which examinees can work. Time limits imposed on these tests make it difficult to distinguish between the effect of low proficiency and the effect of lack of time. This paper proposes a mixture cure-rate model approach to address this issue. Maximum likelihood estimation is proposed for parameter and variance estimation for three cases: when examinee parameters are to be estimated given precalibrated item parameters, when item parameters are to be calibrated given known examinee parameters, and when item parameters are to be estimated without assuming known examinee parameters. Large-sample properties are established for the cases under suitable regularity conditions. Simulation studies suggest that the proposed approach is appropriate for inferences concerning model parameters. In addition, not distinguishing between the effect of low proficiency and the effect of lack of time is shown to have considerable consequences for parameter estimation. A real data example is presented to demonstrate the new model. Choice of survival models for the latent power times is also discussed.
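A hedged sketch of the mixture cure-rate idea in this timed-test setting (an illustrative parameterization, not the paper's model): with probability pi an examinee lacks the proficiency to ever solve the item (the "cured" fraction, in survival terms); otherwise the latent solution time is exponential with rate lam, and the time limit d right-censors slow solvers. The likelihood then separates "could not solve" from "ran out of time". All names and inputs are hypothetical.

```python
import math

def cure_loglik(pi, lam, solved_times, n_unsolved, d):
    """Log-likelihood of a cure-rate model for one timed item:
    solvers contribute (1 - pi) * f0(t); examinees who hit the limit
    contribute pi + (1 - pi) * S0(d)."""
    ll = sum(math.log((1 - pi) * lam) - lam * t for t in solved_times)
    ll += n_unsolved * math.log(pi + (1 - pi) * math.exp(-lam * d))
    return ll

def fit_grid(solved_times, n_unsolved, d):
    """Crude grid-search MLE over (pi, lam); fine for a demonstration."""
    best = (-float("inf"), None, None)
    for i in range(1, 50):
        for j in range(1, 41):
            pi, lam = i / 50, j / 10
            ll = cure_loglik(pi, lam, solved_times, n_unsolved, d)
            if ll > best[0]:
                best = (ll, pi, lam)
    return best[1], best[2]
```

Ignoring the mixture (treating every unsolved item as incorrect) conflates the two explanations, which is exactly the confounding the abstract warns about.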
A version of the discrete proportional hazards model is developed for psychometrical applications. In such applications, a primary covariate that influences failure times is a latent variable representing a psychological construct. The Metropolis-Hastings algorithm is studied as a method for performing marginal likelihood inference on the item parameters. The model is illustrated with a real data example that relates the age at which teenagers first experience various substances to the latent ability to avoid the onset of such behaviors.
Classification and Regression Trees (CART) and their successors, bagging and random forests, are statistical learning tools that are receiving increasing attention. However, due to characteristics of censored data collection, standard CART algorithms are not immediately transferable to the context of survival analysis. Questions about the occurrence and timing of events arise throughout the psychological and behavioral sciences, especially in longitudinal studies. The prediction power and other key features of tree-based methods are promising in studies where an event occurrence is the outcome of interest. This article reviews existing tree algorithms designed specifically for censored responses as well as recently developed survival ensemble methods, and introduces available computer software. Through simulations and a practical example, the merits and limitations of these methods are discussed. Suggestions are provided for practical use.
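As a sketch of what drives many of these tree algorithms, a candidate split is typically scored by the two-sample log-rank statistic between the resulting child nodes, with the split maximizing the statistic chosen. A minimal pure-Python version of that criterion (standard variance term, no further tie correction; illustrative only, with invented data in the test):

```python
import math

def logrank_stat(times, events, group):
    """Standardized two-sample log-rank statistic (approximately N(0,1)
    under the null of equal hazards); group[i] is 0 or 1."""
    event_times = sorted({t for t, e in zip(times, events) if e})
    obs_minus_exp, var = 0.0, 0.0
    for t in event_times:
        at_risk = [g for tt, g in zip(times, group) if tt >= t]
        n = len(at_risk)
        n1 = sum(1 for g in at_risk if g == 1)
        d = sum(1 for tt, e in zip(times, events) if e and tt == t)
        d1 = sum(1 for tt, e, g in zip(times, events, group)
                 if e and tt == t and g == 1)
        obs_minus_exp += d1 - d * n1 / n          # observed minus expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp / math.sqrt(var)
```

A survival tree would evaluate this statistic for every candidate split of every covariate and grow the node with the strongest separation.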
The analysis of insurance and annuity products issued on multiple lives requires statistical models that account for lifetime dependence. This paper presents a Dirichlet process mixture-based approach to modeling dependent lifetimes within a group, such as married couples, accounting for individual as well as group-specific covariates. The model is analyzed in a fully Bayesian setting and illustrated by jointly modeling the lifetimes of male–female couples in a portfolio of joint and last survivor annuities of a Canadian life insurer. The inferential approach accounts for right censoring and left truncation, which are common features of data in survival analysis. The model shows improved in-sample and out-of-sample performance compared to traditional approaches assuming independent lifetimes, and offers additional insights into the determinants of the dependence between lifetimes and their impact on joint and last survivor annuity prices.