Most real-life populations are heterogeneous, and homogeneity is often just a simplifying assumption for the relevant statistical analysis. Mixtures of lifetime distributions that correspond to homogeneous subpopulations have been studied intensively in the literature, and various distributional and stochastic properties of finite and continuous mixtures have been discussed. In this paper, following recent publications, we further develop a mixture concept in the form of generalized α-mixtures, which include all mixture models that are widely explored in the literature. We study the main stochastic properties of the suggested mixture model, namely aging and appropriate stochastic comparisons. Relevant examples and counterexamples are given to illustrate our findings.
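For orientation, a commonly used definition of the α-mixture of survival functions from the earlier literature is sketched below; the generalized α-mixture studied in this paper may extend or modify this form, so the formula is offered as background rather than as the authors' exact model.

```latex
% Sketch: a standard \alpha-mixture of survival functions (power mean of order \alpha).
% The paper's generalized model may differ; this is given only for orientation.
\[
\bar{F}_{\alpha}(t) =
\begin{cases}
\left( \displaystyle\int_{\Theta} \bar{F}^{\alpha}(t \mid \theta)\, \mathrm{d}\Pi(\theta) \right)^{1/\alpha}, & \alpha \neq 0, \\[1.5ex]
\exp\!\left( \displaystyle\int_{\Theta} \ln \bar{F}(t \mid \theta)\, \mathrm{d}\Pi(\theta) \right), & \alpha = 0,
\end{cases}
\]
% Here \bar{F}(t \mid \theta) is the survival function of the subpopulation indexed by \theta
% and \Pi is the mixing distribution. Taking \alpha = 1 recovers the ordinary (arithmetic)
% mixture, and \alpha = -1 gives the harmonic mixture.
```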
In this article, we derive a closed-form pricing formula for catastrophe equity put options under a stochastic interest rate framework. A distinguishing feature of the proposed solution is its simplified form, in contrast to several recently published formulae that require evaluating several layers of infinite sums of $n$-fold convoluted distribution functions. As an application of the proposed formula, we consider two different frameworks and obtain the closed-form formula for the joint characteristic function of the asset price and the losses, which is the only required ingredient in our pricing formula. The prices obtained by the newly derived formula are compared with those obtained using Monte Carlo simulations to show the accuracy of our formula.
In this study, we compared the radiation dose received by organs at risk (OARs) after breast conservation surgery (BCS) and mastectomy in patients with left breast cancer.
Materials and methods
A total of 30 patients, 15 each after BCS and mastectomy, were included in this study. A planning computerised tomography (CT) scan was obtained for each patient. The chest wall, whole breast, heart, lungs, left anterior descending coronary artery (LAD) with its proximal and distal segments, and the contralateral breast were contoured for each patient. Radiotherapy plans were made using standard tangential fields. The prescribed dose was 40 Gy in 16 fractions over 3 weeks. Mean doses to the heart, LAD, proximal LAD and distal LAD; mean dose and V5 of the right lung; mean dose, V5, V10 and V20 of the left lung; and mean dose and V2 of the contralateral breast were calculated for each patient and compared between BCS and mastectomy patients using Student's t-test.
Results
Mean doses to the heart, LAD, proximal LAD and distal LAD were 3.364 Gy, 16.06 Gy, 2.7 Gy and 27.5 Gy for mastectomy patients, and 4.219 Gy, 14.653 Gy, 4.306 Gy and 24.6 Gy for BCS patients, respectively. Left lung mean dose, V5, V10 and V20 were 5.96 Gy, 16%, 14% and 12.4% in mastectomy patients, and 7.69 Gy, 21%, 18% and 16% in BCS patients, respectively. There was no statistically significant difference in the doses to the heart and left lung between mastectomy and BCS. The mean dose to the right lung was significantly lower in mastectomy than in BCS patients (0.29 Gy vs. 0.51 Gy, p = 0.007). The mean dose to the opposite breast was significantly lower in patients with mastectomy than with BCS (0.54 Gy vs. 0.37 Gy, p = 0.007). The dose to the distal LAD was significantly higher than that to the proximal LAD both in BCS (24.6 Gy vs. 4.3 Gy, p < 0.0001) and mastectomy (27.5 Gy vs. 2.7 Gy, p < 0.0001) patients.
Conclusion
There was no significant difference in the doses received by the heart and left lung between BCS and mastectomy patients. Mean doses to the right lung and the contralateral breast were significantly lower in mastectomy patients.
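For illustration only, the following is a minimal sketch (not the authors' actual treatment-planning or statistical workflow) of how mean dose and Vx metrics of the kind reported above could be computed from per-voxel organ dose arrays and compared between the BCS and mastectomy groups with Student's t-test; the dose data below are simulated and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def mean_dose(dose_voxels):
    """Mean dose (Gy) over all voxels of a contoured organ."""
    return float(np.mean(dose_voxels))

def v_x(dose_voxels, threshold_gy):
    """Vx: percentage of the organ volume receiving at least `threshold_gy` Gy."""
    return 100.0 * float(np.mean(dose_voxels >= threshold_gy))

# Hypothetical per-patient left-lung dose arrays (Gy) for the two groups of 15 patients.
rng = np.random.default_rng(0)
bcs_patients = [rng.gamma(2.0, 4.0, size=5000) for _ in range(15)]
mastectomy_patients = [rng.gamma(2.0, 3.0, size=5000) for _ in range(15)]

bcs_v5 = [v_x(d, 5.0) for d in bcs_patients]
mast_v5 = [v_x(d, 5.0) for d in mastectomy_patients]

# Two-sample Student's t-test comparing left-lung V5 between the groups.
t_stat, p_value = stats.ttest_ind(bcs_v5, mast_v5)
print(f"BCS V5 = {np.mean(bcs_v5):.1f}%, mastectomy V5 = {np.mean(mast_v5):.1f}%, p = {p_value:.3f}")
```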
The joint signatures of binary-state and multi-state (semi-coherent or mixed) systems with i.i.d. (independent and identically distributed) binary-state components are considered in this work. For the comparison of pairs of binary-state systems of different sizes, transformation formulas of their joint signatures are derived by using the concept of equivalent systems and a generalized triangle rule for order statistics. Similarly, for facilitating the comparison of pairs of multi-state systems of different sizes, transformation formulas of their multi-state joint signatures are also derived. Some examples are finally presented to illustrate and to verify the theoretical results established here.
The COVID-19 pandemic has exposed the need for more contactless interactions, leading to an acceleration in the design, development, and deployment of digital identity tools and contact-free solutions. A potentially positive outcome of the current crisis could be the development of a more data privacy and human rights compliant framework for digital identity. However, for such a framework to thrive, two essential conditions must be met: (1) respect for and protection of data privacy irrespective of the type of architecture or technology chosen and (2) consideration of the broader impacts that digital identity can have on individuals’ human rights. The article draws on legal, technology-facing, and policy-oriented academic literature to evaluate each of these conditions. It then proposes two ways to leverage the process of digitalization strengthened by the pandemic: a data privacy-centric and a human rights-based approach to digital identity solutions fit for post-COVID-19 societies.
Patient-specific surgical simulations require the patient-specific identification of the constitutive parameters. The sparsity of the experimental data and the substantial noise in the data (e.g., recovered during surgery) cause considerable uncertainty in the identification. In this exploratory work, parameter uncertainty for incompressible hyperelasticity, often used for soft tissues, is addressed by a probabilistic identification approach based on Bayesian inference. Our study particularly focuses on the uncertainty of the model: we investigate how the identified uncertainties of the constitutive parameters behave when different forms of model uncertainty are considered. The model uncertainty formulations range from uninformative ones to more accurate ones that incorporate more detailed extensions of incompressible hyperelasticity. The study shows that incorporating model uncertainty may improve the results, but this is not guaranteed.
This article concerns the tail probabilities of a light-tailed Markov-modulated Lévy process stopped at a state-dependent Poisson rate. The tails are shown to decay exponentially at rates given by the unique positive and negative roots of the spectral abscissa of a certain matrix-valued function. We illustrate the use of our results with an application to the stationary distribution of wealth in a simple economic model in which agents with constant absolute risk aversion are subject to random mortality and income fluctuation.
We propose and analyze a temporal concatenation heuristic for solving large-scale finite-horizon Markov decision processes (MDPs), which divides the MDP into smaller sub-problems along the time horizon and generates an overall solution by simply concatenating the optimal solutions from these sub-problems. As a “black box” architecture, temporal concatenation works with a wide range of existing MDP algorithms. Our main results characterize the regret of temporal concatenation compared to the optimal solution. We provide upper bounds for general MDP instances, as well as a family of MDP instances for which the upper bounds are shown to be tight. Together, our results demonstrate the potential of temporal concatenation for substantial speed-up at the expense of some performance degradation.
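As a rough illustration of the heuristic described above, the sketch below splits a small tabular finite-horizon MDP into consecutive segments, solves each segment independently by backward induction with a zero terminal value (an assumption made here; the paper's exact conventions may differ), and concatenates the per-stage policies.

```python
import numpy as np

def backward_induction(P, R, horizon, terminal_value):
    """Solve a finite-horizon tabular MDP by backward induction.

    P: (A, S, S) transition probabilities, R: (S, A) immediate rewards.
    Returns (per-stage greedy policies, value function at the initial stage).
    """
    value = terminal_value.copy()
    policies = []
    for _ in range(horizon):
        q = R + np.einsum("asx,x->sa", P, value)  # stage Q-values
        policies.append(np.argmax(q, axis=1))     # greedy action per state
        value = np.max(q, axis=1)
    policies.reverse()                            # stage 0 first
    return policies, value

def temporal_concatenation(P, R, horizon, num_segments):
    """Split the horizon into segments, solve each independently
    (zero terminal value assumed for every segment), and concatenate policies."""
    n_states = R.shape[0]
    seg_len = int(np.ceil(horizon / num_segments))
    policy = []
    for start in range(0, horizon, seg_len):
        length = min(seg_len, horizon - start)
        seg_policies, _ = backward_induction(P, R, length, np.zeros(n_states))
        policy.extend(seg_policies)
    return policy  # one greedy policy (array of actions) per stage

# Tiny random example: 4 states, 2 actions, horizon 12 split into 3 segments.
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(4), size=(2, 4))  # (A, S, S)
R = rng.uniform(size=(4, 2))                # (S, A)
heuristic_policy = temporal_concatenation(P, R, horizon=12, num_segments=3)
exact_policy, _ = backward_induction(P, R, 12, np.zeros(4))
```

The gap between the value of `heuristic_policy` and `exact_policy` is the regret that the paper's bounds are concerned with.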
With the increased availability of data and the capacity to make sense of these data, computational approaches to analyze, model and simulate public policy have evolved into viable instruments for deliberating, planning, and evaluating policies in different areas of application. Examples include infrastructure, mobility, monetary, or austerity policies, as well as policies on different aspects of society (health, pandemics, skills, inclusion, etc.). Technological advances, along with the evolution of theoretical models and frameworks, open valuable opportunities while at the same time posing new challenges. This paper investigates the current state of research in the domain and aims to identify the most pressing areas for future research. This is done through both a literature review of policy modeling and an analysis of research and innovation projects that either focus on policy modeling or involve it as a significant component of their research design. In the paper, 16 recent projects involving the keyword policy modeling were analyzed. The majority of the projects concern the application of policy modeling to a specific domain or area of interest, while several projects tackle cross-cutting topics (risk and crisis management). The detailed analysis of the projects led to topics for future research in the domain of policy modeling. The most prominent future research topics include stakeholder involvement approaches, applicability of research results, handling the complexity of models, integration of models from different modeling and simulation paradigms and approaches, visualization of simulation results, real-time data processing, and scalability. These aspects require further research to contribute appropriately to advancing the field.
Data sharing efforts aim to allow underserved groups and organizations to overcome the concentration of power in our data landscape. A few organizations, owing to their data monopolies and resources, are able to decide which problems to solve and how to solve them. Even though data sharing creates a counterbalancing, democratizing force, it must nevertheless be approached cautiously: underserved organizations and groups must navigate difficult barriers related to technological complexity and legal risk. To examine what those common barriers are, one type of data sharing effort—data trusts—is examined, specifically the reports commenting on that effort. To address these practical issues, data governance technologies have a large role to play in democratizing data trusts safely and in a trustworthy manner. Yet technology is far from a silver bullet, and it is dangerous to rely upon it alone. Technology that is no-code, flexible, and secure can, however, help operate data trusts more responsibly. This type of technology helps innovators put relationships at the center of their efforts.
One commonly used analytic technique for examining predictors of a binary outcome (disease/no disease, test positive/test negative, etc.) is logistic regression. As with other types of regression analysis, every predictor added to the regression equation must be present for each case included in the final analysis pool. The most important point when building a model is not to enter all variables in a haphazard fashion; there are specific steps for arriving at the final set of predictors.
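As a minimal sketch of the complete-case requirement described above (the dataset, file name, column names, and predictors are hypothetical, not taken from the chapter), one could restrict the analysis pool to cases with every chosen predictor present before fitting the logistic regression:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: binary outcome "disease" plus candidate predictors
# (assumed numeric or 0/1-coded).
df = pd.read_csv("study_data.csv")
predictors = ["age", "bmi", "smoker"]  # chosen in advance, not entered haphazardly

# Complete-case analysis: every predictor must be present for each included case.
analysis_pool = df.dropna(subset=predictors + ["disease"])

# Fit the logistic regression on the complete cases only.
X = sm.add_constant(analysis_pool[predictors].astype(float))
model = sm.Logit(analysis_pool["disease"].astype(float), X).fit()
print(model.summary())
```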
Scientists are a curious breed. They seek knowledge to advance science, improve health, and develop novel therapies and techniques. Most scientists develop hypotheses and gather evidence to support or refute their scientific suspicions or hypotheses. Therefore, analysis of the data should be intentional and directed at the research question at hand.
This study aimed to investigate the characteristics of uncertainty in illness and the coping styles of patients with severe coronavirus disease 2019 (COVID-19) and to explore their relationship, in order to provide effective guidance for clinical nursing. A cross-sectional survey was used to investigate 56 patients with severe COVID-19 in a designated hospital in Wuhan. A general information questionnaire, the Mishel Uncertainty in Illness Scale for Adults (MUIS-A) and the Medical Coping Modes Questionnaire (MCMQ) were used to collect the data, and a statistical analysis was performed. The total score of the MUIS-A was 66.29 ± 17.25, which was at a low level, while the total score of the MCMQ was 54.16 ± 6.39. The scores for facing and avoiding were significantly higher than the norm. The difference in the yielding dimension among patients with different family economic situations was statistically significant. The total MUIS-A score correlated negatively with the facing and avoiding coping styles and positively with the yielding coping style. The coping style of patients was one of the factors influencing uncertainty in illness. Nursing staff need to pay close attention to the psychological state of their patients, understand their coping styles and actively correct negative coping styles to reduce uncertainty in illness and promote physical as well as mental recovery.
Again, parametric procedures are preferred over non-parametric ones because parametric analyses are more robust in that they use the actual values of the distribution in the analysis. If the data cannot be “normalized” by transforming the distribution to approximate a normal distribution (for example, by taking the log10 of all HIV viral load values), non-parametric tests should be applied to examine your data. Let’s examine some non-parametric approaches to analyzing non-normally distributed data. In general, two tests, the Mann–Whitney U test and the Spearman rank test, fall into this analytic category. In short, the Mann–Whitney U test is the non-parametric equivalent of the t-test, and the Spearman rank test is the non-parametric equivalent of the Pearson correlation.
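A minimal sketch of this decision path, assuming simulated viral-load data and standard SciPy routines (not the chapter's own worked example), might look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical HIV viral load measurements for two groups (copies/mL).
rng = np.random.default_rng(2)
group_a = rng.lognormal(mean=9.0, sigma=1.5, size=40)
group_b = rng.lognormal(mean=8.0, sigma=1.5, size=40)

# Option 1: try to normalize with a log10 transform, then use a parametric test.
log_a, log_b = np.log10(group_a), np.log10(group_b)
print("Shapiro-Wilk p (log10, group A):", stats.shapiro(log_a).pvalue)

# Option 2: if the data cannot be normalized, fall back to non-parametric tests.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)  # non-parametric analogue of the t-test
rho, rho_p = stats.spearmanr(group_a, group_b)      # non-parametric analogue of the Pearson correlation
print(f"Mann-Whitney U p = {u_p:.3f}, Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
```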
This chapter is dedicated mainly to laboratory professionals who need to design tests, starting with sample size and power considerations. It provides multiple examples of calculating sample size and power under circumstances of varying disease prevalence, varying confidence intervals, varying levels of power, and for different test indices. There may be instances when some of these techniques are used by epidemiologists and clinicians, especially when they must evaluate diagnostic tests before implementing testing in the field. Sensitivity, specificity, and positive and negative predictive values are indices of interest to both laboratory and clinical professionals. The chapter also covers concepts that explain differences between commonly used terms such as precision versus reproducibility, validity versus reliability, and inter-lot versus intra-lot variation. For those in the drug development industry, the concepts of pharmacodynamics and pharmacokinetics are also described. Calculations of the coefficient of variation are included.
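As an illustrative sketch of two calculations of the kind the chapter covers, shown here with a common normal-approximation sample-size formula for estimating test sensitivity with a given confidence-interval half-width and a simple coefficient-of-variation calculation, with all inputs hypothetical rather than taken from the chapter:

```python
import math
import numpy as np

def sample_size_for_sensitivity(expected_sens, ci_half_width, prevalence, z=1.96):
    """Approximate number of subjects to enrol so that estimated sensitivity has the
    requested 95% CI half-width, given the expected disease prevalence
    (a commonly used normal-approximation formula)."""
    n_diseased = (z ** 2) * expected_sens * (1 - expected_sens) / ci_half_width ** 2
    return math.ceil(n_diseased / prevalence)

def coefficient_of_variation(values):
    """CV (%) = 100 * sample SD / mean, a unitless measure of assay precision."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Example: sensitivity expected around 90%, +/-5% precision, 10% prevalence.
print(sample_size_for_sensitivity(0.90, 0.05, 0.10))
# Example: replicate measurements of a control sample.
print(round(coefficient_of_variation([10.2, 9.8, 10.1, 10.4, 9.9]), 2))
```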