Jayakrishnan Nair, Indian Institute of Technology, Bombay; Adam Wierman, California Institute of Technology; Bert Zwart, Stichting Centrum voor Wiskunde en Informatica (CWI), Amsterdam
An introduction to the emergence of heavy-tailed distributions in the context of additive processes. Stable distributions are introduced and the generalized central limit theorem is presented. Further, an example of the emergence of heavy tails in random walks is included.
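To make the scaling behaviour concrete, the following sketch (not taken from the chapter; the tail index, sample sizes, and seed are illustrative choices) contrasts sums of heavy-tailed Pareto variables, which require the n^(1/alpha) normalization of the generalized central limit theorem, with light-tailed exponential sums that obey the ordinary sqrt(n) scaling.

```python
# Minimal simulation sketch (not from the book): sums of i.i.d. Pareto
# variables with infinite variance scale like n**(1/alpha), in line with
# the generalized central limit theorem, rather than like sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5            # tail index in (1, 2): finite mean, infinite variance
n, reps = 5_000, 2_000

# Pareto samples with P(X > x) = x**(-alpha) for x >= 1
x = rng.pareto(alpha, size=(reps, n)) + 1.0
sums = x.sum(axis=1)

# Centre by the mean and rescale by n**(1/alpha); the spread across
# replications is non-degenerate and heavy-tailed (stable-like).
mean = alpha / (alpha - 1.0)
rescaled = (sums - n * mean) / n ** (1.0 / alpha)
print("rescaled Pareto-sum quantiles:", np.quantile(rescaled, [0.5, 0.9, 0.99]))

# Contrast: light-tailed (exponential) sums rescaled by sqrt(n) stay Gaussian.
y = rng.exponential(1.0, size=(reps, n)).sum(axis=1)
print("CLT-rescaled exponential-sum quantiles:",
      np.quantile((y - n) / np.sqrt(n), [0.5, 0.9, 0.99]))
```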
Jayakrishnan Nair, Indian Institute of Technology, Bombay; Adam Wierman, California Institute of Technology; Bert Zwart, Stichting Centrum voor Wiskunde en Informatica (CWI), Amsterdam
An introduction to the class of subexponential distributions and the important properties of this class, including the catastrophe principle. Examples applying subexponential distributions to random sums and random walks are included.
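The catastrophe principle can be seen directly in simulation. The sketch below (assumptions mine: a Pareto tail with index 1.5 and Monte Carlo estimation) checks that P(X1 + X2 > t) approaches P(max(X1, X2) > t) as t grows, the "one big jump" behaviour characteristic of subexponential distributions.

```python
# Simulation sketch of the catastrophe principle: for subexponential
# summands, a large sum is almost always caused by a single large term.
import numpy as np

rng = np.random.default_rng(1)
alpha, reps = 1.5, 2_000_000
x = rng.pareto(alpha, size=(reps, 2)) + 1.0   # two i.i.d. Pareto(alpha) terms

for t in (10.0, 50.0, 200.0):
    p_sum = np.mean(x.sum(axis=1) > t)
    p_max = np.mean(x.max(axis=1) > t)
    print(f"t={t:>6}: P(sum>t)={p_sum:.2e}  P(max>t)={p_max:.2e}  "
          f"ratio={p_sum / p_max:.3f}")
# The ratio approaches 1 as t grows, the defining property of the
# subexponential class.
```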
Heavy tails – extreme events or values more common than expected – emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate with confidence. Over 100 exercises complete this engaging package.
This chapter gives a brief overview of Bayesian hypothesis testing. We first describe a standard Bayesian analysis of a single binomial response, going through the prior distribution choice and explaining how the posterior is calculated. We then discuss Bayesian hypothesis testing using the Bayes factor, a measure of how much the posterior odds of believing in one hypothesis change from the prior odds. We show, using a binomial example, how the Bayes factor may be highly dependent on the prior distribution, even with extremely large sample sizes. We next discuss Bayesian hypothesis testing using decision theory, reviewing the intrinsic discrepancy of Bernardo, as well as the loss functions proposed by Freedman. Freedman’s loss functions allow the posterior belief in the null hypothesis to equal the p-value. We next discuss well-calibrated null preferences priors, which, when applied to parameters from the natural exponential family (binomial, negative binomial, Poisson, normal), also give a posterior belief in the null hypothesis equal to valid one-sided p-values, and give credible intervals equal to valid confidence intervals.
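As a concrete companion to the binomial example, the sketch below (data and prior parameters are hypothetical, not taken from the chapter) computes the conjugate beta posterior and the Bayes factor for a point null H0: p = 0.5 against a Beta(a, b) alternative.

```python
# A hedged sketch (data and prior are hypothetical) of a beta-binomial
# Bayesian analysis: conjugate posterior, credible interval, and the Bayes
# factor comparing H0: p = 0.5 with H1: p ~ Beta(a, b).
from math import exp, lgamma

from scipy.special import betaln
from scipy.stats import beta, binom

y, n = 14, 20            # hypothetical data: 14 successes in 20 trials
a, b = 1.0, 1.0          # Beta(1, 1), i.e. uniform, prior on p

# Conjugacy: the posterior is Beta(a + y, b + n - y).
post = beta(a + y, b + n - y)
print("posterior mean:", post.mean())
print("95% credible interval:", post.interval(0.95))

def log_marginal_h1(y, n, a, b):
    """Log marginal likelihood under H1 (beta-binomial probability of y)."""
    log_choose = lgamma(n + 1) - lgamma(y + 1) - lgamma(n - y + 1)
    return log_choose + betaln(a + y, b + n - y) - betaln(a, b)

# Bayes factor BF01 = P(data | H0) / P(data | H1); values below 1 favour H1.
bf01 = exp(binom.logpmf(y, n, 0.5) - log_marginal_h1(y, n, a, b))
print("Bayes factor BF01:", bf01)
```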
In this chapter, we study risks associated with movements of interest rates in financial markets. We begin with a brief discussion of the term structure of interest rates. We then discuss commonly used interest rate sensitive securities. This is followed by the study of different measures of sensitivity to interest rates, including duration and convexity. We consider mitigating interest rate risk through hedging and immunization. Finally, we take a more in-depth look at the drivers of interest rate term structure dynamics.
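The duration and convexity measures have compact closed forms for a level-coupon bond under a flat yield; the sketch below (face value, coupon, and yield are illustrative choices, not from the chapter) computes them and checks the second-order price-change approximation against an exact repricing.

```python
# Duration and convexity sketch for an annual-coupon bond priced at a flat
# yield y (illustrative numbers only).
def bond_metrics(face, coupon_rate, years, y):
    cashflows = [(t, face * coupon_rate + (face if t == years else 0.0))
                 for t in range(1, years + 1)]
    price = sum(cf / (1 + y) ** t for t, cf in cashflows)
    macaulay = sum(t * cf / (1 + y) ** t for t, cf in cashflows) / price
    modified = macaulay / (1 + y)
    convexity = sum(t * (t + 1) * cf / (1 + y) ** (t + 2)
                    for t, cf in cashflows) / price
    return price, macaulay, modified, convexity

price, mac, mod, conv = bond_metrics(face=100, coupon_rate=0.05, years=10, y=0.04)
print(f"price={price:.2f}  Macaulay={mac:.2f}  modified={mod:.2f}  convexity={conv:.2f}")

# Second-order approximation of the price change for a yield shift dy:
# dP/P is approximately -modified*dy + 0.5*convexity*dy**2
dy = 0.01
approx = price * (-mod * dy + 0.5 * conv * dy ** 2)
exact = bond_metrics(100, 0.05, 10, 0.04 + dy)[0] - price
print(f"approx change={approx:.3f}  exact change={exact:.3f}")
```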
The chapter addresses testing when using models. We review linear models, generalized linear models, and proportional odds models, including issues such as checking model assumptions and separation (e.g., when one covariate completely predicts a binary response). We discuss the Neyman–Scott problem, that is, when bias for a fixed parameter estimate can result when the number of nuisance parameters grows with the sample size. With clustered data, we compare mixed effects models and marginal models, pointing out that for logistic regression and other models the fixed effect estimands are different in the two types of models. We present simulations showing that many models may be interpreted as a multiple testing situation, and adjustments should often be made if testing for many effects in a model. We discuss model selection using methods such as Akaike’s information criterion, the lasso, and cross-validation. We compare different model selection processes and their effect on the Type I error rate for a parameter from the final chosen model.
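As a small illustration of cross-validation-based selection (simulated data; scikit-learn's LassoCV is used here as one concrete implementation, a choice of mine rather than the chapter's), the sketch below recovers a sparse set of true effects:

```python
# Lasso with cross-validated penalty choice on simulated data with only two
# nonzero true coefficients.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])   # only two real effects
y = X @ beta + rng.standard_normal(n)

fit = LassoCV(cv=5).fit(X, y)
print("chosen penalty:", fit.alpha_)
print("indices of nonzero coefficients:", np.flatnonzero(fit.coef_))
```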
In this chapter, we present the frequency-severity model, which is implicit in common risk calculations used in practice. In this model, the total loss from a risk, or set of risks, is treated as a random sum of random, identically distributed individual losses. If the frequency and severity random variables are independent, then the mean and variance of the aggregate loss can easily be calculated from the moments of the frequency and severity distributions. However, numerical methods are usually required for other metrics, such as quantiles or expected shortfall. We show how to implement these methods and discuss the limitations of this type of model, arising from the independence assumptions.
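For instance, under a compound Poisson version of the model with lognormal severities (distributions and parameters chosen here purely for illustration), the moment formulas E[S] = λE[X] and Var[S] = λE[X²] can be verified against Monte Carlo, while quantiles and expected shortfall come from the simulated distribution itself:

```python
# Monte Carlo sketch of the frequency-severity model S = X1 + ... + XN with
# independent Poisson frequency N and i.i.d. lognormal severities Xi.
import numpy as np

rng = np.random.default_rng(2)
lam = 3.0                      # Poisson frequency mean
mu, sigma = 0.0, 1.0           # lognormal severity parameters
reps = 100_000

counts = rng.poisson(lam, size=reps)
totals = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

# Closed-form moments for a compound Poisson sum:
# E[S] = lam * E[X],  Var[S] = lam * E[X**2]
ex = np.exp(mu + sigma**2 / 2)
ex2 = np.exp(2 * mu + 2 * sigma**2)
print("mean (theory, MC):", lam * ex, totals.mean())
print("var  (theory, MC):", lam * ex2, totals.var())

# Quantiles and expected shortfall need the simulated (or otherwise
# numerically computed) distribution of S.
q99 = np.quantile(totals, 0.99)
es99 = totals[totals >= q99].mean()
print("99% VaR:", q99, " 99% expected shortfall:", es99)
```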
This chapter covers paired data, such as comparing responses before and after a treatment in one group of individuals. The sign test (also called the exact McNemar’s test when responses are binary) is compared to a median test on the differences of responses within pairs, and we show that the sign test is often more appropriate. We give confidence intervals compatible with the sign test. We discuss parameters associated with the Wilcoxon signed-rank test (often more powerful than the sign test) and assumptions needed to give associated confidence intervals. When we can assume a symmetric distribution on the differences within pairs, the t-test is another option, and we discuss asymptotic relative efficiency for choosing between the t-test and Wilcoxon signed-rank test. We compare parameterizing the treatment effect as differences or ratios. We discuss tests using Pearson’s and Spearman’s correlation and Kendall’s tau, and present confidence intervals assuming normality. When the paired data represent different assays or raters, then agreement coefficients are needed (e.g., Cohen’s kappa, or Lin’s concordance correlation coefficient).
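For a concrete comparison, the sketch below (the paired measurements are invented for illustration) runs the sign test, the Wilcoxon signed-rank test, and the paired t-test on the same within-pair differences.

```python
# Three paired analyses on the same made-up before/after data.
import numpy as np
from scipy.stats import binomtest, wilcoxon, ttest_rel

before = np.array([12.1, 9.8, 11.4, 10.2, 13.5, 9.1, 10.8, 12.9, 11.1, 10.5])
after  = np.array([11.0, 9.9, 10.1,  9.5, 12.2, 9.0, 10.1, 11.5, 10.8, 10.0])
d = after - before

# Sign test: count of negative differences vs Binomial(n, 1/2) under the null
neg = int(np.sum(d < 0))
n_nonzero = int(np.sum(d != 0))
print("sign test p-value:", binomtest(neg, n_nonzero, 0.5).pvalue)

# Wilcoxon signed-rank test (uses the magnitudes of the differences as well)
print("Wilcoxon signed-rank p-value:", wilcoxon(after, before).pvalue)

# Paired t-test, appropriate if the differences are roughly symmetric/normal
print("paired t-test p-value:", ttest_rel(after, before).pvalue)
```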
In this chapter, we consider qualitative and quantitative aspects of risk related to the development, implementation, and uses of quantitative models in enterprise risk management (ERM). First, we discuss the different ways that model risk arises, including defective models, inappropriate applications, and inadequate or inappropriate interpretation of the results. We consider the lifecycle of a model – from development, through regular updating and revision, to the decommissioning stage. We review quantitative approaches to measuring model and parameter uncertainty, based on a Bayesian framework. Finally, we discuss some aspects of model governance, and some potential methods for mitigating model risk.
This chapter defines statistical hypothesis tests mathematically. Those tests assume two sets of probability models, called the null and alternative hypotheses. A decision rule is a function that depends on the data and a specified α-level and determines whether or not to reject the null hypothesis. We define concepts related to properties of hypothesis tests such as Type I and II error rates, validity, size, power, invariance, and robustness. The definitions are general but are explained with examples such as testing a binomial parameter, or Wilcoxon–Mann–Whitney tests. P-values are defined as the smallest α-level for observed data for which we would reject the null at that level and all larger levels. Confidence sets and confidence intervals are defined in relation to a series of hypothesis tests with changing null hypotheses. Compatibility between p-value functions and confidence intervals is defined, and an example with Fisher’s exact test shows that compatibility is not always present for some common tests.
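The duality between tests and confidence intervals can be illustrated with an exact binomial test: inverting a family of one-sided tests over a grid of null values recovers a Clopper–Pearson bound (the counts and grid below are illustrative choices, not from the chapter).

```python
# Confidence bound by test inversion: the 97.5% upper bound for a binomial
# proportion is the largest null value p0 not rejected by a one-sided exact
# test at level 0.025.
import numpy as np
from scipy.stats import binomtest

y, n, alpha = 7, 20, 0.025

def pval(p0):
    # exact one-sided p-value for H0: p >= p0 versus H1: p < p0
    return binomtest(y, n, p0, alternative="less").pvalue

grid = np.linspace(1e-6, 1 - 1e-6, 2000)
upper = max(p0 for p0 in grid if pval(p0) >= alpha)
print("upper bound by test inversion:", upper)

# This matches the upper endpoint of the two-sided 95% Clopper-Pearson
# interval, which combines two one-sided 97.5% bounds.
ci = binomtest(y, n).proportion_ci(confidence_level=0.95, method="exact")
print("Clopper-Pearson 95% interval:", (ci.low, ci.high))
```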
In this chapter, we review the different methods available to a firm that wants to transfer risk. First, we consider the traditional route of insurance, or reinsurance. We describe the different types of insurance contracts, and analyse their advantages and disadvantages. We then consider captive insurance companies, which are insurance companies that are owned by the organization that is transferring risk. Next, we discuss securitization of risk, where risk is packaged into investments that are sold off in the capital markets. One of the most interesting examples of securitized insurance risk is the catastrophe bond, or cat bond. We also look at examples of securitization of demographic risk, through pandemic bonds and longevity derivatives.
This chapter deals with either clustering, where every individual within each cluster has the same treatment, or stratification, where there are individuals with different treatments within each stratum. For the studies with clustering, we compare two individual-level analysis methods (generalized estimating equations and random effects models), and a cluster-level analysis (performing a t-test on the means from each cluster). We simulate cluster analyses when the effect is or is not related to cluster size. In the stratification context, we explore Simpson’s paradox, where the direction of the within-stratum effects is different from the direction of the overall effect. We show that the appropriate analysis of data consistent with Simpson’s paradox should adjust for the strata or not, depending on the study design. We discuss the stratification-adjusted tests of Mantel and Haenszel, van Elteren, and quasi-likelihood binomial or Poisson models. We compare meta-analysis using fixed effects or random effects (e.g., the DerSimonian–Laird method). Finally, we describe confidence intervals for directly standardized rates.
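A minimal numerical illustration of Simpson's paradox (the counts below are invented): treatment outperforms control within each stratum yet looks worse when the strata are pooled, because treatment is concentrated in the harder stratum.

```python
# Toy example of Simpson's paradox with invented counts.
# counts: (successes, total) by stratum and arm
data = {
    "easy stratum": {"treatment": (8, 10),   "control": (70, 100)},
    "hard stratum": {"treatment": (30, 100), "control": (2, 10)},
}

for stratum, arms in data.items():
    pt = arms["treatment"][0] / arms["treatment"][1]
    pc = arms["control"][0] / arms["control"][1]
    print(f"{stratum}: treatment {pt:.2f} vs control {pc:.2f}")   # treatment better

def pooled(arm):
    wins = sum(data[s][arm][0] for s in data)
    total = sum(data[s][arm][1] for s in data)
    return wins / total

# Pooled over strata, the direction reverses: control looks better.
print(f"pooled: treatment {pooled('treatment'):.2f} vs control {pooled('control'):.2f}")
```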
This chapter first describes group sequential methods, where interim tests of a study are done and the study may be stopped either for efficacy (if a large enough early treatment effect is seen) or for futility (if it is unlikely that a treatment effect will be significant if the study goes to completion). We compare two methods for group sequential analysis with equally spaced looks, the Pocock and the O’Brien–Fleming methods, both based on the Brownian motion model. Flexible versions of these methods are developed using the alpha spending function approach, where the decision to perform an interim analysis may be based on information independent of the study up to that point. We discuss adjustments when the Brownian motion model assumption does not hold, and estimation and confidence intervals after stopping early. Next, we discuss the Bauer–Köhne and Proschan–Hunsberger two-stage adaptive methods, which bound the Type I error rate. These methods partition the study into two stages. The Stage 1 data allow three decisions: (1) stop and declare significance, (2) stop for futility, or (3) continue the study with the sample size for the second stage based on the first-stage data.
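A quick Monte Carlo sketch (the boundary constants are the commonly tabulated approximate values; the simulation setup is my own) shows how both boundary shapes hold the overall two-sided Type I error near 0.05 for K = 4 equally spaced looks under the Brownian motion model.

```python
# Check, by simulation under the null, that Pocock and O'Brien-Fleming
# boundaries for K = 4 equally spaced looks give overall two-sided Type I
# error close to 0.05 (boundary constants quoted approximately from
# standard tables).
import numpy as np

rng = np.random.default_rng(3)
K, reps = 4, 200_000
increments = rng.standard_normal((reps, K))
s = np.cumsum(increments, axis=1)                    # partial sums under H0
z = s / np.sqrt(np.arange(1, K + 1))                 # z-statistic at each look

pocock = np.full(K, 2.361)                           # flat boundary
obf = 2.024 * np.sqrt(K / np.arange(1, K + 1))       # steep early, ~2.02 at end

for name, bound in [("Pocock", pocock), ("O'Brien-Fleming", obf)]:
    reject = (np.abs(z) >= bound).any(axis=1)
    print(f"{name:>16}: estimated overall Type I error = {reject.mean():.4f}")
```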
This chapter deals with studies with k groups. When the k groups are unordered, we use k-sample tests, and when the k groups are ordered, we use trend tests. For k-sample tests with categorical responses, we describe the chi-squared test and Fisher’s exact test (also called the Freeman–Halton test). For k-sample tests with numeric responses, we cover one-way ANOVA, the studentized range test, and the Kruskal–Wallis test. For the one-way ANOVA and studentized range tests, we give some associated effect parameters and their confidence intervals. For trend tests, we describe the Cochran–Armitage trend test for binary responses, and the Jonckheere–Terpstra test for numeric responses. We discuss the familywise error rate when performing follow-up tests of pairwise comparisons in the k-sample studies. When k = 3, after rejecting the k-sample test, subsequent pairwise tests may be done without correction; otherwise (k > 3), corrections are necessary for strongly controlling the Type I error rate (e.g., for all pairwise comparisons: the Tukey–Kramer, Tukey–Welch, and studentized range procedures; or for many-to-one comparisons: Dunnett’s procedure).
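As a brief illustration (simulated data; scipy functions), the sketch below applies two of the k-sample tests to three unordered groups:

```python
# One-way ANOVA and Kruskal-Wallis on three simulated groups (k = 3).
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(4)
g1 = rng.normal(0.0, 1.0, 30)
g2 = rng.normal(0.4, 1.0, 30)
g3 = rng.normal(0.8, 1.0, 30)

print("one-way ANOVA p-value:  ", f_oneway(g1, g2, g3).pvalue)
print("Kruskal-Wallis p-value: ", kruskal(g1, g2, g3).pvalue)

# With k = 3, follow-up pairwise tests after a rejected k-sample test can be
# run without further multiplicity correction; for k > 3 a procedure such as
# Tukey-Kramer is needed to control the familywise error rate.
```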