Risk measures assign a numerical value, quantifying risk, to a random variable representing losses. In this chapter, we introduce several risk measures, including the two most commonly used in risk management: Value at Risk (VaR) and Expected Shortfall. The risk measures are tested for ‘coherence’, based on a list of properties that have been proposed as desirable for risk measures used in internal and regulatory risk assessment. We consider computational issues, including the estimation of risk measures and of their standard errors using Monte Carlo simulation.
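As a rough illustration of the computational issues mentioned above (not taken from the book), the sketch below estimates VaR and Expected Shortfall empirically from simulated losses and uses a bootstrap for the standard errors; the lognormal loss model and all parameter values are assumptions made purely for the example.

```python
# A minimal sketch, not from the book: empirical VaR and Expected Shortfall
# from simulated losses, with bootstrap standard errors. The lognormal loss
# model and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # assumed loss model
alpha = 0.95                                               # confidence level

def var_es(x, a):
    """Empirical VaR (the a-quantile of losses) and Expected Shortfall."""
    var = np.quantile(x, a)
    es = x[x >= var].mean()        # average loss, given the loss exceeds VaR
    return var, es

var_hat, es_hat = var_es(losses, alpha)

# Bootstrap standard errors for both estimators
boot = np.array([var_es(rng.choice(losses, size=losses.size, replace=True), alpha)
                 for _ in range(200)])
var_se, es_se = boot.std(axis=0, ddof=1)

print(f"VaR at {alpha:.0%}: {var_hat:.3f} (s.e. {var_se:.3f})")
print(f"ES  at {alpha:.0%}: {es_hat:.3f} (s.e. {es_se:.3f})")
```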
In this chapter, we consider how individual univariate distributions can be combined into multivariate joint distributions using copula functions. This can be very valuable when a firm is looking to aggregate dependent risks from different business units. We present Sklar’s seminal theorem, which states that, for continuous distributions, every joint distribution can be expressed with a unique copula, and every copula defines a valid joint distribution.
We present some important copulas, both explicit and implicit, and discuss their features. We show how measures of rank dependency can be more informative than traditional correlation. In keeping with our interest in tail behaviour of loss distributions, we consider how different copulas exhibit different dependency in the tails of the marginal distributions.
Finally, we discuss construction and estimation of copulas.
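As an illustration of these ideas (not from the book), the following sketch samples two dependent losses by pushing correlated normals through a Gaussian copula and arbitrary inverse marginals, then compares linear and rank correlation; the marginals and the copula correlation of 0.7 are illustrative assumptions.

```python
# A minimal sketch, not from the book: building a joint distribution from two
# arbitrary marginals with a Gaussian copula, then comparing linear and rank
# correlation. The marginals and the copula correlation are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho = 0.7                                    # assumed copula correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Sample the Gaussian copula: correlated normals mapped to uniforms
z = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)
u = stats.norm.cdf(z)                        # each column ~ Uniform(0, 1)

# 2. Apply inverse marginal distribution functions (Sklar's theorem in action)
x1 = stats.lognorm.ppf(u[:, 0], s=0.5)       # e.g. losses from business unit 1
x2 = stats.expon.ppf(u[:, 1], scale=2.0)     # e.g. losses from business unit 2

# 3. Rank dependence is unchanged by the marginal transformations
tau, _ = stats.kendalltau(x1, x2)
print("Pearson correlation:", np.corrcoef(x1, x2)[0, 1])
print("Kendall's tau      :", tau)           # ~ 2/pi * arcsin(rho) for this copula
```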
In this chapter, we discuss stress and scenario tests and testing frameworks. We begin with an introduction to stress testing and a discussion of where stress and scenario testing is most useful, as well as noting some limitations. This is followed by a study of methods for designing and generating stress scenarios. We then discuss regulator stress tests, and illustrate using examples of past failures and successes of real-world stress tests.
In this appendix, we review the major concepts, notation, and results from probability and statistics that are used in this book. We start with univariate random variables, their distributions, moments, and quantiles. We consider dependent random variables through conditional probabilities and joint density and distribution functions. We review some of the distributions that are most important in the text, including the normal, lognormal, Pareto, uniform, binomial, and Poisson distributions. We outline the maximum likelihood (ML) estimation process, and summarize key properties of ML estimators. We review Bayesian statistics, including the prior, posterior, and predictive distributions. We discuss Monte Carlo simulation, with a particular focus on estimation and uncertainty.
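As a small illustration of the ML estimation process reviewed in the appendix (not taken from the text), the sketch below fits a lognormal distribution by maximum likelihood and checks the scipy fit against the closed-form estimates; the simulated data and parameter values are assumptions for the example.

```python
# A minimal sketch, not from the appendix: maximum likelihood estimation of
# lognormal parameters, checking the scipy fit against the closed-form ML
# estimates (mean and s.d. of the log data). The data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.5, size=5_000)   # assumed "observed" data

# Closed-form ML estimates from the log data
mu_hat, sigma_hat = np.log(x).mean(), np.log(x).std(ddof=0)

# ML fit with scipy (floc=0 keeps the two-parameter lognormal)
shape, loc, scale = stats.lognorm.fit(x, floc=0)
print("mu   :", mu_hat, "vs", np.log(scale))         # both estimate mu
print("sigma:", sigma_hat, "vs", shape)              # both estimate sigma
```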
In this chapter, we present an overview of enterprise risk management (ERM). We begin by discussing the concepts of risk and uncertainty. We then review some of the more important historical developments in different areas of risk management, propose a definition for ERM, and show how ERM has its origins in all of the individual areas of risk management that came before. We discuss how ERM can be implemented as an ongoing process, which is optimally built into the operations of an organization from the top down, through a risk governance framework. Stages of the ERM cycle include risk identification and analysis, risk evaluation, and risk treatment. Each of these stages is introduced in this chapter, and then developed in more detail in subsequent chapters.
In this chapter, we consider how derivatives can be used as part of a risk treatment plan. First, we look at the use of options to limit asset price risk; then we consider derivatives that protect against adverse movements in interest rates, credit risk, and exchange rates. Finally, we briefly consider how commodity derivatives are used to limit the risk from changing commodity prices.
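As a toy illustration of using options to limit asset price risk (not an example from the book), the sketch below tabulates the payoff at expiry of a protective put, i.e. holding the asset together with a put option; the strike, premium, and price grid are assumed values.

```python
# A minimal sketch, not from the book: the payoff at expiry of a protective
# put (hold the asset plus a put option), which caps the downside near the
# strike. The strike, premium, and price grid are assumed values.
import numpy as np

strike, premium = 100.0, 4.0                 # assumed put strike and cost
s_T = np.linspace(60.0, 140.0, 9)            # possible asset prices at expiry

unhedged = s_T
hedged = s_T + np.maximum(strike - s_T, 0.0) - premium   # asset + put - premium

for s, u, h in zip(s_T, unhedged, hedged):
    print(f"S_T = {s:6.1f}   unhedged = {u:6.1f}   with put = {h:6.1f}")
```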
In this chapter, we present some key results from extreme value theory (EVT) and illustrate how EVT can be used to supplement traditional statistical analysis. We use EVT when we are concerned about the impact of very rare, very large losses. Because they are rare, we are unlikely to have much data, but using EVT we can infer the extreme tail behaviour of most distributions.
There are two different but related types of models for extreme value analysis. The first considers the distribution of the maximum value in a random sample of losses. These are called the block maxima models. The second comes from analysing the rare, very large losses, defined as the losses exceeding some high threshold. These are the points over threshold models.
We present the key results for both of these and show how they are connected. We derive formulas for the VaR and Expected Shortfall risk measures using EVT that are useful when the loss distribution is fat-tailed and the tail parameter is close to 1.0. We use examples throughout to highlight the potential uses in practical applications.
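As an illustration of the points over threshold approach (not taken from the book), the sketch below fits a generalised Pareto distribution to excesses over a high threshold and applies the usual GPD tail approximations for VaR and Expected Shortfall; the simulated data, threshold, and confidence level are assumptions, and the notation is not necessarily the book's.

```python
# A minimal sketch, not from the book: a points over threshold analysis.
# Excesses over a high threshold u are fitted with a generalised Pareto
# distribution (GPD), and the usual GPD tail approximations give VaR and
# Expected Shortfall. The data, threshold, and level are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = stats.t.rvs(df=4, size=100_000, random_state=rng)  # fat-tailed toy data
losses = sample[sample > 0]                                 # keep the loss side

u = np.quantile(losses, 0.95)                 # threshold at the 95th percentile
excesses = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(excesses, floc=0)         # GPD shape and scale
zeta_u = excesses.size / losses.size                        # estimate of P(L > u)

p = 0.99
var_p = u + beta / xi * (((1 - p) / zeta_u) ** (-xi) - 1)   # assumes xi != 0
es_p = var_p / (1 - xi) + (beta - xi * u) / (1 - xi)        # finite only if xi < 1

print(f"GPD fit: xi = {xi:.3f}, beta = {beta:.3f}")
print(f"VaR at 99%: {var_p:.3f}   ES at 99%: {es_p:.3f}")
```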
This chapter gives a brief, mostly nonmathematical, review of statistical hypothesis tests as used for making scientific inferences, with a focus on determining causality. We compare causation and association, reviewing the idea (popularized by Pearl) that probability models alone cannot determine causation, and that other aspects of the study design, such as randomization, are important for determining causality. We differentiate between observational and experimental studies, with the latter much better suited to making causal inferences. We review the example of hormone replacement therapy and its relationship to heart disease in women, where earlier observational studies suggested that the therapy reduced heart disease, while a later large randomized trial showed the opposite. We discuss general issues in designing studies, such as keeping a clear focus on the primary study question to avoid unintentionally exploring multiple hypotheses. We define and review validity, reliability, selection biases, placebo and Hawthorne effects, regression to the mean, blinding, dependence, intention-to-treat analyses, matching, and inverse propensity score weighting.
An economic scenario generator (ESG) is a joint model of economic indicators that is designed for use in risk measurement and risk management. Using Monte Carlo simulation, the ESG generates random paths for variables such as interest rates, inflation rates, and stock index movements, designed to reflect both suitable marginal distributions for each variable, and the dependencies between variables. In this chapter, we consider first the decisions that need to be made before the model is constructed or selected. We then present two different frameworks used for ESGs: the cascade design and the vector autoregression design. We consider some of the challenges of model and parameter selection for long-term forecasting, such as structural breaks in the historical data. Finally, we illustrate the application of ESGs with an example involving an employer-sponsored pension plan.
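To make the vector autoregression design concrete (this is not the book's model), the sketch below simulates monthly paths of an interest rate, an inflation rate, and a log equity return from an assumed VAR(1) specification; the coefficient matrix, long-run means, and shock covariance are illustrative.

```python
# A minimal sketch, not from the book: a vector autoregression (VAR(1)) design
# for an economic scenario generator. The state holds an interest rate, an
# inflation rate, and a log equity return; the coefficient matrix, long-run
# means, and shock covariance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

mu = np.array([0.03, 0.02, 0.06])              # assumed long-run means
A = np.array([[0.90, 0.05, 0.00],              # assumed VAR(1) coefficients
              [0.10, 0.80, 0.00],
              [-0.20, -0.10, 0.10]])
chol = np.linalg.cholesky(np.array([           # assumed shock covariance
    [1.0e-4, 2.0e-5, -1.0e-5],
    [2.0e-5, 1.0e-4,  0.0],
    [-1.0e-5, 0.0,    2.5e-3]]))

n_scenarios, horizon = 1_000, 120              # monthly steps over 10 years
paths = np.empty((n_scenarios, horizon, 3))
x = np.tile(mu, (n_scenarios, 1))              # start each path at the long-run mean
for t in range(horizon):
    shocks = rng.standard_normal((n_scenarios, 3)) @ chol.T
    x = mu + (x - mu) @ A.T + shocks           # VAR(1) update around the mean
    paths[:, t, :] = x

print("average state after 10 years:", paths[:, -1, :].mean(axis=0))
```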
In this chapter, we consider how traditional measures of profit, using return ratios, can be improved by considering the riskiness of the investment. The objective is to analyse which parts of the business are performing best relative to the risks involved, rather than simply comparing returns with no consideration of risk. In turn, this allows the business to decide where it should take on more risk, as the additional risk is justified by additional return, and where it should take on less.
A key part of the risk budgeting exercise is the allocation of the organization’s economic capital to the individual business units. In the second part of this chapter, we describe different approaches to capital allocation, and assess their relative advantages and disadvantages.
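As a small numerical illustration of capital allocation approaches (not from the book), the sketch below compares a proportional allocation of Expected Shortfall with an Euler-style 'co-ES' allocation on simulated business-unit losses; the two-unit lognormal loss model is an assumption made for the example.

```python
# A minimal sketch, not from the book: two capital allocation rules applied to
# simulated business-unit losses. The proportional rule scales standalone
# Expected Shortfalls to the total; the Euler (co-ES) rule allocates
# E[L_i | total loss in the tail]. The two-unit loss model is an assumption.
import numpy as np

rng = np.random.default_rng(5)
n, alpha = 200_000, 0.99
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
L = np.exp(z * np.array([0.8, 1.2]))          # unit 2 has the heavier tail
total = L.sum(axis=1)

def es(x, a):
    """Empirical Expected Shortfall at level a."""
    return x[x >= np.quantile(x, a)].mean()

es_total = es(total, alpha)
standalone = np.array([es(L[:, 0], alpha), es(L[:, 1], alpha)])

proportional = es_total * standalone / standalone.sum()
tail = total >= np.quantile(total, alpha)
euler = L[tail].mean(axis=0)                  # sums to the total ES by construction

print("total ES     :", es_total)
print("proportional :", proportional)
print("Euler (co-ES):", euler, "  sum =", euler.sum())
```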
In this chapter, we discuss how to prepare for and respond to crises. Crises are inevitable, and preparing for crises is an important aspect of risk management. Effective preparation requires the firm to understand best practice in crisis response. We present some examples, good and bad, of corporate and government responses to crises. We discuss the key steps in crisis preparation, followed by the key steps in crisis response. Finally, we consider the impact of corporate structure and ethics.
This chapter covers two-sample studies when the responses are either ordinal or numeric. We present different estimands and their associated tests and confidence intervals. For differences in means, we describe several approaches. Permutation tests and their associated confidence intervals need a location shift assumption. We compare several t-tests (Student’s t-test, Welch’s t-test, and the Behrens–Fisher test), focusing on robustness and relaxing normality assumptions. For differences or ratios of medians, we recommend the melding approach because it requires neither large samples nor a location shift assumption. Several confidence interval methods related to the Mann–Whitney parameter are discussed, including one compatible with the Wilcoxon–Mann–Whitney test that requires a proportional odds assumption, and the Brunner–Munzel interval, which is less restrictive. The causal interpretation of the Mann–Whitney parameter is compared with that of a related parameter, the proportion that benefit on treatment. When power is the driving concern, we compare several tests and show that the Wilcoxon–Mann–Whitney test is often more powerful than a t-test, even when the data appear approximately normal.
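As a quick illustration (using Python's scipy rather than the book's R package asht), the sketch below runs Welch's t-test and the Wilcoxon–Mann–Whitney test on the same skewed two-sample data and adds a simple percentile bootstrap interval for the difference in means; the simulated data and sample sizes are assumptions.

```python
# A minimal sketch using scipy (not the book's R package asht): Welch's t-test
# and the Wilcoxon-Mann-Whitney test on the same skewed samples, plus a simple
# percentile bootstrap interval for the difference in means. The simulated
# data and sample sizes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.lognormal(mean=0.0, sigma=0.8, size=40)          # control responses
y = rng.lognormal(mean=0.3, sigma=0.8, size=40)          # treatment responses

welch = stats.ttest_ind(x, y, equal_var=False)           # Welch's t-test
wmw = stats.mannwhitneyu(x, y, alternative="two-sided")  # Wilcoxon-Mann-Whitney

print("Welch t-test p-value:", welch.pvalue)
print("WMW test p-value    :", wmw.pvalue)

# Percentile bootstrap CI for the difference in means (one of many options)
diffs = [rng.choice(y, y.size).mean() - rng.choice(x, x.size).mean()
         for _ in range(2_000)]
print("95% bootstrap CI for the mean difference:", np.percentile(diffs, [2.5, 97.5]))
```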
This short chapter formally defines hypothesis tests in terms of decision rules paired with assumptions. It defines when one set of assumptions is more restrictive than another set. It further defines a multiple perspective decision rule, where one decision rule (and hence its p-value function) can be applied under different sets of assumptions, called perspectives. For example, the one-sample t-test p-value may be interpreted as testing a mean restricted to the class of normal distributions, but that same p-value is asymptotically valid under an expanded class of distributions that does not require the normality assumption. This multiple perspective decision rule formulation may allow an appropriate interpretation of the t-test p-value even when the data are clearly not normally distributed. Another example is presented, giving the Wilcoxon–Mann–Whitney decision rule under two different perspectives.
This chapter provides a brief review of several important ideas in causality. We define potential outcomes, a pair of outcomes for each individual denoting their response if they had received treatment and their response if they had received control. Typically, we only observe one of the potential outcomes. We define some causal estimands, such as the average causal difference, vaccine efficacy, and the Mann–Whitney parameter. We discuss estimation of the average causal difference from a matched experiment and from a randomized study. Using a hypothetical vaccine study, we discuss why causal inference requires more care and assumptions for observational studies than for experiments. We work through a study to estimate the average causal effect on compliers from a randomized study with imperfect compliance. We define principled adjustments for randomized studies. We discuss interference in causality. We review causal analysis with propensity scores for observational studies. We define directed acyclic graphs (DAGs) and show how they can be used to define the backdoor criterion and confounders. Finally, we discuss instrumental variables analysis.
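As a small illustration of potential outcomes and the average causal difference (not an example from the book), the sketch below simulates both potential outcomes for each unit, randomizes treatment, and shows that the difference in observed group means recovers the average causal effect; all numbers are made up for the example.

```python
# A minimal sketch, not from the book: potential outcomes in a simulated
# randomized study. Each unit has both potential outcomes (y0, y1), but only
# one is observed; randomization makes the difference in observed group means
# an unbiased estimate of the average causal difference. All numbers are made up.
import numpy as np

rng = np.random.default_rng(8)
n = 10_000
y0 = rng.normal(50.0, 10.0, size=n)          # response if given control
y1 = y0 + rng.normal(5.0, 2.0, size=n)       # response if given treatment
true_acd = (y1 - y0).mean()                  # average causal difference (unknowable in practice)

z = rng.binomial(1, 0.5, size=n)             # randomized treatment assignment
y_obs = np.where(z == 1, y1, y0)             # only one potential outcome is seen

estimate = y_obs[z == 1].mean() - y_obs[z == 0].mean()
print(f"true average causal difference: {true_acd:.2f}")
print(f"difference-in-means estimate  : {estimate:.2f}")
```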
This chapter focuses on inferences on the mean of n independent binary responses. For most applied problems, the exact one-sided p-values and the exact central confidence interval (also called the Clopper–Pearson interval) are appropriate. Less common exact confidence intervals (by Sterne or Blaker) may have smaller width, at the cost of giving up centrality (equal error bounds on both sides of the interval). We also discuss mid-p tests and confidence intervals that do not have guaranteed coverage, but instead have coverage that is approximately the nominal level "on average." Asymptotic methods are briefly described. We discuss three different ways of determining sample size for a one-sample study with binary responses: pick the sample size that (1) gives appropriate power to reject the null hypothesis for a particular alternative, (2) gives appropriate power to observe at least one event, or (3) bounds the expected 95% confidence interval width.
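As an illustration (using Python's scipy rather than the book's R package asht), the sketch below computes an exact one-sided p-value and the Clopper–Pearson interval, and checks the interval against its closed form in terms of beta quantiles; the data (14 events in 20 trials) are invented.

```python
# A minimal sketch using scipy (not the book's R package asht): an exact
# one-sided p-value and the Clopper-Pearson (exact central) confidence
# interval, checked against the closed form in beta quantiles. The data
# (14 events in 20 trials) are invented for the example.
from scipy import stats

k, n, p0, conf = 14, 20, 0.5, 0.95

# Exact one-sided p-value for H0: p <= p0 versus H1: p > p0
res = stats.binomtest(k, n, p=p0, alternative="greater")
print("exact one-sided p-value:", res.pvalue)

# Clopper-Pearson interval from scipy ...
ci = res.proportion_ci(confidence_level=conf, method="exact")
print("Clopper-Pearson CI:", (ci.low, ci.high))

# ... and the same interval from beta quantiles (valid for 0 < k < n)
a = 1 - conf
lower = stats.beta.ppf(a / 2, k, n - k + 1)
upper = stats.beta.ppf(1 - a / 2, k + 1, n - k)
print("beta-quantile check:", (lower, upper))
```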
We review the difference between real-world and risk-neutral processes. We illustrate asset processes using two years of daily returns from the S&P 500 stock index, and 30 years of monthly data from the S&P/TSX (Toronto Stock Exchange) stock index. We describe three models for asset prices in discrete time: the independent lognormal model, the GARCH model, and the regime-switching lognormal model. We describe how the models are fitted to data, and briefly discuss how to choose between models.
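As a minimal illustration of the independent lognormal model (not taken from the book), the sketch below fits the model to a toy price series by taking the mean and standard deviation of the log returns, then simulates one future path; the "historical" prices and the horizon are assumptions.

```python
# A minimal sketch, not from the book: fitting the independent lognormal (ILN)
# model to a price series and simulating one future path. The ILN model treats
# log returns as i.i.d. normal; the toy "historical" prices and the horizon
# are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(9)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.005, 0.04, size=360)))  # toy monthly index

log_returns = np.diff(np.log(prices))
mu_hat, sigma_hat = log_returns.mean(), log_returns.std(ddof=1)  # ILN parameter estimates

horizon = 120                                 # 10 years of monthly steps
future = prices[-1] * np.exp(np.cumsum(rng.normal(mu_hat, sigma_hat, size=horizon)))
print(f"fitted mu = {mu_hat:.4f}, sigma = {sigma_hat:.4f}, simulated 10-year price = {future[-1]:.1f}")
```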
Fay and Brittain present statistical hypothesis testing and compatible confidence intervals, focusing on application and proper interpretation. The emphasis is on equipping applied statisticians with enough tools (and advice on choosing among them) to find reasonable methods for almost any problem, and enough theory to tackle new problems by modifying existing methods. After covering the basic mathematical theory and scientific principles, tests and confidence intervals are developed for specific types of data. Essential methods for applications are covered, such as general procedures for creating tests (e.g., likelihood ratio, bootstrap, permutation, testing from models), adjustments for multiple testing, clustering, stratification, causality, censoring, missing data, group sequential tests, and non-inferiority tests. New methods developed by the authors are included throughout, such as melded confidence intervals for comparing two samples and confidence intervals associated with Wilcoxon–Mann–Whitney tests and Kaplan–Meier estimates. Examples, exercises, and the R package asht support practical use.