In this chapter, we review the different methods available to a firm that wants to transfer risk. First, we consider the traditional route of insurance, or reinsurance. We describe the different types of insurance contracts, and analyse their advantages and disadvantages. We then consider captive insurance companies, which are insurance companies that are owned by the organization that is transferring risk. Next, we discuss securitization of risk, where risk is packaged into investments that are sold off in the capital markets. One of the most interesting examples of securitized insurance risk is the catastrophe bond, or cat bond. We also look at examples of securitization of demographic risk, through pandemic bonds and longevity derivatives.
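To make the catastrophe bond structure concrete, here is a minimal Python sketch, not taken from the chapter, of how an indemnity-trigger cat bond's principal might be written down between an attachment and an exhaustion point; the function name and all figures are illustrative assumptions.

```python
# Illustrative sketch (not from the chapter): principal repayment of a
# catastrophe bond with an indemnity trigger. All numbers are assumed.

def cat_bond_principal(loss, principal=100.0, attachment=50.0, exhaustion=150.0):
    """Return the principal repaid to investors given the sponsor's
    catastrophe loss; principal is written down linearly between the
    attachment and exhaustion points."""
    if loss <= attachment:
        return principal                      # no trigger: full principal returned
    if loss >= exhaustion:
        return 0.0                            # layer exhausted: total principal loss
    # partial write-down, proportional to the loss within the layer
    return principal * (exhaustion - loss) / (exhaustion - attachment)

for loss in (20, 75, 200):
    print(loss, cat_bond_principal(loss))
```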
This chapter deals with either clustering, where every individual within each cluster has the same treatment, or stratification, where there are individuals with different treatments within each stratum. For the studies with clustering, we compare two individual-level analysis methods (generalized estimating equations and random effects models) and a cluster-level analysis (performing a t-test on the means from each cluster). We simulate cluster analyses in which the treatment effect is or is not related to cluster size. In the stratification context, we explore Simpson’s paradox, where the direction of the within-stratum effects differs from the direction of the overall effect. We show that the appropriate analysis of data consistent with Simpson’s paradox should adjust for the strata or not, depending on the study design. We discuss the stratification-adjusted tests of Mantel and Haenszel, van Elteren, and quasi-likelihood binomial or Poisson models. We compare meta-analysis using fixed effects or random effects (e.g., the DerSimonian–Laird method). Finally, we describe confidence intervals for directly standardized rates.
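As a rough illustration of the cluster-level analysis mentioned above, the following Python sketch (not the book's code) simulates cluster means under varying cluster sizes and compares arms with a two-sample t-test on those means; the function name and all parameters are assumptions.

```python
# A minimal sketch of a cluster-level analysis: summarize each cluster by its
# mean response, then compare arms with a t-test on the cluster means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_cluster_means(n_clusters, arm_effect, sd_between=1.0, sd_within=2.0):
    means = []
    for _ in range(n_clusters):
        size = rng.integers(10, 50)                   # varying cluster sizes
        cluster_effect = rng.normal(0.0, sd_between)  # random cluster effect
        y = rng.normal(arm_effect + cluster_effect, sd_within, size)
        means.append(y.mean())
    return np.array(means)

control = simulate_cluster_means(12, arm_effect=0.0)
treated = simulate_cluster_means(12, arm_effect=1.0)
t_stat, p_value = stats.ttest_ind(treated, control)   # t-test on cluster means
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```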
This chapter introduces MATLAB, covering the basic knowledge and skills that may be needed for the data analysis in the following chapters. It is, however, far from a complete coverage of MATLAB; nor do we need everything MATLAB provides. For those already familiar with MATLAB, this chapter may be skipped or used as a quick review. The exercises at the end of the chapter may be useful for some data processing tasks; for example, selecting a subset of a dataset is often needed, and the MATLAB function find is particularly useful for that.
This chapter first describes group sequential methods, where interim tests of a study are done and the study may be stopped either for efficacy (if a large enough early treatment effect is seen) or for futility (if it is unlikely that a treatment effect will be significant if the study goes to completion). We compare two methods for group sequential analysis with equally spaced looks, the Pocock and the O’Brien–Fleming methods, both based on the Brownian motion model. Flexible versions of these methods are developed using the alpha spending function approach, where the decision to perform an interim analysis may be based on information independent of the study results up to that point. We discuss adjustments when the Brownian motion model assumption does not hold, as well as estimation and confidence intervals after stopping early. Next, we discuss the Bauer–Köhne and Proschan–Hunsberger two-stage adaptive methods, which bound the Type I error rate. These methods partition the study into two stages. The Stage 1 data allow three decisions: (1) stop and declare significance, (2) stop for futility, or (3) continue the study, with the sample size for the second stage based on the first-stage data.
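To illustrate why repeated looks inflate the Type I error under the Brownian motion model, here is a rough Monte Carlo sketch in Python (not from the chapter). The constant 2.413 is the commonly tabulated Pocock boundary for five equally spaced looks at two-sided level 0.05; treat it as an assumption that the simulation itself checks.

```python
# Under H0, the z-statistics at K equally spaced looks behave like cumulative
# sums of standard normal increments standardized by the information fraction.
import numpy as np

rng = np.random.default_rng(1)
K, n_sim = 5, 200_000

increments = rng.standard_normal((n_sim, K))
z = np.cumsum(increments, axis=1) / np.sqrt(np.arange(1, K + 1))

max_abs_z = np.abs(z).max(axis=1)
print("reject rate, naive 1.96 at every look :", np.mean(max_abs_z > 1.96))
print("reject rate, Pocock-type 2.413 boundary:", np.mean(max_abs_z > 2.413))
```

The first rate comes out well above 0.05, while the constant Pocock-type boundary brings it back to approximately the nominal level.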
The objective of this chapter is to discuss some background information on tides and the idea, purpose, and method of harmonic analysis of tides. Harmonic analysis is a special application of the least squares method to tidal signals. A list of 37 major tidal frequencies is provided. The basic theory and an example of the analysis are presented. The choice of time origin in expressing tidal time series and the longer-term variation of tidal constituents are discussed. A concise equilibrium tidal theory is included at the end of the chapter for reference.
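As a minimal illustration of harmonic analysis as least squares, the following Python sketch (an analogue, not the book's MATLAB treatment) fits the amplitude and phase of a single constituent to a synthetic water level record; the M2 period of about 12.4206 hours and all data values are assumptions.

```python
# Fit eta ~ mean + A*cos(omega t) + B*sin(omega t) by ordinary least squares,
# then recover amplitude and phase of the single constituent.
import numpy as np

t_hours = np.arange(0.0, 30 * 24.0, 1.0)      # 30 days of hourly data
omega = 2.0 * np.pi / 12.4206                 # M2 angular frequency (rad/hour)

rng = np.random.default_rng(2)
eta = 0.8 * np.cos(omega * t_hours - 1.2) + rng.normal(0.0, 0.1, t_hours.size)

X = np.column_stack([np.ones_like(t_hours),
                     np.cos(omega * t_hours),
                     np.sin(omega * t_hours)])
coef, *_ = np.linalg.lstsq(X, eta, rcond=None)
mean_level, A, B = coef
amplitude = np.hypot(A, B)
phase = np.arctan2(B, A)                      # eta ≈ amplitude * cos(omega t - phase)
print(f"amplitude ≈ {amplitude:.3f} m, phase ≈ {phase:.3f} rad")
```

A full analysis simply extends the design matrix with cosine and sine columns for each of the tabulated tidal frequencies.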
This chapter deals with studies with k groups. When the k groups are unordered, we use k-sample tests, and when the k groups are ordered, we use trend tests. For k-sample tests with categorical responses, we describe the chi-squared test and Fisher’s exact test (also called the Freeman–Halton test). For k-sample tests with numeric responses, we cover one-way ANOVA, the studentized range test, and the Kruskal–Wallis test. For the one-way ANOVA and studentized range tests, we give some associated effect parameters and their confidence intervals. For trend tests, we describe the Cochran–Armitage trend test for binary responses and the Jonckheere–Terpstra test for numeric responses. We discuss the familywise error rate when performing follow-up tests of pairwise comparisons in k-sample studies. When k = 3, after rejecting the k-sample test, subsequent pairwise tests may be done without correction, but otherwise (k > 3) corrections are necessary for strongly controlling the Type I error rate (e.g., for all pairwise comparisons: the Tukey–Kramer, Tukey–Welch, and studentized range procedures; or for many-to-one comparisons: Dunnett’s procedure).
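For a quick illustration of some of these k-sample tests, here is a short Python sketch (not the book's code) using scipy; the simulated data and the counts in the table are assumptions.

```python
# k-sample tests on simulated data: one-way ANOVA and Kruskal-Wallis for
# numeric responses, chi-squared for a k x 2 table of categorical counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1, g2, g3 = (rng.normal(mu, 1.0, 20) for mu in (0.0, 0.3, 0.8))

print(stats.f_oneway(g1, g2, g3))     # one-way ANOVA
print(stats.kruskal(g1, g2, g3))      # Kruskal-Wallis test

table = np.array([[12, 8],
                  [15, 5],
                  [ 7, 13]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")
```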
A taxonomy is a classification system. In this chapter, we present a risk taxonomy, by which we mean that we shall categorize and describe all the major risks that may be faced by a firm or institution. We will describe risks that arise from outside the organization (external risks) and those that come from within the organization (internal risks). External risks are further categorized into economic, political, and environmental categories, while internal risks include operational and strategic risks. Reputational risk may be internally or externally generated. We describe some examples of how risks have arisen in several high-profile cases, showing the intersectionality of the different risk categories – that is, how the different risk types can all be driven by a single risk event.
In this chapter, we discuss the ways that credit risk arises, and how it can be modelled and mitigated. First, we consider the various types of contractual forms for loans and other obligations. We then discuss credit derivatives, which are contracts with payoffs that are contingent on credit events. We consider credit risk models based on the three fundamental components: probability of default, proportionate loss given default, and exposure at default. We consider models of default for individual firms, including the role of credit rating agencies; structural models, which are based on the underlying processes causing default; and reduced form models, which rely more on empirical information, with less emphasis on the underlying story. This is followed by a description of portfolio credit risk models, where the joint credit risk of multiple entities is the modelling objective.
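To show how the three components combine, here is a tiny illustrative Python sketch (not from the chapter) computing an expected loss as probability of default times loss given default times exposure at default; all portfolio figures are assumed.

```python
# Expected loss per loan: EL = PD * LGD * EAD; sum over the portfolio.
loans = [
    # (probability of default, loss given default, exposure at default)
    (0.02, 0.45, 1_000_000),
    (0.05, 0.60,   250_000),
    (0.01, 0.30, 2_000_000),
]

expected_loss = sum(pd * lgd * ead for pd, lgd, ead in loans)
print(f"portfolio expected loss = {expected_loss:,.0f}")
```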
This chapter discusses censored time-to-event data. We review and define right-censored and interval-censored data and the common assumptions associated with them, focusing on standard cases where the independent censoring assumption holds. We define the Kaplan–Meier estimator, the nonparametric maximum likelihood estimator (NPMLE) of the survival distribution for right-censored data. We describe the beta product confidence procedure, which gives pointwise confidence intervals for the survival distribution with better coverage than the standard Greenwood intervals. We describe the NPMLE of the survival distribution for interval-censored data using the E-M algorithm. We compare the proportional hazards and proportional odds models. For both right- and interval-censored data, we describe the score tests from the proportional hazards or proportional odds models and show that they are different forms of weighted logrank tests. We cover testing the difference in survival distributions at a specific point in time. We discuss issues with interpreting the proportional hazards model causally, showing that a model with individual proportional hazards is not equivalent to the usual population proportional hazards model.
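As a minimal sketch of the Kaplan–Meier product-limit estimator for right-censored data (not the book's code), the following Python snippet computes the estimate by hand on a small assumed data set.

```python
# Kaplan-Meier: at each distinct event time, multiply the running survival
# estimate by (1 - events / number at risk).
import numpy as np

times  = np.array([3, 5, 5, 8, 10, 12, 15])   # follow-up times
events = np.array([1, 1, 0, 1,  0,  1,  0])   # 1 = event observed, 0 = censored

surv = 1.0
for t in np.unique(times[events == 1]):       # distinct event times
    at_risk = np.sum(times >= t)              # subjects at risk just before t
    d = np.sum((times == t) & (events == 1))  # events at t
    surv *= (1.0 - d / at_risk)
    print(f"S({t}) = {surv:.3f}")
```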
As mentioned earlier, time series data must include time stamps. It may seem trivial, but some attention is needed to use time properly and avoid mistakes. The objective of this chapter is to review a few concepts of time so that, when an analysis of time series data is performed, there is less chance of mistakes affecting data consistency, the results of analysis, and interpretation. We will discuss some basic astronomical concepts related to time; different definitions of day; and time measurements, GMT, and UTC. We will learn to use MATLAB to convert civil time or time strings, i.e. the year, month, day, hour, minute, and second, to a real-number time sequence and vice versa. We will also briefly discuss the Positioning, Navigation, and Timing (PNT) data from the Global Positioning System (GPS).
The chapter focuses on two-sample studies with binary responses, mostly on the case where each sample has an independent binomial response. We discuss three parameters of interest based on functions of the two binomial parameters: the difference, ratio, or odds ratio of the two parameters. The difference and odds ratio have symmetry equivariance, but the ratio does not. The odds ratio is useful for case-control studies. We compare two versions of the two-sided Fisher’s exact test, and recommend the central one. We describe confidence intervals compatible with Fisher’s exact test using any of the three parameters of interest. Unconditional exact tests generally have more power than conditional ones, such as Fisher’s exact test, but are computationally more complicated. We recommend a modified Boschloo unconditional exact test, with associated confidence intervals, as having good power. We discuss the Berger–Boos adjustment and mid-p methods. We compare several methods with respect to confidence interval coverage. We end with a different study design used with COVID-19 vaccines, where the total number of events is fixed in advance.
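The following Python sketch (not the book's code) contrasts the central two-sided Fisher's exact test, obtained by doubling the smaller one-sided p-value, with scipy's default two-sided version, and also runs Boschloo's unconditional exact test; the 2x2 table is an assumption for illustration.

```python
import numpy as np
from scipy import stats

table = np.array([[7, 12],
                  [1, 18]])

_, p_less = stats.fisher_exact(table, alternative="less")
_, p_greater = stats.fisher_exact(table, alternative="greater")
p_central = min(1.0, 2.0 * min(p_less, p_greater))      # central Fisher p-value
_, p_default = stats.fisher_exact(table, alternative="two-sided")
print(f"central Fisher p = {p_central:.4f}, default two-sided p = {p_default:.4f}")

# Boschloo's unconditional exact test (requires scipy >= 1.7)
res = stats.boschloo_exact(table, alternative="two-sided")
print(f"Boschloo p = {res.pvalue:.4f}")
```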
The objective of this chapter is to review the Taylor series expansion and discuss its use in error estimation. The unique value of the Taylor series expansion is often neglected. The major assumption is that a function must be infinitely differentiable for the Taylor series expansion to apply. In real applications in oceanography, however, there is hardly a need to worry about derivatives higher than the third order, although one may think of some exceptions. The point is that there is rarely a need in oceanography and other environmental sciences to actually compute a very high order derivative, except for theoretical investigations or under special situations. So the application of the Taylor series expansion usually involves only the first two derivatives. In this chapter, some simple examples are included for a better understanding of the applications.
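As a simple illustration of the first-derivative term in error estimation, the following Python sketch (an analogue, not from the chapter) compares the linear Taylor estimate of a small change with the exact change; the function and numbers are assumptions.

```python
# First-order Taylor expansion for error propagation: delta_f ≈ f'(x0) * delta_x,
# with the next Taylor term giving a rough bound on the approximation error.
import numpy as np

def f(x):
    return np.sin(x)

x0, dx = 0.6, 0.05
linear_estimate = np.cos(x0) * dx           # first derivative term only
exact_change = f(x0 + dx) - f(x0)
second_order = -0.5 * np.sin(x0) * dx**2    # next Taylor term

print(f"first-order estimate : {linear_estimate:.6f}")
print(f"exact change         : {exact_change:.6f}")
print(f"second-order term    : {second_order:.6f}")
```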
The objective of this chapter is to introduce rotary spectrum analysis for velocity vector time series. When the two components of a velocity vector oscillate at different frequencies, the tip of the displacement vector draws a figure called a Lissajous figure. A special case of the Lissajous figure occurs when the two components oscillate at the same frequency. A vector time series at a given frequency can only have a few basic patterns, or a combination of these patterns: the tip of the vector draws a line segment back and forth repeatedly, or rotates either clockwise or counterclockwise. This makes it necessary to study the rotary spectra for rotations in both directions. A rectilinear motion is a degenerate version, or special case, of rotary motion.
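A rough Python sketch of the idea (an analogue, not the book's MATLAB code): form the complex velocity w = u + iv and take its Fourier transform; positive frequencies carry the counterclockwise rotary component and negative frequencies the clockwise one. The synthetic velocity record and its frequency are assumptions.

```python
import numpy as np

dt = 1.0                                    # sampling interval (hours)
t = np.arange(0.0, 240.0, dt)
f0 = 1.0 / 12.4206                          # assumed frequency (cycles per hour)

# large clockwise circle plus a small counterclockwise circle -> an ellipse
u =  1.0 * np.cos(2 * np.pi * f0 * t) + 0.3 * np.cos(2 * np.pi * f0 * t)
v = -1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * f0 * t)

w = u + 1j * v
W = np.fft.fft(w) / t.size
freqs = np.fft.fftfreq(t.size, d=dt)

ccw_power = np.sum(np.abs(W[freqs > 0]) ** 2)   # counterclockwise (positive freq)
cw_power  = np.sum(np.abs(W[freqs < 0]) ** 2)   # clockwise (negative freq)
print(f"counterclockwise power ≈ {ccw_power:.3f}, clockwise power ≈ {cw_power:.3f}")
```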
In this chapter, we discuss some of the common psychological or behavioural factors that influence risk analysis and risk management. We give examples of cases where behavioural biases created a risk management failure, and some ways in which the negative impact of biases can be mitigated. Biases are categorized, loosely, as relating to (i) self-deception, (ii) information processing (both forms of cognitive bias), and (iii) social bias, relating to the pressures created by social norms and expectations. We give examples of a range of common behavioural biases in risk management, and we briefly describe some strategies for overcoming the distortions created by behavioural factors in decision-making. Next, we present the foundational concepts of Cumulative Prospect Theory, which provides a mathematical framework for decision making that reflects some universal cognitive biases.
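To give a flavour of the mathematical framework, here is a brief Python sketch (not from the chapter) of the Tversky–Kahneman (1992) functional forms often used in Cumulative Prospect Theory: an S-shaped value function with loss aversion and an inverse-S probability weighting function. The parameter values are the commonly cited estimates and are taken here as assumptions.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex for losses, losses scaled by loss aversion lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are overweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(value(100.0), value(-100.0))   # losses loom larger than gains
print(weight(0.01), weight(0.99))    # small p overweighted, large p underweighted
```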
In this chapter, we review some of the risk management implications of the regulation of banks and insurance companies. Banks are largely regulated through local implementation of the Basel II and Basel III Accords. Insurance regulation is more varied, but the development of the Solvency II framework in the European Union has influenced regulation more widely, and so we focus on Solvency II as an example of a modern insurance regulatory system.
In this chapter we begin with definitions of standard missing data assumptions: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). Under MNAR, the probability that a response is missing may depend on the missing data values. For example, if the response is death, individuals drop out of the study when they are very sick, and we do not or cannot measure the variables that indicate which individuals are very sick, then the data are MNAR. In the MNAR case, we consider several sensitivity analysis methods: worst case imputation, opposite arm imputation, and tipping point analysis. The tipping point analysis changes the imputed missing data systematically until the inferential results change (e.g., from significant to not significant). In the MAR case, we consider, in a very simple setting, models such as regression imputation and inverse probability weighted estimators. We simulate two scenarios: (1) when the MAR model is correctly specified, and (2) when the MAR model is misspecified. Finally, we briefly describe multiple imputation for missing data in a simple MAR scenario.
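A toy Python sketch (not the book's code) of a simple MAR setting: the response is missing with a probability that depends only on an observed covariate, so the complete-case mean is biased while weighting the observed responses by the inverse of the (here known, since simulated) probability of being observed recovers the target. All data-generating choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
x = rng.binomial(1, 0.5, n)                 # observed covariate
y = rng.normal(1.0 + 2.0 * x, 1.0)          # response, depends on x

p_obs = np.where(x == 1, 0.9, 0.3)          # MAR: missingness depends only on x
observed = rng.binomial(1, p_obs, n).astype(bool)

print("true mean               :", y.mean().round(3))
print("complete-case mean      :", y[observed].mean().round(3))
ipw = np.sum(y[observed] / p_obs[observed]) / np.sum(1.0 / p_obs[observed])
print("inverse probability mean:", ipw.round(3))
```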
This chapter addresses multiplicity in testing: the problem that if many hypotheses are tested then, unless some multiplicity adjustment is made, the probability of falsely rejecting at least one hypothesis can be unduly high. We define the familywise error rate (FWER), the probability that at least one true null hypothesis in the family of hypotheses is rejected. We discuss which sets of hypotheses should be grouped into families. We define the false discovery rate (FDR). We describe simple adjustments based only on the p-values of the hypotheses in the family, such as the Bonferroni, Holm, and Hochberg procedures for FWER control and the Benjamini–Hochberg adjustment for FDR control. We discuss max-t type inferences for controlling the FWER in linear models, or other models with asymptotically normal estimators. We describe resampling-based multiplicity adjustments. We demonstrate graphical methods, showing, for example, gatekeeping and fallback procedures, and allowing for more complicated methods. We briefly present logical constraints among hypotheses and the theoretically important closure method.
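A short Python sketch (not the book's code) of the p-value-based adjustments using statsmodels; the vector of p-values is an assumption for illustration.

```python
# Bonferroni, Holm, and Hochberg control the FWER; Benjamini-Hochberg ('fdr_bh')
# controls the FDR.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])

for method in ("bonferroni", "holm", "simes-hochberg", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:>14}: adjusted p = {np.round(p_adj, 3)}, reject = {reject}")
```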