This chapter briefly covers many general methods for calculating p-values and confidence intervals. We discuss likelihood ratios for inferences with a one-dimensional parameter. Pivot functions are defined (e.g., the probability integral transformation). Basic results for normal and asymptotically normal inferences are given, such as some central limit theorems and the delta method. Three important likelihood-based asymptotic methods (the Wald, score, and likelihood ratio tests) are defined and compared. We describe the sandwich method for estimating variance, which requires fewer assumptions than the likelihood-based methods. General permutation tests are presented, along with implementation details including equivalent forms, the permutational central limit theorem, and Monte Carlo methods. The nonparametric bootstrap is described, as well as some bootstrap confidence interval methods such as the BCa method. We describe the melding method for combining two confidence intervals, which gives an automatically efficient way to calculate confidence intervals for differences or ratios of two parameters. Finally, we discuss within-cluster resampling.
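As one concrete illustration of the resampling methods named above, the following Python sketch computes a nonparametric bootstrap percentile interval for a one-sample mean. It is a minimal sketch under illustrative assumptions (exponential data, 2,000 resamples) and shows the simpler percentile interval rather than the BCa adjustment discussed in the chapter; the function name and data are ours, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_percentile_ci(x, stat=np.mean, n_boot=2000, alpha=0.05):
    """Nonparametric bootstrap percentile interval for a one-sample statistic."""
    x = np.asarray(x)
    boot_stats = np.array([
        stat(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Example: skewed data, interval for the mean
x = rng.exponential(scale=2.0, size=50)
print(bootstrap_percentile_ci(x))
```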
The objective of this chapter is to discuss an important issue: the effect of finite sampling, with respect to either the finite length of the record or the finite sampling interval. A few sampling theorems are discussed.
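The sampling theorems themselves are not reproduced here, but a minimal sketch (with frequencies chosen purely for illustration) shows the kind of finite-sampling-interval effect at issue: a signal sampled below its Nyquist rate is indistinguishable from a lower-frequency alias.

```python
import numpy as np

# A 9 Hz sinusoid sampled at 10 Hz (below the 18 Hz Nyquist requirement)
# is indistinguishable from a 1 Hz sinusoid at the sample times.
fs = 10.0                      # sampling frequency (Hz)
t = np.arange(0, 2, 1 / fs)    # 2 s of samples
x_true = np.cos(2 * np.pi * 9 * t)
x_alias = np.cos(2 * np.pi * 1 * t)
print(np.allclose(x_true, x_alias))   # True: the 9 Hz signal aliases to 1 Hz
```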
The chapter covers inferences with ordinal or numeric responses, with the focus on medians or means. We discuss choosing between the mean and the median for describing central tendency. We give an exact test and an associated exact central confidence interval for the median that are applicable without making assumptions about the distribution. For the mean, we show that some restrictive assumptions on the class of distributions are needed, since without them valid tests on the mean are not possible. We discuss the one-sample t-test and how, under the normality assumption, it is the uniformly most powerful unbiased test. We show through some asymptotic results and simulations that the t-test can still be approximately valid under less restrictive assumptions. By simulation, we compare the t-test to some bootstrap inferential methods for the mean, suggesting that the bootstrap-t interval is slightly better for skewed data. We discuss making inferences on rate or count data after making either Poisson or overdispersed Poisson assumptions on the counts. Finally, we discuss testing the variance, standard deviation, or coefficient of variation under certain normality assumptions.
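The following is a minimal sketch of distribution-free median inference of the kind described above, assuming the standard sign-test and order-statistic construction; the data, sample size, and null value are illustrative, not the chapter's example.

```python
import numpy as np
from scipy.stats import binom, binomtest

rng = np.random.default_rng(1)
x = rng.lognormal(size=25)       # skewed sample, illustrative

# Exact sign test of H0: median = m0 (observations equal to m0 are dropped)
m0 = 1.0
n = int(np.sum(x != m0))
print(binomtest(int(np.sum(x > m0)), n, p=0.5).pvalue)

# Exact central confidence interval for the median from order statistics:
# take the largest r with P(Binomial(n, 1/2) <= r - 1) <= alpha/2;
# then (x_(r), x_(n-r+1)) has coverage of at least 1 - alpha.
alpha = 0.05
xs = np.sort(x)
cdf = binom.cdf(np.arange(n + 1), n, 0.5)
r = int(np.max(np.where(cdf <= alpha / 2)[0])) + 1
print(xs[r - 1], xs[n - r])      # 0-based indexing for x_(r) and x_(n-r+1)
```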
In this chapter, we distinguish funding liquidity from market liquidity, and idiosyncratic liquidity from systemic liquidity. We discuss the nature of highly liquid assets and methods by which a firm might acquire liquid assets to cover short-term cash flow problems, either in normal operations or in more extreme crises. As liquidity risk is a problem of cash flow management, we explain how cash flow scenario tests can be used to identify and mitigate risks. We describe liquidity-adjusted risk measures used in banking. Finally, we describe how firms might create emergency plans for managing extreme and unexpected liquidity shocks.
This chapter first focuses on goodness-of-fit tests. A simple case is testing for normality (e.g., the Shapiro–Wilk test). We generally recommend against this because large sample sizes can find statistically significant differences even if those differences are not important, and vice versa. We show how Q-Q plots can be used to graphically check the magnitude of departures from normality. We discuss the Kolmogorov–Smirnov test for any difference between two distributions. We review goodness-of-fit tests for contingency tables (Pearson’s chi-squared test and Fisher’s exact test) and for logistic regression (the Hosmer–Lemeshow test). The rest of the chapter is devoted to equivalence and noninferiority tests. The margin of equivalence or noninferiority must be prespecified, and for noninferiority tests of a new drug against a standard, the margin should be smaller than the difference between the placebo and the standard. We discuss the constancy assumption and biocreep. We note that while poor design (poor compliance, poor choice of study population, poor measurement) generally decreases power in superiority designs, it can lead to high Type I error rates in noninferiority designs.
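For readers who want to try the named tests, here is a short sketch using Python's scipy.stats; the chapter itself is not tied to any particular software, and the simulated data below are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.lognormal(size=200)      # clearly non-normal sample, illustrative
y = rng.normal(size=200)

# Shapiro-Wilk test for normality (the chapter cautions against relying on it)
print(stats.shapiro(x))

# Two-sample Kolmogorov-Smirnov test for any difference between distributions
print(stats.ks_2samp(x, y))

# Theoretical vs. ordered sample quantiles, the ingredients of a normal Q-Q plot
(theoretical_q, ordered_x), _ = stats.probplot(x, dist="norm")
```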
This chapter covers power and sample size estimation for study design. A very flexible way to estimate power is by simulation. Binomial confidence intervals on the simulated power estimates can be used to determine the number of simulations needed for the desired precision. To determine sample size by simulation, we introduce an algorithm, based on methods for dose-finding studies, that does not need a large number of replications at each sample size tried. We present a general normal-theory approximation that gives approximate sample sizes for designs using simple tests, such as the two-sample t-test and the two-sample difference in binomial proportions test. We generalize to cases with unequal allocation between arms or more complicated tests such as the logrank test. We discuss modifications to sample sizes to account for nonadherence or missing data.
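Below is a minimal sketch of power estimation by simulation with a binomial confidence interval on the estimate, as described above; the design (a two-sample t-test with 64 per arm and a 0.5-SD effect) and the helper name are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def simulated_power(n_per_arm, delta, sd=1.0, n_sim=2000, alpha=0.05):
    """Monte Carlo power of a two-sample t-test (names and values illustrative)."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, sd, n_per_arm)
        y = rng.normal(delta, sd, n_per_arm)
        if stats.ttest_ind(x, y).pvalue < alpha:
            rejections += 1
    power = rejections / n_sim
    # Binomial CI on the power estimate shows the Monte Carlo precision
    ci = stats.binomtest(rejections, n_sim).proportion_ci()
    return power, (ci.low, ci.high)

print(simulated_power(n_per_arm=64, delta=0.5))   # roughly 0.80 for this design
```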
This chapter discusses some basic concepts for quantifying the characteristics of functions with random fluctuations. Deterministic functions are discussed first in order to introduce functions with randomness and ways to quantify them. The concepts of phase space, ensemble mean, ergodic processes, moments, covariance functions, and correlation functions are discussed briefly.
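As a small illustration of estimating a correlation function from a single realization (assuming, as the chapter discusses, that ergodicity lets time averages stand in for ensemble averages), the following sketch uses a synthetic first-order autoregressive series; the model, parameters, and function name are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# A single realization of a random process: AR(1)-type series, illustrative
n, phi = 1000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def sample_autocorrelation(x, max_lag):
    """Biased sample autocovariance, normalized to an autocorrelation function."""
    x = np.asarray(x) - np.mean(x)
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) / c0
                     for k in range(max_lag + 1)])

print(sample_autocorrelation(x, 5))   # decays roughly like phi**k
```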
Harmonic analysis and Fourier analysis are fundamental tools for oceanographic time series data analysis. Both can be derived from the least squares method. They differ, however, in one major respect: Fourier analysis is based on a complete set of base functions, so that the convergence of the relevant Fourier series is guaranteed for continuous functions. In contrast, harmonic analysis almost always leaves a non-zero total squared error, except for purely deterministic functions containing only tidal frequencies. This chapter provides additional discussion and some examples aimed at a better understanding of the concepts and techniques. The discussion contrasts tidal harmonic analysis and Fourier analysis in concept and through some examples of harmonic analysis.
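A minimal sketch of tidal harmonic analysis as a least squares fit at prescribed frequencies follows, using a synthetic hourly sea-level record; the constituents, amplitudes, and record length are illustrative, not the chapter's own example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hourly sea-level record (synthetic): M2 + S2 tide plus noise
t = np.arange(0.0, 24.0 * 30, 1.0)               # hours, 30 days
omega = 2 * np.pi / np.array([12.4206, 12.0])    # M2 and S2 angular frequencies (rad/hr)
eta = (0.5 * np.cos(omega[0] * t - 0.3)
       + 0.2 * np.cos(omega[1] * t + 1.1)
       + 0.05 * rng.normal(size=t.size))

# Harmonic analysis: least squares fit of cos/sin pairs at the chosen frequencies
A = np.column_stack([np.ones_like(t)]
                    + [f(w * t) for w in omega for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(A, eta, rcond=None)
amp = np.hypot(coef[1::2], coef[2::2])           # recovered amplitudes ~ (0.5, 0.2)
print(amp)
```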
In this chapter, we describe some numerical methods used for calculating VaR and Expected Shortfall for losses related to investment portfolios, measured over short time horizons – typically 10 days or less. These are techniques commonly used for regulatory capital calculations under Basel III. We start with simple investment portfolios and then add derivatives. We review the covariance approach, the delta-normal approach, and the delta-gamma-normal approach to portfolio risk measures. Each of these approaches ultimately uses a normal approximation to the distribution of the portfolio value. We also consider the use of historical simulation, based on the empirical distribution of asset prices over the recent past. Finally, we discuss backtesting the risk measure distributions, which is required under the Basel regulations.
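Of the approaches listed, historical simulation is the simplest to sketch: the empirical loss distribution of the portfolio is built from past returns and its quantile read off directly. The returns, weights, and portfolio value below are synthetic and purely illustrative; the delta-normal and delta-gamma-normal approaches are not shown.

```python
import numpy as np

rng = np.random.default_rng(6)

# Historical-simulation VaR sketch (synthetic returns; in practice these
# would be the last ~500 days of observed asset returns)
n_days, weights = 500, np.array([0.6, 0.4])
asset_returns = rng.multivariate_normal(
    mean=[0.0, 0.0],
    cov=[[0.0001, 0.00004], [0.00004, 0.0002]],
    size=n_days)
portfolio_value = 1_000_000.0
losses = -portfolio_value * asset_returns @ weights    # daily losses

alpha = 0.99
var_99 = np.quantile(losses, alpha)                    # 99% one-day VaR
es_99 = losses[losses >= var_99].mean()                # Expected Shortfall
print(var_99, es_99)
```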
The chapter begins with a cautionary story of a study of hydroxychloroquine treatment for COVID-19. We detail weaknesses in the study that led to misinterpretation of its results, emphasizing that there is more to the proper application of hypothesis tests than calculating a p-value. We then discuss reproducibility in science and the p-value controversy. Some argue that, since p-values are often misunderstood and misused, they should be replaced with other statistics. We counter that point of view, arguing that frequentist hypothesis tests, when properly applied, are well suited to address reproducibility issues. This motivates the book, which provides guiding principles and tools for designing studies and properly applying hypothesis testing in many different scientific applications. We agree with many of the concerns about overreliance on p-values; hence the approach of the book is to present not just methods for hypothesis tests but also methods for compatible confidence intervals on parameters that can accompany them. The chapter ends with an overview of the book, describing its level and intended audience.
The objective of this chapter is to discuss digital filters. We start with a review of the theory of the Fourier Transform for continuous functions. The continuous Fourier Transform is then discretized. The discretized Fourier Transform and inverse Fourier Transform, however, are not approximations – they are exact. Using the shifting theorem, a filter can easily be expressed in the frequency domain. A Finite Impulse Response (FIR) filter is then defined. By adding another implicit convolution to the original convolution of the FIR filter, the filtered data depend not only on the input (the original time series) but also on the output (the filtered data); this recursive relation defines the Infinite Impulse Response (IIR) filter. These filters are examples of so-called linear systems that have an input and an output. The filter defines a gain, which is the ratio between the output and the input in the frequency domain. Several FIR and IIR filter functions in MATLAB are discussed.
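The chapter's examples use MATLAB filter functions; as a rough analogue (an assumption on our part, using Python's scipy.signal with an illustrative cutoff and filter order), the sketch below applies an FIR and an IIR low-pass filter to a synthetic hourly record.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(8)

# Hourly series: a slow "subtidal" signal plus a semidiurnal tide and noise
fs = 1.0                                    # samples per hour
t = np.arange(0, 24 * 30, 1 / fs)
x = (np.sin(2 * np.pi * t / (24 * 5))       # 5-day oscillation
     + 0.5 * np.cos(2 * np.pi * t / 12.42)  # M2 tide
     + 0.1 * rng.normal(size=t.size))

cutoff = 1 / 30.0                           # cycles per hour (30-hour low-pass)

# FIR low-pass filter (window method), applied forward-backward
b_fir = signal.firwin(numtaps=121, cutoff=cutoff, fs=fs)
x_fir = signal.filtfilt(b_fir, [1.0], x)

# IIR (Butterworth) low-pass filter: the output depends on past outputs too
b_iir, a_iir = signal.butter(4, cutoff, btype="low", fs=fs)
x_iir = signal.filtfilt(b_iir, a_iir, x)
# Both filtered series retain the 5-day oscillation and suppress the tide.
```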
Risk measures assign a numerical value, quantifying risk, to a random variable representing losses. In this chapter, we introduce several risk measures, including the two most commonly used in risk management: Value at Risk (VaR) and Expected Shortfall. The risk measures are tested for ‘coherence’ based on a list of properties that have been proposed as desirable for risk measures used in internal and regulatory risk assessment. We consider computational issues, including estimating risk measures and their standard errors from Monte Carlo simulation.
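Here is a minimal sketch of estimating VaR and Expected Shortfall by Monte Carlo, with crude standard errors obtained from independent batches; the lognormal loss model, confidence level, and batch sizes are illustrative assumptions, not the chapter's own example.

```python
import numpy as np

rng = np.random.default_rng(7)

def var_es(losses, alpha=0.975):
    """Empirical VaR and Expected Shortfall at level alpha."""
    q = np.quantile(losses, alpha)
    return q, losses[losses >= q].mean()

# Monte Carlo estimates with crude standard errors from independent batches
batches = np.array([var_es(rng.lognormal(mean=0.0, sigma=1.0, size=100_000))
                    for _ in range(20)])
est = batches.mean(axis=0)
se = batches.std(axis=0, ddof=1) / np.sqrt(len(batches))
print("VaR, ES:", est, "MC standard errors:", se)
```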
This chapter discusses the generic least squares method and the special situation in which the base functions are orthogonal to each other, which makes the solution explicit. In addition, we learn that the essence of the least squares method can be viewed as projecting the target function, which lives in a higher-dimensional space, onto the lower-dimensional space spanned by the base functions. The least squares method ensures that the error vector is “perpendicular” to the projected (or approximating) vector in the lower-dimensional space of the base functions and thus has the shortest “length”, that is, the minimized error. Although this chapter involves little computation, it is very important for a good understanding of the meaning of many techniques and methods in the subsequent chapters.
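The projection picture can be checked numerically: the least squares residual is orthogonal to every base function, and when the base functions are mutually orthogonal the coefficients reduce to explicit inner products. The sketch below uses an illustrative constant-plus-sinusoid basis chosen so that the columns are orthogonal.

```python
import numpy as np

rng = np.random.default_rng(9)

# Least squares as projection: y is projected onto the column space of A
n = 200
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
A = np.column_stack([np.ones(n), np.cos(t), np.sin(t)])   # orthogonal columns
y = 1.0 + 2.0 * np.cos(t) + rng.normal(scale=0.3, size=n)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ coef

# The error vector is orthogonal to every base function (the projection property)
print(np.round(A.T @ residual, 10))          # ~ all zeros

# With orthogonal columns the solution is explicit: c_k = <y, a_k> / <a_k, a_k>
explicit = (A.T @ y) / np.sum(A * A, axis=0)
print(np.allclose(coef, explicit))           # True
```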