Insurance requires modelling tools different from those of the preceding chapter. Pension schemes and life insurance make use of lifecycle descriptions. Individuals start as ‘active’ (paying contributions), at some point they ‘retire’ (drawing benefits) or become ‘disabled’ (again drawing benefits), and they may die. Stochastic models are needed to keep track of what happens, but they cannot be constructed by means of linear relationships like those in the preceding chapter. There are no numerical variables to connect! Distributions are used instead.
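To make the lifecycle idea concrete, here is a minimal sketch (not taken from the book) of one such lifecycle simulated as a Markov chain; the states and the annual transition probabilities in the matrix P are purely hypothetical.

```python
import numpy as np

# Hypothetical annual transition probabilities between lifecycle states;
# row = current state, column = state one year later.
states = ["active", "disabled", "retired", "dead"]
P = np.array([
    [0.93, 0.02, 0.04, 0.01],   # active
    [0.05, 0.88, 0.04, 0.03],   # disabled
    [0.00, 0.00, 0.96, 0.04],   # retired
    [0.00, 0.00, 0.00, 1.00],   # dead (absorbing)
])

rng = np.random.default_rng(1)
state, path = 0, ["active"]
for year in range(40):
    state = rng.choice(4, p=P[state])   # next state given the current one
    path.append(states[state])
print(path)
```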
The central concepts are conditional probabilities and distributions, which express mathematically that what has occurred influences (but does not determine) what comes next. That idea is the principal topic of the chapter. As elsewhere, mathematical aspects (which here go rather deep) are downplayed in favour of the conditional viewpoint as a modelling tool. Sequences of states in lifecycles involve time series (of a different kind from those in Chapter 5) and are treated in Section 6.6. Actually, time may not be involved at all. Risk heterogeneity in property insurance is a typical (and important) example. Consider a car owner. What he encounters daily in the traffic is influenced by randomness, but so is (from a company point of view) his ability as a driver. These are uncertainties of entirely different origin and define a hierarchy (the driver comes first). Conditional modelling is the natural way of connecting random effects that operate on different levels like this. The same viewpoint is used when errors due to estimation and Monte Carlo are examined in the next chapter, and there are countless other examples.
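As a small illustration of such a hierarchy (again a sketch with hypothetical parameters, not the book's own example), driver ability can be drawn first as a random claim intensity and the claim count then drawn conditionally on it:

```python
import numpy as np

rng = np.random.default_rng(2)
J = 100_000                      # hypothetical number of drivers
alpha, beta = 2.0, 20.0          # hypothetical gamma parameters for ability

mu = rng.gamma(alpha, 1 / beta, size=J)   # level 1: driver-specific intensity
N = rng.poisson(mu)                       # level 2: claim numbers given mu

print(N.mean(), N.var())         # variance exceeds the mean: overdispersion
```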
The hardest part of quantitative risk analysis is to find the stochastic models and judge their realism. This is discussed later. What is addressed now is how models are used once they are in place. Only a handful of probability distributions have been introduced, and yet a good deal can be achieved already. The present chapter is a primer, introducing the main arenas and a first computational treatment of them. We start with property insurance (an area of huge uncertainty) where core issues can be reached with very simple modelling. Life insurance is quickly reached too, but now something is very different: once the stochastic model is given, there is little risk left! That does not rule out considerable uncertainty in the model itself, a topic discussed in Section 15.2. With financial risk there is again much randomness under the model assumed.
The target of this chapter is the general line. Many interesting points (demanding heavier modelling) are left out and dealt with later. A unifying theme is Monte Carlo as problem solver. By this we do not mean the computational technique, which was treated in the preceding chapter (and in the next one too). What is on the agenda is the art of making the computer work for a purpose: how we arrange for it to chew away at computational obstacles and how it is used to get a feel for the numbers. Monte Carlo is also an efficient way of handling the myriad details of practical problems. Feed them into the computer and let simulation take over. Implementation is often straightforward, and existing programs might be reused with minor variations.
The principal tasks in general insurance are solvency and pricing. Solvency is the financial control of liabilities under nearly worst-case scenarios. The target is the so-called reserve, i.e., the upper ε-percentile qε of the portfolio liability X. Modelling was reviewed in the preceding chapters, and the issue now is computation. We may need the entire distribution of X, and Monte Carlo is the obvious general tool. Some problems can be handled by simpler Gaussian approximations, possibly with a correction for skewness added. Computational methods for solvency are discussed in the next two sections.
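A minimal Monte Carlo sketch of the reserve calculation, assuming a simple compound model with hypothetical Poisson claim numbers and gamma claim sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 10_000                      # number of simulations (hypothetical)
lam = 1000.0                    # expected portfolio claim count (hypothetical)
shape, scale = 1.5, 10.0        # hypothetical gamma claim-size parameters

# Each simulation: draw the claim count, then the claim sizes, then sum.
X = np.array([rng.gamma(shape, scale, size=rng.poisson(lam)).sum()
              for _ in range(m)])

eps = 0.01
reserve = np.quantile(X, 1 - eps)   # Monte Carlo estimate of the reserve q_eps
print(round(reserve, 1))
```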
The second main topic is the pricing of risk. This has a market side; a company will gladly charge what people are willing to pay! Yet at its core is the pure premium, the expected payout during a certain period of time, written π = E(X) for a single policy and Π = E(X) for the portfolio (with X now the portfolio payout). Evaluations of these are important not only as a basis for pricing, but also as an aid to decision-making. Not all risks are worth taking! Pricing or rating methods follow two main lines. One of them draws on the claim histories of individuals: those with good records are considered lower risk and rewarded (premium reduced), while those with bad records are punished (premium raised). The traditional approach is through the theory of credibility, a classic presented in Section 10.5. Price differentials can also be administered according to the experience with groups.
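The credibility idea can be sketched as a weighted average of the individual record and the collective premium; the weight below follows the classical Bühlmann form, and all numbers are hypothetical:

```python
# Credibility-weighted pure premium (sketch; hypothetical parameter values).
K = 5                      # years of individual claim history
xbar = 1200.0              # individual average annual claim cost
pi_collective = 2000.0     # collective (portfolio-wide) pure premium
sigma2 = 4.0e6             # within-policy variance
tau2 = 2.5e5               # between-policy variance

w = K / (K + sigma2 / tau2)                  # credibility weight in [0, 1)
pi_hat = w * xbar + (1 - w) * pi_collective  # premium pulled towards the record
print(round(w, 3), round(pi_hat, 1))
```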
How is the evaluation of risk influenced by modern computing? Consider the way we use mathematics, first as a vendor of models of complicated risk processes. These models are usually stochastic: in general insurance, probability distributions of claim numbers and losses; in life insurance and finance, stochastic processes describing lifecycles and investment returns. Mathematics is from this point of view a language, a way risk is expressed, and it is a language we must master. Otherwise statements of risk cannot be related to reality, it would be impossible to say what conclusions mean in any precise manner, and analyses could not be presented effectively to clients. Actuarial science is in this sense almost untouched by modern computational facilities. The basic concepts and models remain what they were, notwithstanding, of course, the strong growth of risk products over the last decades. This development may have had something to do with computers, but not much with computing per se.
However, mathematics is also deduction: precise conclusions derived from precise assumptions through the rules of logic. That is the way mathematics is taught at school and university. It is here that computing enters applied mathematical disciplines like actuarial science. More and more of these deductions are implemented in computers and carried out there. This has been going on for decades. It leans on an endless growth in computing power, a true technological revolution that has opened up simpler and more general computational methods requiring less of their users.
The liabilities of the preceding chapter extended over decades, and assets covering them should be followed over decades too, which requires models for equities and the interest-rate curve. Inflation is relevant too, since liabilities might depend on the future wage or price level. This chapter is on the joint and dynamic modelling of such variables. This is a cornerstone when financial risk is evaluated and makes use of linear, normal and heavy-tailed models, stochastic volatility, random walks and stationary stochastic processes. The topic is not elementary; above all it is multivariate. Economic and financial variables influence each other mutually, some of them heavily.
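One of the simplest building blocks mentioned above is the random walk; a minimal sketch of an equity index modelled as a geometric (log-normal) random walk, with hypothetical drift and volatility, might look like this:

```python
import numpy as np

rng = np.random.default_rng(6)
years, mu, sigma = 30, 0.05, 0.20                  # hypothetical drift/volatility
log_returns = rng.normal(mu - 0.5 * sigma**2, sigma, size=years)
index = np.exp(np.cumsum(log_returns))             # index value relative to start
print(index[-1])
```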
The central models, reviewed below and extended from the treatment in Part I, are huge classes, and here lies the real difficulty: which models to pick in specific situations, and what about their parameters? Sources of information are historical data, implications of market positions and even economic and financial analyses and theory. Much of that is beyond the scope of this book, and unlike elsewhere, model building through historical data is touched on only briefly. A specific model to work with will be needed in Chapter 15. The most established in actuarial science may be the Wilkie models, set up in a purely empirical way by examining historical data from the last 70 years of the twentieth century; see Wilkie (1995). A major part of this model system is presented in Sections 13.5 and 13.6.
Can't we simply rely on the available software and avoid what lies behind numerical methods altogether? A lot of work can be completed that way, with numerical algorithms as black boxes, but if things go wrong you are stuck without knowledge of what is on the inside, and should you become involved in software development, numerical tools play a leading role. For a broad and practically oriented text try Numerical Recipes in C (Press et al., 2007), with sister volumes in C++, Fortran and Pascal, which comes with hundreds of implemented procedures that can be downloaded and used for your own programming.
The purpose of this appendix is the much more modest one of reviewing the numerical methods used in this book, in a relaxed manner that does not require prior knowledge of numerical mathematics. A lot can actually be achieved with a handful of elementary methods, and some of them are up to 200 years old! Minimizing or maximizing functions is an exception deserving special mention. Since the days of Newton–Raphson the world has moved far, and optimization software may work magnificently even when differentiation is carried out numerically. Why is it advantageous to avoid exact calculation of derivatives? Because implementation is often time consuming, and when the function to be optimized is a Monte Carlo simulation it may be impossible. Don't differentiate the function to be optimized in order to solve equations instead. This is still advocated in some textbooks, but after considerable effort the original problem has been replaced by a more difficult one! Non-linear equations in one variable are easy (Section C.4); systems of several of them are best avoided (if possible).
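To illustrate how optimization with numerically approximated derivatives can look in practice (a sketch, not the appendix's own code), scipy's general-purpose minimizer falls back to finite-difference gradients when none are supplied:

```python
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(theta, data):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                  # keep sigma positive
    return np.sum(np.log(sigma) + 0.5 * ((data - mu) / sigma) ** 2)

rng = np.random.default_rng(4)
data = rng.normal(3.0, 2.0, size=500)          # hypothetical observations

# No gradient is passed, so the derivatives are approximated numerically.
result = minimize(negative_log_likelihood, x0=[0.0, 0.0], args=(data,))
print(result.x)                                # roughly [3.0, log(2.0)]
```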
The book is organized as a broad introduction to concepts, models and computational techniques in Part I and with general insurance and life insurance/financial risk in Parts II and III. The latter are largely self-contained and can probably be read on their own. Each part may be used as a basis for a university course; we do that in Oslo. Computation is more strongly emphasized than in traditional textbooks. Stochastic models are defined in the way they are simulated in the computer and examined through numerical experiments. This cuts down on the mathematics and enables students to reach ‘advanced’ models quickly. Numerical experimentation is also a way to illustrate risk concepts and to indicate the impact of assumptions that are often somewhat arbitrary. One of the aims of this book is to teach how the computer is put to work effectively.
Other issues are error in risk assessments and the use of historical data, each of which is a task for statistics. Many of the models and distributions are presented with simple fitting procedures, and there is an entire chapter on error analysis and on the difference between risk under the underlying, real model and under the one we actually use. Such error is in my opinion often treated too lightly: we should be very much aware of the distinction between the complex, random mechanisms of real life and our simplified model versions with deviating parameters. In a nebulous and ever-changing world, modelling should be kept simple and limited to the essential.
Risk modelling beyond the most elementary requires stochastically dependent variables. The non-linear part of the theory, much needed in insurance, is treated in Chapter 6; the topic here is linear relationships, which are the main workhorse for financial risk. Two examples are shown in Figure 5.1. On the left, monthly log-returns on two equity indexes from the New York Stock Exchange (NYSE) are scatter plotted for a period of 25 years. They tend to move in the same direction and by related amounts. This is cross-sectional dependence; what happens at the same time influences both simultaneously. The dynamic or longitudinal side is indicated on the right. Equity returns R0:k accumulated over k months are plotted against k. They start at zero (by definition) and then climb steadily, until by 2001 the investments were 10–15 times more valuable than at the beginning. A downturn (only partly shown) then set in.
The first part of this chapter concerns cross-sectional dependence with random vectors X = (X1, …, XJ). Models for pairs were treated in Section 2.4, and their scatterplots in Figure 2.5 (look them up!) match the real data in Figure 5.1 left fairly well. It is those models that are now being extended to J variables. They play a main role in longitudinal modelling too, where the setup is a random sequence X1, X2, … with Xk occurring at time tk = kh. The value of the time increment h depends on the application. In long-term finance 1 year is often sufficient, yet much (and important) theoretical modelling applies when h → 0. How models on different time scales are related is discussed at the end of the chapter.
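A minimal sketch of the cross-sectional side: simulating a Gaussian random vector with a prescribed correlation, here standing in for two monthly equity log-returns with hypothetical means, volatilities and correlation:

```python
import numpy as np

rng = np.random.default_rng(5)
mean = np.array([0.006, 0.005])               # hypothetical monthly means
vol = np.array([0.05, 0.06])                  # hypothetical monthly volatilities
rho = 0.7                                     # hypothetical correlation
cov = np.outer(vol, vol) * np.array([[1.0, rho], [rho, 1.0]])

R = rng.multivariate_normal(mean, cov, size=300)   # 25 years of monthly returns
print(np.corrcoef(R.T)[0, 1])                      # close to the assumed rho
```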
Models describing variation in claim size lack the theoretical underpinning provided by the Poisson point process. The traditional approach is to impose a family of probability distributions and estimate their parameters from historical claims z1, …, zn (corrected for inflation if necessary). Even the family itself is often determined from experience. An alternative with considerable merit is to throw all prior mathematical conditions overboard and rely solely on the historical data. This is known as a non-parametric approach. Much of this chapter is on the use of historical data.
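The non-parametric idea can be sketched in a couple of lines: sample claim sizes directly from the empirical distribution of the historical record (the claims below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
z = np.array([12.3, 4.1, 55.0, 7.8, 19.5, 3.2, 8.8, 140.0])   # hypothetical claims
Z_star = rng.choice(z, size=100_000, replace=True)            # resampled claim sizes
print(Z_star.mean(), np.quantile(Z_star, 0.99))
```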
How we proceed is partly dictated by the size of the historical record, and here the variation is enormous. With automobile insurance the number of observations n might be large, providing a good basis for the probability distribution of the claim size Z. By contrast, major incidents in industry (like the collapse of an oil rig) are rare, making the historical material scarce. Such diversity in what there is to go on is reflected in the presentation below. The extreme right tail of the distribution warrants special attention. Lack of historical data where it matters most financially is a challenge. What can be done about it is discussed in Section 9.5.
Actuarial modelling in general insurance is usually broken down into claim size (next chapter) and claim frequency (treated here). Section 3.2 introduced the Poisson distribution as a model for claim numbers. The parameter was λ = μT (for single policies) and λ = JμT (for portfolios), where J is the number of policies, μ the claim intensity and T the time of exposure. Most models for claim numbers are related to the Poisson distribution in some way, and this line has strong theoretical support through the Poisson point process in Section 8.2.
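In code, the portfolio claim count is simply a Poisson draw with parameter λ = JμT; the values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(8)
J, mu, T = 10_000, 0.05, 1.0          # policies, annual claim intensity, years
lam = J * mu * T                      # lambda = J * mu * T = 500
N = rng.poisson(lam, size=10)         # ten simulated annual claim counts
print(lam, N)
```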
The intensity μ is a vehicle for model extensions. One viewpoint with a long tradition in actuarial science is to regard it as random, either drawn independently for each customer or drawn once as a common parameter for all. Models of that kind were initiated in Section 6.3, and there will be more below. Then there are situations where variations in μ are linked to explanatory factors, such as young drivers being more risky than older ones, or earthquakes or hurricanes being more common in certain parts of the world than in others. Risk may also grow systematically over time or be influenced by the season of the year, as in Figure 8.2 later. Explanatory variables are best treated through Poisson regression, introduced in Section 8.4.
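A sketch of Poisson regression fitted by Newton–Raphson (the covariate, here a scaled driver age, and the coefficients are hypothetical; the book's own treatment is in Section 8.4):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 5000
age = rng.uniform(18, 70, size=n)
X = np.column_stack([np.ones(n), (age - 40.0) / 10.0])   # intercept + scaled age
beta_true = np.array([-3.0, -0.3])                       # younger drivers riskier
y = rng.poisson(np.exp(X @ beta_true))

beta = np.array([np.log(y.mean()), 0.0])                 # crude starting value
for _ in range(25):
    mu_hat = np.exp(X @ beta)
    grad = X.T @ (y - mu_hat)                            # score of the log-likelihood
    info = X.T @ (X * mu_hat[:, None])                   # Fisher information
    beta = beta + np.linalg.solve(info, grad)
print(beta)                                              # close to beta_true
```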
The world of Poisson
Introduction
The world of Poisson is the world of the accidental where incidents, though rare, do occur and independently of each other. Insurance processes are much like that, which suggests they can be lifted into a Poisson framework.