This chapter introduces probability as a measure of likelihood, which can be placed on a numerical scale running from 0 to 1. Examples are given to show the range and scope of problems that need probability to describe them. We examine some simple interpretations of probability that are important in its development, and we briefly show how the well-known principles of mathematical modelling enable us to progress. Note that in this chapter exercises and problems are chosen to motivate interest and discussion; they are therefore non-technical, and mathematical answers are not expected.
Prerequisites. This chapter contains next to no mathematics, so there are no prerequisites. Impatient readers keen to get to an equation could proceed directly to chapter 2.
PROBABILITY
We all know what light is, but it is not easy to tell what it is.
Samuel Johnson
From the moment we first roll a die in a children's board game, or pick a card (any card), we start to learn what probability is. But even as adults, it is not easy to tell what it is in a general way.
It is now clear that for most of the interesting and important problems in probability, the outcomes of the experiment are numerical. And even when this is not so, the outcomes can nevertheless often be represented uniquely by points on the line, or in the plane, or in three or more dimensions. Such representations are called random variables. In the preceding chapter we have actually been studying random variables without using that name for them. Now we develop this idea with new notation and background. There are many reasons for this, but the principal justification is that it makes it much easier to solve practical problems, especially when we need to look at the joint behaviour of several quantities arising from some experiment. There are also important theoretical reasons, which appear later.
In this chapter, therefore, we first define random variables, and introduce some new notation that will be extremely helpful and suggestive of new ideas and results. Then we give many examples and explore their connections with ideas we have already met, such as independence, conditioning, and probability distributions. Finally we look at some new tasks that we can perform with these new techniques.
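The idea of a random variable as a numerical representation of outcomes can be made concrete with a small computation. The following Python sketch is our own illustration, not drawn from the text; the names omega, X, and dist are hypothetical. It takes the experiment of rolling two fair dice and builds the distribution of the random variable "sum of the faces":

```python
from itertools import product
from fractions import Fraction

# Sample space: all 36 equally likely ordered pairs of faces.
omega = list(product(range(1, 7), repeat=2))

def X(outcome):
    """A random variable: maps each outcome to a number, here the sum."""
    return outcome[0] + outcome[1]

# Distribution of X: P(X = k) is the proportion of outcomes mapped to k.
dist = {}
for w in omega:
    k = X(w)
    dist[k] = dist.get(k, 0) + Fraction(1, 36)

print(dist[7])  # 1/6: six of the 36 outcomes sum to 7
```

The point of the representation is visible here: once outcomes are numbers, their joint and marginal behaviour can be tabulated and manipulated directly.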
Prerequisites. We shall use some very elementary ideas from calculus; see the appendix to chapter 4.
INTRODUCTION TO RANDOM VARIABLES
In chapter 4 we looked at experiments in which the outcomes in Ω were numbers; that is to say, Ω ⊆ ℝ or, more generally, Ω ⊆ ℝⁿ.
This is a simple and concise introduction to probability and the theory of probability. It considers some of the ways in which probability is motivated by, and applied to, real-life problems in science, medicine, gaming, and other subjects of interest. Probability is inescapably mathematical in character but, as befits a first course, the book assumes minimal prior technical knowledge on the part of the reader. Concepts and techniques are defined and developed as necessary, making the book as accessible and self-contained as possible.
The text adopts an informal tutorial style, with emphasis on examples, demonstrations, and exercises. Nevertheless, to ensure that the book is appropriate for use as a textbook, essential proofs of important results are included. It is therefore well suited to accompany the usual introductory lecture courses in probability. It is intended to be useful to those who need a working knowledge of the subject in any one of the many fields of application. In addition it will provide a solid foundation for those who continue on to more advanced courses in probability, statistics, and other developments. Finally, it is hoped that the more general reader will find this book useful in exploring the endlessly fascinating and entertaining subject of probability.
In the preceding chapter we suggested that a model is needed for probability, and that this model would take the form of a set of rules. In this chapter we formulate these rules. When doing this, we shall be guided by the various intuitive ideas of probability as a relative of proportion that we discussed in chapter 1. We begin by introducing the essential vocabulary and notation, including the idea of an event. After some elementary calculations, we introduce the addition rule, which is fundamental to the whole theory of probability, and explore some of its consequences.
Most importantly we also introduce and discuss the key concepts of conditional probability and independence. These are exceptionally useful and powerful ideas and work together to unlock many of the routes to solving problems in probability. By the end of this chapter you will be able to tackle a remarkably large proportion of the better-known problems of chance.
Prerequisites. We shall use the routine methods of elementary algebra, together with the basic concepts of sets and functions. If you have any doubts about these, refresh your memory by a glance at appendix II of chapter 1.
NOTATION AND EXPERIMENTS
From everyday experience, you are familiar with many ideas and concepts of probability; this knowledge is gained by observation of lotteries, board games, sport, the weather, futures markets, stock exchanges, and so on. You have various ways of discussing these random phenomena, depending on your personal experience.
Lusin's theorem says that for any measurable real-valued function f on [0, 1] (with Lebesgue measure λ, for example) and any ε > 0, there is a set A with λ(A) < ε such that, restricted to the complement of A, f is continuous. Here [0, 1] can be replaced by any normal topological space and λ by any finite measure μ which is closed regular, meaning that for each Borel measurable set B, μ(B) = sup{μ(F): F closed, F ⊂ B} (RAP, Theorem 7.5.2). Recall that any finite Borel measure on a metric space is closed regular (RAP, Theorem 7.1.3).
Proofs of Lusin's theorem are often based on Egorov's theorem (RAP, Theorem 7.5.1), which says that if measurable functions fn from a finite measure space to a metric space converge pointwise, then for any ε > 0 there is a set of measure less than ε outside of which the fn converge uniformly.
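A standard concrete instance of the phenomenon behind Egorov's theorem (our own illustration, not from the text) is fₙ(x) = xⁿ on [0, 1]: the sequence converges pointwise but not uniformly, yet discarding the interval (1 − ε, 1], of measure ε, restores uniform convergence. The following sketch checks the suprema numerically:

```python
# f_n(x) = x**n on [0, 1] converges pointwise to 0 on [0, 1) and to 1 at x = 1.
eps = 0.01
n = 2000

# Supremum of |f_n| over all of [0, 1]: attained at x = 1, so it is always 1
# and the convergence to the pointwise limit is not uniform on [0, 1].
sup_full = 1.0 ** n

# Supremum over [0, 1 - eps], the complement of a set of measure eps:
# (1 - eps)**n, which tends to 0 as n grows, so convergence there is uniform.
sup_restricted = (1 - eps) ** n

print(sup_full)        # 1.0
print(sup_restricted)  # vanishingly small for large n
```

The set removed here can be chosen once and for all (it does not depend on n), which is precisely what Egorov's theorem guarantees in general.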
Here, the aim will be to extend Lusin's theorem to functions having values in any separable metric space. The proof of Lusin's theorem in RAP, however, also relied on the Tietze-Urysohn extension theorem, which says that a continuous real-valued function on a closed subset of a normal space can be extended to be continuous on the whole space. Such an extension may not exist for some range spaces: for example, the identity from {0, 1} onto itself doesn't extend to a continuous function from [0, 1] onto {0, 1}; in fact there is no such function since [0, 1] is connected.