This is designed to be an introductory text for a modern course on the fundamentals of probability and information. It has been written to address the needs of undergraduate mathematics students in the ‘new’ universities and much of it is based on courses developed for the Mathematical Methods for Information Technology degree at the Nottingham Trent University. Bearing in mind that such students do not often have a firm background in traditional mathematics, I have attempted to keep the development of material gently paced and user friendly – at least in the first few chapters. I hope that such an approach will also be of value to mathematics students in ‘old’ universities, as well as students on courses other than honours mathematics who need to understand probabilistic ideas.
I have tried to address in this volume a number of problems which I perceive in the traditional teaching of these subjects. Many students first meet probability theory as part of an introductory course in statistics. As such, they often encounter the subject as a ragbag of different techniques without the same systematic development that they might gain in a course in, say, group theory. Later on, they might have the opportunity to remedy this by taking a final-year course in rigorous measure theoretic probability, but this, if it exists at all, is likely to be an option only. Consequently, many students can graduate with degrees in mathematical sciences, but without a coherent understanding of the mathematics of probability.
Since Chapter 5, we have been concerned only with discrete random variables and their applications, that is, random variables taking values in sets which are either finite or countably infinite. In this chapter, we will extend the concept of a random variable to the ‘continuous’ case, in which values are taken in ℝ or some interval of ℝ.
Historically, much of the motivation for the development of ideas about such random variables came from the theory of errors in making measurements. For example, suppose that you want to measure your height. One approach would be to take a long ruler or tape measure and make the measurement directly. Suppose that we get a reading of 5.7 feet. If we are honest, we might argue that this result is unlikely to be very precise – tape measures are notoriously inaccurate and it is very difficult to stand completely still when you are being measured.
To allow for the uncertainty as to our true height, we introduce a random variable X to represent our height, and indicate our hesitancy in trusting the tape measure by assigning a number close to 1 to the probability P(X ∈ (5.6, 5.8)); that is, we say that our height is between 5.6 feet and 5.8 feet with very high probability. Of course, by using better measuring instruments, we may be able to assign high probabilities for X lying in smaller and smaller intervals, for example (5.645, 5.665); however, since the precise location of any real number requires us to know an infinite decimal expansion, it seems that we cannot assign probabilities of the form P(X = 5.67).
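The point that intervals carry probability while single points do not can be illustrated numerically. The following sketch (using only the standard library) assumes, purely for illustration, that the height X is normally distributed about the tape-measure reading 5.7 with a hypothetical standard deviation of 0.05 feet; the function name normal_cdf is our own.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal random variable X ~ N(mu, sigma^2)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Hypothetical model: height X ~ N(5.7, 0.05^2) feet.
mu, sigma = 5.7, 0.05

# The probability of an interval is a difference of CDF values.
p_wide = normal_cdf(5.8, mu, sigma) - normal_cdf(5.6, mu, sigma)
p_narrow = normal_cdf(5.665, mu, sigma) - normal_cdf(5.645, mu, sigma)
print(round(p_wide, 4))    # high probability for the wide interval
print(round(p_narrow, 4))  # smaller, but still positive

# A single point carries zero probability:
p_point = normal_cdf(5.67, mu, sigma) - normal_cdf(5.67, mu, sigma)
print(p_point)  # 0.0
```

Shrinking the interval down to a point always drives the probability to zero, which is exactly why the chapter moves from point probabilities to densities.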
So far in this book we have tended to deal with one (or at most two) random variables at a time. In many concrete situations, we want to study the interaction of ‘chance’ with ‘time’, e.g. the behaviour of shares in a company on the stock market, the spread of an epidemic or the movement of a pollen grain in water (Brownian motion). To model this, we need a family of random variables (all defined on the same probability space), (X(t), t ≥ 0), where X(t) represents, for example, the value of the share at time t.
(X(t), t ≥ 0) is called a (continuous time) stochastic process or random process. The word ‘stochastic’ comes from the Greek for ‘pertaining to chance’. Quite often, we will just use the word ‘process’ for short.
For many studies, both theoretical and practical, we discretise time and replace the continuous interval [0, ∞) with the discrete set ℤ+ = ℕ ∪ {0} or sometimes ℕ. We then have a (discrete time) stochastic process (Xn, n ∈ ℤ+). We will focus entirely on the discrete time case in this chapter.
Note. Be aware that X(t) and Xt (and similarly X(n) and Xn) are both used interchangeably in the literature on this subject.
There is no general theory of stochastic processes worth developing at this level. It is usual to focus on certain classes of process which have interesting properties for theoretical development, practical application, or both. We will study Markov chains in this chapter.
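A discrete-time process (Xn, n ∈ ℤ+) of the kind just described is easy to simulate. The sketch below, using only the standard library, runs a hypothetical two-state Markov chain (the states and transition probabilities are invented for illustration): the next state is drawn according to the row of the transition matrix indexed by the current state.

```python
import random

random.seed(0)

# Hypothetical two-state chain: 0 = 'dry', 1 = 'wet'.
# P[i][j] is the probability of moving from state i to state j,
# so each row must sum to 1.
P = [[0.8, 0.2],
     [0.4, 0.6]]

def simulate(P, x0, n):
    """Return one sample path (X_0, X_1, ..., X_n) started from x0."""
    path = [x0]
    for _ in range(n):
        current = path[-1]
        # choose the next state according to row `current` of P
        next_state = random.choices(range(len(P)), weights=P[current])[0]
        path.append(next_state)
    return path

path = simulate(P, 0, 10)
print(path)  # a list of 11 states, each 0 or 1
```

Each run of simulate produces one realisation of the process; the randomness lives in the path, while the matrix P is fixed.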
In this chapter we will be trying to model the transmission of information across channels. We will begin with a very simple model, as is shown in Fig. 7.1, and then build further features into it as the chapter progresses.
The model consists of three components: a source of information, a channel across which the information is transmitted, and a receiver to pick up the information at the other end. For example, the source might be a radio or TV transmitter; the receiver would then be a radio or TV, and the channel the atmosphere through which the broadcast waves travel. Alternatively, the source might be a computer memory, the receiver a computer terminal and the channel the network of wires and processors which connects them. In all cases that we consider, the channel is subject to ‘noise’, that is, uncontrollable random effects which have the undesirable effect of distorting the message, leading to potential loss of information by the receiver.
The source is modelled by a random variable S whose values {a1, a2, …, an} are called the source alphabet. The law of S is {p1, p2, …, pn}. The fact that S is random allows us to include within our model the uncertainty of the sender concerning which message they are going to send. In this context, a message is a succession of symbols from S sent out one after the other.
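A source of this kind is straightforward to simulate. In the sketch below, the alphabet and law are invented for illustration; a message is generated as a succession of independent draws from S, and for a long message the empirical frequencies should approximate the law.

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical source alphabet {a1, a2, a3} with law {p1, p2, p3};
# the probabilities must sum to 1.
alphabet = ['a1', 'a2', 'a3']
law = [0.5, 0.3, 0.2]

# A message is a succession of symbols emitted one after the other.
message = random.choices(alphabet, weights=law, k=1000)

# Compare empirical frequencies with the law.
freq = Counter(message)
for symbol, p in zip(alphabet, law):
    print(symbol, freq[symbol] / 1000, '(law:', p, ')')
```

The sender's uncertainty is captured by the law: before the message is sent, each successive symbol is known only in distribution.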
Mobile robots in field environments travel not only on even terrain but also on uneven or sloped terrain. Practical methods for preventing turnover of a mobile robot are essential, since turnover is extremely hazardous. This paper proposes an efficient algorithm for preventing turnover of a mobile robot on uneven terrain by controlling the robot's linear acceleration and rotational velocity. The concept of the modified zero moment point (ZMP) is proposed for evaluating the potential turnover of the mobile robot, and turnover stability indices for linear acceleration and rotational velocity are defined in terms of the modified ZMP. The turnover stability space (TSS), built from these indices, is presented so that the mobile robot can be controlled to avoid turnover effectively. Finally, the feasibility and effectiveness of the proposed algorithm are verified through simulations conducted on a three-wheeled mobile robot.
We present two complementary improvements for abstract-interpretation-based flow analysis of higher-order languages: (1) abstract garbage collection and (2) abstract counting. Abstract garbage collection is an analog to its concrete counterpart: the analysis determines when an abstract resource has become unreachable, and then, re-allocates it as fresh. This prevents flow sets from joining during abstract interpretation, which has two immediate effects: (1) the precision of the interpretation increases and (2) its running time often falls. In abstract counting, the analysis tracks how many times an abstract resource has been allocated. A count of one implies that the abstract resource momentarily represents only one concrete resource. This knowledge, in turn, drives environment analysis, expanding the kind (rather than just the degree) of optimization available to the compiler.
There is no better way to learn than playing. After all, that is how children learn. In this appendix, we are going to provide the basic guidelines for “playing the quantum computing game” with the help of the MATLAB environment.
Reader Tip. This is not a full MATLAB tutorial. We assume that a fully functional version of MATLAB is already installed on your machine and that you know how to start a session, perform some basic calculations, save them, and quit. You should also know what M-files are and how to load them. For a quick refresher, you can read the online tutorial by MathWorks: http://www.mathworks.com/academia/student_center/tutorials/launchpad.html.
COMPLEX NUMBERS AND MATRICES
We began this book by saying that complex numbers are fundamental for both quantum mechanics and quantum computing, so we are going to familiarize ourselves with the way they are dealt with in MATLAB.
To begin with, we need to declare complex-number variables. This is easy: a complex number has a real part and an imaginary part, both stored as doubles. The imaginary part is entered using the “i” or “j” suffix.
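A few lines at the MATLAB prompt show the idea; real, imag, abs and conj are the built-in functions for the real part, imaginary part, modulus and conjugate.

```matlab
% A complex number is entered with the i (or j) suffix:
z = 3 + 4i;

real(z)   % real part: 3
imag(z)   % imaginary part: 4
abs(z)    % modulus: 5
conj(z)   % complex conjugate: 3 - 4i

% Matrices with complex entries work the same way:
A = [1 + 1i, 2; 0, 1 - 1i];
A'        % conjugate (Hermitian) transpose
A.'       % plain transpose, without conjugation
```

The distinction between the two transposes matters throughout quantum computing, where the conjugate transpose (adjoint) is the one used constantly.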
Before we formally present quantum mechanics in all its wonders, we shall spend time providing some basic intuitions behind its core methods and ideas. Realizing that computer scientists feel comfortable with graphs and matrices, we shall cast quantum mechanical ideas in graph-theoretic and matrix-theoretic terms. Everyone who has taken a class in discrete structures knows how to represent a (weighted) graph as an adjacency matrix. We shall take this basic idea and generalize it in several straightforward ways. While doing this, we shall present a few concepts that are at the very core of quantum mechanics. In Section 3.1, the graphs are without weights. This will model classical deterministic systems. In Section 3.2, the graphs are weighted with real numbers. This will model classical probabilistic systems. In Section 3.3, the graphs are weighted with complex numbers and will model quantum systems. We conclude Section 3.3 with a computer science/graph-theoretic version of the double-slit experiment. This is perhaps the most important experiment in quantum mechanics. Section 3.4 discusses ways of combining systems to yield larger systems.
Throughout this chapter, we first present an idea in terms of a toy model, then generalize it to an abstract point, and finally discuss its connection with quantum mechanics, before moving on to the next idea.
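The three kinds of weighted graph described above can be sketched as matrices acting on state vectors. The toy system below is our own illustration, not taken from the text: two states, with a 0/1 matrix for the deterministic case, a column-stochastic matrix for the probabilistic case, and a unitary matrix (the Hadamard matrix) for the quantum case.

```python
# (1) Deterministic: a 0/1 adjacency matrix; each column has a single 1,
#     saying where that state goes next.
M_det = [[0, 1],
         [1, 0]]          # the two states swap

# (2) Probabilistic: real weights in [0, 1]; each column sums to 1.
M_prob = [[0.5, 0.3],
          [0.5, 0.7]]

# (3) Quantum: complex (here real) amplitudes; the squared moduli
#     of each column sum to 1.
s = 2 ** -0.5
M_quantum = [[s, s],
             [s, -s]]     # the Hadamard matrix

def apply(M, v):
    """One time step: multiply the state vector v by M."""
    n = len(M)
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

print(apply(M_det, [1, 0]))   # [0, 1]: state 0 moves to state 1
print(apply(M_prob, [1, 0]))  # [0.5, 0.5]: a probability distribution
amps = apply(M_quantum, [1, 0])
print([round(a * a, 3) for a in amps])  # squared amplitudes sum to 1
```

In each case "one time step" is the same operation, matrix-vector multiplication; only the constraints on the entries change, which is precisely the generalisation the chapter develops.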
The machine does not isolate man from the great problems of nature, but plunges him more deeply into them.
Antoine de Saint-Exupéry, Wind, Sand and Stars
In this chapter, we discuss a few hardware issues and proposals. Most certainly you have wondered (perhaps more than once!) whether all we have presented up to now is nothing more than elegant speculation, with no practical impact for the real world.
To bring things down to earth, we must address a very basic question: do we actually know how to build a quantum computer?
It turns out that the implementation of quantum computing machines represents a formidable challenge to the communities of engineers and applied physicists. However, there is some hope in sight: quite recently, some simple quantum devices consisting of a few qubits have been successfully built and tested. Considering the amount of resources that have been poured into this endeavor from different quarters (academia, private sector, and the military), it would not be entirely surprising if noticeable progress were made in the near future.
In Section 11.1 we spell out the hurdles that stand in the way, chiefly related to the quantum phenomenon known as decoherence. We also enumerate the wish list of desirable features for a quantum device. Sections 11.2 and 11.3 are devoted to describing two of the major proposals around: the ion trap and optical quantum computers.