Stochastic processes constitute a branch of probability theory treating probabilistic systems that evolve in time. There is no very good reason for trying to define a stochastic process precisely, but, as we hope will become evident in this chapter, there is a very good reason for trying to be precise about probability itself. Those particular topics in which evolution in time is important will then unfold naturally. Section 1.5 gives a brief introduction to one of the very simplest stochastic processes, the Bernoulli process, and then Chapters 2, 3, and 4 develop three basic stochastic process models which serve as simple examples and starting points for the other processes to be discussed later.
Probability theory is a central field of mathematics, widely applicable to scientific, technological, and human situations involving uncertainty. The most obvious applications are to situations, such as games of chance, in which repeated trials of essentially the same procedure lead to differing outcomes. For example, when we flip a coin, roll a die, pick a card from a shuffled deck, or spin a ball onto a roulette wheel, the procedure is the same from one trial to the next, but the outcome (heads (H) or tails (T) in the case of a coin, 1 to 6 in the case of a die, etc.) varies from one trial to another in a seemingly random fashion.
This text has evolved over some 20 years, starting as lecture notes for two first-year graduate subjects at MIT, namely, Discrete Stochastic Processes (6.262) and Random Processes, Detection, and Estimation (6.432). The two sets of notes are closely related and have been integrated into one text. Instructors and students can pick and choose the topics that meet their needs, and suggestions for doing this follow this preface.
These subjects originally had an application emphasis, the first on queueing and congestion in data networks and the second on modulation and detection of signals in the presence of noise. As the notes have evolved, it has become increasingly clear that the mathematical development (with minor enhancements) is applicable to a much broader set of applications in engineering, operations research, physics, biology, economics, finance, statistics, etc.
The field of stochastic processes is essentially a branch of probability theory, treating probabilistic models that evolve in time. It is best viewed as a branch of mathematics, starting with the axioms of probability and containing a rich and fascinating set of results following from those axioms. Although the results are applicable to many areas, they are best understood initially in terms of their mathematical structure and interrelationships.
Applying axiomatic probability results to a real-world area requires creating a probability model for the given area. Mathematically precise results can then be derived within the model and translated back to the real world. If the model fits the area sufficiently well, real problems can be solved by analysis within the model.
A Poisson process is a simple and widely used stochastic process for modeling the times at which arrivals enter a system. It is in many ways the continuous-time version of the Bernoulli process. Section 1.4.1 characterized the Bernoulli process by a sequence of independent identically distributed (IID) binary random variables (rvs), Y1, Y2, …, where Yi = 1 indicates an arrival at increment i and Yi = 0 otherwise. We observed (without any careful proof) that the process could also be characterized by a sequence of geometrically distributed interarrival times.
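The two characterizations of the Bernoulli process can be checked numerically. The following sketch (the arrival probability p = 0.25 and the sample size are illustrative assumptions, not values from the text) simulates the IID binary rvs Yi and confirms that the sample mean of the interarrival times is close to the geometric mean 1/p:

```python
import random

# Sketch (illustrative parameters): simulate a Bernoulli process with
# an assumed arrival probability p per increment.
random.seed(1)
p = 0.25           # assumed arrival probability per increment
n = 200_000        # number of increments to simulate

Y = [1 if random.random() < p else 0 for _ in range(n)]

# Interarrival times: gaps between successive increments i with Y_i = 1.
arrivals = [i for i, y in enumerate(Y, start=1) if y == 1]
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]

# For a geometric distribution, Pr{X = k} = (1-p)^(k-1) p, so the
# sample mean interarrival time should be close to 1/p = 4.
mean_gap = sum(gaps) / len(gaps)
print(round(mean_gap, 2))  # close to 4.0
```

A histogram of `gaps` would likewise show the geometric decay (1 − p)^(k−1) p asserted (without careful proof) above.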
For the Poisson process, arrivals may occur at arbitrary positive times, and the probability of an arrival at any particular instant is 0. This means that there is no very clean way of describing a Poisson process in terms of the probability of an arrival at any given instant. It is more convenient to define a Poisson process in terms of the sequence of interarrival times, X1, X2,…, which are defined to be IID. Before doing this, we describe arrival processes in a little more detail.
Arrival processes
An arrival process is a sequence of increasing rvs, 0 < S1 < S2 < …, where Si < Si+1 means that Si+1 − Si is a positive rv, i.e., a rv X such that FX(0) = 0. The rvs S1, S2, … are called arrival epochs and represent the successive times at which some random repeating phenomenon occurs. Note that the process starts at time 0 and that multiple arrivals cannot occur simultaneously (the phenomenon of bulk arrivals can be handled by the simple extension of associating a positive integer rv with each arrival). We will sometimes permit simultaneous arrivals or arrivals at time 0 as events of zero probability, but these can be ignored.
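The construction above can be sketched numerically: starting from IID positive interarrival times X1, X2, …, the arrival epochs are the partial sums Sn = X1 + … + Xn, which are automatically increasing. Exponential interarrival times (with an assumed illustrative rate λ = 2, anticipating the Poisson process) are used here:

```python
import random
import itertools

# Sketch (illustrative rate lambda = 2 assumed): build an arrival
# process from IID exponential interarrival times X1, X2, ...
random.seed(7)
lam = 2.0
X = [random.expovariate(lam) for _ in range(100_000)]

# Arrival epochs are the partial sums S_n = X_1 + ... + X_n, so
# 0 < S_1 < S_2 < ... holds automatically since each X_i > 0.
S = list(itertools.accumulate(X))
assert all(s1 < s2 for s1, s2 in zip(S, S[1:]))

# By the law of large numbers, S_n / n -> E[X] = 1/lambda.
print(round(S[-1] / len(S), 3))  # close to 0.5
```

Since each Xi is a continuous rv, ties among the Sn occur with probability 0, matching the remark that simultaneous arrivals can be ignored.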
Detection, decision making, and hypothesis testing are different names for the same procedure. The word detection refers to the effort to decide whether some phenomenon is present or not in a given situation. For example, a radar system attempts to detect whether or not a target is present; a quality control system attempts to detect whether a unit is defective; a medical test detects whether a given disease is present. The meaning has been extended in the communication field to detect which one, among a finite set of mutually exclusive possible transmitted signals, has been transmitted. Decision making is, again, the process of choosing between a number of mutually exclusive alternatives. Hypothesis testing is the same, except the mutually exclusive alternatives are called hypotheses. We usually use the word hypotheses for these alternatives in what follows, since the word seems to conjure up the appropriate intuitive images.
These problems will usually be modeled by a generic type of probability model. Each such model is characterized by a discrete random variable (rv) X called the hypothesis rv and a random vector (rv) Y called the observation rv. The observation might also be one-dimensional, i.e., an ordinary rv. The sample values of X are called hypotheses; it makes no difference what these hypotheses are called, so we usually number them 0, 1, …, M − 1.
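As a concrete instance of this generic model (all numerical values here are illustrative assumptions, not from the text), take M = 2 hypotheses and a one-dimensional Gaussian observation whose mean depends on the hypothesis; a MAP decision chooses the hypothesis maximizing the posterior probability, i.e., maximizing pX(x) fY|X(y|x):

```python
import math

# Illustrative sketch of the generic detection model (assumed numbers):
# hypothesis rv X in {0, 1}; observation Y is Gaussian with mean m_x.
priors = {0: 0.6, 1: 0.4}   # assumed a priori probabilities p_X(x)
means = {0: 0.0, 1: 1.0}    # assumed conditional means of Y given X = x
sigma = 1.0                 # assumed common noise standard deviation

def likelihood(y, x):
    """Conditional density f_{Y|X}(y|x) for the assumed Gaussian model."""
    return math.exp(-(y - means[x]) ** 2 / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi))

def map_decision(y):
    """Choose the hypothesis x maximizing p_X(x) f_{Y|X}(y|x)."""
    return max(priors, key=lambda x: priors[x] * likelihood(y, x))

print(map_decision(-0.3), map_decision(1.4))  # 0 1
```

For these assumed numbers the rule reduces to a threshold test on y (here the threshold works out to about 0.9), a structure that recurs throughout detection theory.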
The counting processes {N(t); t > 0} described in Section 2.1.1 have the property that N(t) changes at discrete instants of time, but is defined for all real t > 0. The Markov chains to be discussed in this chapter are stochastic processes defined only at integer values of time, n = 0, 1, …. At each integer time n ≥ 0, there is an integer-valued random variable (rv) Xn, called the state at time n, and the process is the family of rvs {Xn; n ≥ 0}. We refer to these processes as integer-time processes. An integer-time process {Xn; n ≥ 0} can also be viewed as a process {X(t); t ≥ 0} defined for all real t by taking X(t) = Xn for n ≤ t < n + 1, but since changes occur only at integer times, it is usually simpler to view the process only at those integer times.
In general, for Markov chains, the set of possible values for each rv Xn is a countable set S. If S is countably infinite, it is usually taken to be S = {0,1,2,…}, whereas if S is finite, it is usually taken to be S = {1,…, M}. In this chapter (except for Theorems 4.2.8 and 4.2.9), we restrict attention to the case in which S is finite, i.e., processes whose sample functions are sequences of integers, each between 1 and M. There is no special significance to using integer labels for states, and no compelling reason to include 0 for the countably infinite case and not for the finite case. For the countably infinite case, the most common applications come from queueing theory, where the state often represents the number of waiting customers, which might be zero. For the finite case, we often use vectors and matrices, where positive integer labels simplify the notation. In some examples, it will be more convenient to use more illustrative labels for states.
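To make the finite-state setup concrete, the following sketch (the state space S = {1, 2, 3} and the transition probabilities are assumed for illustration) simulates a sample function of a finite Markov chain, where row x of the matrix P gives Pr{Xn+1 = j | Xn = x}:

```python
import random

# Illustrative sketch: a finite Markov chain with assumed S = {1, 2, 3}
# and assumed transition matrix P; P[i][j] = Pr{X_{n+1} = j+1 | X_n = i+1}.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # each row is a distribution

def simulate(x0, n, seed=0):
    """Return a sample path X_0, X_1, ..., X_n starting from state x0."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        row = P[path[-1] - 1]
        # Draw the next state from the row of P indexed by the current state.
        path.append(rng.choices([1, 2, 3], weights=row)[0])
    return path

path = simulate(x0=1, n=10)
print(path[0], len(path))  # 1 11
```

The sample function `path` is exactly a sequence of integers between 1 and M, as described above; using labels {1, …, M} lets the state index the rows and columns of P directly.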
Bks. 5 and 6 of Hdt., which contain the narratives of the Ionian revolt and the Marathon campaign, are central to the Histories. The present section will, after some remarks about book divisions, discuss the structure of bks. 5 and 6, both internally and in relation to the Histories as a whole.
The first requirement is to think away the conventional book divisions altogether; such divisions seem generally to be a fourth-century innovation. There is no good reason to think that Herodotus divided his own work into nine books (unlike, say, Polybius, he does not use them himself to cross-refer or rather back-refer). The Herodotean division is probably Alexandrian, i.e. Hellenistic, perhaps third or second century bc. We must distinguish two questions: who first says that Hdt.'s work was in nine books, and who first cites him by book number.
The ‘chronographic’ source of Diodorus, which provided him with some good-quality historiographic and poetic dates, as well as king lists and dates of city-foundations and mergers (synoikisms), tells us that Hdt.’s work was in nine books. Diodorus himself wrote in the time of Julius Caesar or the early years of Augustus’ principate, but the chronographer worked in perhaps the second century bc, the time of Apollodoros the Chronicler (FGrHist 244). Apollodoros is, however, an unlikely candidate himself, as is Kastor of Rhodes (FGrHist 250), whose chronicle ended with Pompey’s triumph in 61 bc. It is better to leave the Diodoran chronographer without a name.