Until now, we have typically talked about the mean, variance, or higher moments of a random variable (r.v.). In this chapter, we will be concerned with the tail probability of a r.v., that is, the probability that the r.v. exceeds some given value.
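In standard notation (not necessarily the chapter's own), this tail probability for a random variable $X$ at a value $x$ is written

$$\overline{F}(x) \;=\; \Pr\{X > x\},$$

and the chapter is concerned with how quickly this quantity decays as $x$ grows.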
In the last chapter we studied randomized algorithms of the Las Vegas variety. This chapter is devoted to randomized algorithms of the Monte Carlo variety.
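To make the distinction concrete, here is a minimal Python sketch (an illustration, not an algorithm from the chapter): a Las Vegas algorithm is always correct but has a random running time, while a Monte Carlo algorithm has a bounded running time but may fail with some probability.

```python
import random

def las_vegas_find_one(bits):
    """Las Vegas: repeat until a 1 is found.
    Always returns a correct index; only the running time is random."""
    while True:
        i = random.randrange(len(bits))
        if bits[i] == 1:
            return i

def monte_carlo_find_one(bits, trials=20):
    """Monte Carlo: sample a fixed number of positions.
    Running time is bounded, but the search may fail (return None);
    if half the entries are 1, the failure probability is (1/2)**trials."""
    for _ in range(trials):
        i = random.randrange(len(bits))
        if bits[i] == 1:
            return i
    return None

bits = [0, 1] * 50  # half the entries are 1
print(las_vegas_find_one(bits))    # always a valid index
print(monte_carlo_find_one(bits))  # a valid index with very high probability
```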
In Part I, we saw that experiments are classified as either having a discrete sample space, with a countable number of possible outcomes, or a continuous sample space, with an uncountable number of possible outcomes. In this part, our focus will be on the discrete world. In Part III we will focus on the continuous world.
This chapter is a very brief introduction to the wonderful world of transforms. Transforms come in many varieties. There are z-transforms, moment-generating functions, characteristic functions, Fourier transforms, Laplace transforms, and more. All are very similar in their function. In this chapter, we will study z-transforms, a variant particularly well suited to common discrete random variables. In Chapter 11, we will study Laplace transforms, a variant ideally suited to common continuous random variables.
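As a small preview in standard notation (the chapter's own definitions may be phrased differently), the z-transform of a non-negative, integer-valued random variable $X$ is

$$\widehat{X}(z) \;=\; \mathbf{E}\!\left[z^{X}\right] \;=\; \sum_{i=0}^{\infty} \Pr\{X=i\}\, z^{i}.$$

For example, if $X \sim \mathrm{Poisson}(\lambda)$, then

$$\mathbf{E}\!\left[z^{X}\right] \;=\; \sum_{i=0}^{\infty} e^{-\lambda}\frac{\lambda^{i}}{i!}\, z^{i} \;=\; e^{\lambda(z-1)},$$

and moments follow by differentiating at $z=1$, e.g. $\mathbf{E}[X] = \frac{d}{dz}\mathbf{E}[z^{X}]\big|_{z=1} = \lambda$.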
Having covered how to generate random variables in the previous chapter, we are now in good shape to move on to the topic of creating an event-driven simulation. The goal of simulation is to predict the performance of a computer system under various workloads. A big part of simulation is modeling the computer system as a queueing network.
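As an illustration of the event-driven idea, here is a minimal Python sketch of a single FCFS M/M/1 queue (an assumed example; the chapter's simulations may be organized differently). The simulation repeatedly jumps the clock to the next event, an arrival or a departure, and updates the queue state.

```python
import random
from collections import deque

def simulate_mm1(lam, mu, num_jobs=100_000, seed=0):
    """Event-driven simulation of an M/M/1 FCFS queue.
    Returns the mean time in system over num_jobs completed jobs."""
    rng = random.Random(seed)
    clock = 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float('inf')   # no job in service yet
    queue = deque()                 # arrival times of jobs currently in the system
    total_time, completed, arrived = 0.0, 0, 0

    while completed < num_jobs:
        if next_arrival <= next_departure:
            # Arrival event: advance clock, admit job, start service if the server was idle.
            clock = next_arrival
            queue.append(clock)
            arrived += 1
            next_arrival = clock + rng.expovariate(lam) if arrived < num_jobs else float('inf')
            if len(queue) == 1:
                next_departure = clock + rng.expovariate(mu)
        else:
            # Departure event: advance clock, record the response time, serve the next job.
            clock = next_departure
            total_time += clock - queue.popleft()
            completed += 1
            next_departure = clock + rng.expovariate(mu) if queue else float('inf')

    return total_time / completed

# For lam = 1 and mu = 2, theory gives mean time in system 1/(mu - lam) = 1.
print(simulate_mm1(lam=1.0, mu=2.0))
```

A single clock and two pending event times suffice here because there is only one server; more general queueing networks typically keep a priority queue (heap) of pending events.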
In this first part of the book we focus on some basic tools that we will need throughout the book. We start, in Chapter 1, with a review of some mathematical basics: series, limits, integrals, counting, and asymptotic notation. Rather than attempting an exhaustive coverage, we instead focus on a select “toolbox” of techniques and tricks that will come up over and over again in the exercises throughout the book. Thus, while none of this chapter deals with probability, it is worth taking the time to master its contents.
Until now we have only studied discrete random variables. These are defined by a probability mass function (p.m.f.). This chapter introduces continuous random variables, which are defined by a probability density function.
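In standard notation (the chapter's formal definitions may be phrased differently), a continuous random variable $X$ with density $f_X$ satisfies

$$\Pr\{a \le X \le b\} \;=\; \int_{a}^{b} f_X(x)\, dx, \qquad \int_{-\infty}^{\infty} f_X(x)\, dx \;=\; 1.$$

For instance, if $X \sim \mathrm{Exp}(\lambda)$, so that $f_X(x) = \lambda e^{-\lambda x}$ for $x \ge 0$, then $\Pr\{X > t\} = \int_{t}^{\infty} \lambda e^{-\lambda x}\, dx = e^{-\lambda t}$.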
The focus until now in the book has been on probability. We can think of probability as starting from a probabilistic model, or distribution, which governs an “experiment”; running the experiment generates samples, or events, from this distribution. One might ask questions about the probability of a certain event occurring under the known probabilistic model.
At this point in our discussion of discrete-time Markov chains (DTMCs) with a finite number of states, we have defined the notion of the limiting probability of being in a given state.
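In standard notation (the chapter's own notation may differ), if $\mathbf{P}$ is the transition probability matrix of the DTMC, the limiting probability of state $j$ is

$$\pi_j \;=\; \lim_{n \to \infty} \left(\mathbf{P}^{\,n}\right)_{ij},$$

whenever this limit exists and does not depend on the starting state $i$.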
In Chapter 18 we saw several powerful tail bounds, including the Chebyshev bound and the Chernoff bound. These are particularly useful when bounding the tail of a sum of independent random variables. We also reviewed the application of the Central Limit Theorem (CLT) to approximating the tail of a sum of independent and identically distributed (i.i.d.) random variables.
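For reference, the two bounds mentioned take the following standard forms (stated generically here; the chapter's statements may carry additional conditions). If $X$ has mean $\mu$ and variance $\sigma^2$, Chebyshev's inequality gives

$$\Pr\{|X - \mu| \ge a\} \;\le\; \frac{\sigma^2}{a^2} \qquad \text{for all } a > 0,$$

while the Chernoff bound optimizes an exponential Markov-type bound:

$$\Pr\{X \ge a\} \;\le\; \min_{t > 0}\, e^{-ta}\, \mathbf{E}\!\left[e^{tX}\right].$$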
This book assumes some mathematical skills. The reader should be comfortable with high school algebra, including logarithms. Basic calculus (integration, differentiation, limits, and series evaluation) is also assumed, including nested (3D) integrals and sums. We also assume that the reader is comfortable with sets and with simple combinatorics and counting (as covered in a discrete math class).
In Chapter 15, we focused on estimating the mean and variance of a distribution given observed samples. In this chapter and the next, we look at the more general question of statistical inference, where we estimate the parameter(s) of a distribution or some other quantity of interest. We will continue to use the notation for estimators given in Definition 15.1.
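As a small example of the kind of estimator meant (the specific conventions of Definition 15.1 are not reproduced here), if $X_1, \dots, X_n$ are i.i.d. samples from a distribution with unknown mean $\theta$, a natural estimator is the sample mean

$$\hat{\theta} \;=\; \frac{1}{n} \sum_{i=1}^{n} X_i,$$

which is unbiased: $\mathbf{E}\big[\hat{\theta}\big] = \theta$.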
In Chapter 4 we devoted a lot of time to computing the expectation of random variables. As we explained, the expectation is useful because it provides us with a single summary value when trading off different options. For example, in Example 4.1, we used the “expected earnings” in choosing between two startups.
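To illustrate with purely hypothetical numbers (not those of Example 4.1): suppose startup A pays $100{,}000$ for certain, while startup B pays $1{,}000{,}000$ with probability $0.2$ and nothing otherwise. Then

$$\mathbf{E}[\text{earnings at A}] = 100{,}000, \qquad \mathbf{E}[\text{earnings at B}] = 0.2 \cdot 1{,}000{,}000 = 200{,}000,$$

so by expected earnings alone B is preferable, even though it is riskier.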
We prove that for every tree $T$ of radius $h$, there is an integer $c$ such that every $T$-minor-free graph is contained in $H\boxtimes K_c$ for some graph $H$ with pathwidth at most $2h-1$. This is a qualitative strengthening of the Excluded Tree Minor Theorem of Robertson and Seymour (GM I). We show that radius is the right parameter to consider in this setting, and that $2h-1$ is the best possible bound.
This concise and self-contained introduction builds up the spectral theory of graphs from scratch, with linear algebra and the theory of polynomials developed in the later parts. The book focuses on properties and bounds for the eigenvalues of the adjacency, Laplacian and effective resistance matrices of a graph. The goal of the book is to collect spectral properties that may help to understand the behavior or main characteristics of real-world networks. The chapter on spectra of complex networks illustrates how the theory may be applied to deduce insights into real-world networks.
The second edition contains new chapters on topics in linear algebra and on the effective resistance matrix, and treats the pseudoinverse of the Laplacian. The latter two matrices and the Laplacian describe linear processes, such as the flow of current, on a graph. The concepts of spectral sparsification and graph neural networks are included.