The paper proves transportation inequalities for probability measures on spheres, for Wasserstein metrics with cost functions that are powers of the geodesic distance. Let $\mu$ be a probability measure on the sphere ${\bf S}^n$ of the form $d\mu = e^{-U(x)}\,{\rm d}x$, where ${\rm d}x$ is the rotation-invariant probability measure and $(n-1)I + \operatorname{Hess} U \geq \kappa_U I$ with $\kappa_U > 0$. Then any probability measure $\nu$ of finite relative entropy with respect to $\mu$ satisfies $\operatorname{Ent}(\nu \mid \mu) \geq (\kappa_U/2) W_2(\nu, \mu)^2$. The proof uses an explicit formula for the relative entropy which is also valid on connected and compact $C^\infty$ smooth Riemannian manifolds without boundary. A variation of this entropy formula gives the Lichnerowicz integral.
In this note, we prove two monotonicity formulas for solutions of $\Delta_H f = c$ and $\Delta_H f - \partial_t f = c$ in Carnot groups. Such formulas involve the right-invariant carré du champ of a function, and they fail for the left-invariant one. The main results, theorems 1.1 and 1.2, bear a resemblance to two deep monotonicity formulas due, respectively, to Alt–Caffarelli–Friedman for the standard Laplacian and to Caffarelli for the heat equation. In connection with this aspect, we ask whether an ‘almost monotonicity’ formula is possible. In the last section, we discuss the failure of the nondecreasing monotonicity of an Almgren-type functional.
We prove stronger variants of a multiplier theorem of Kislyakov. The key ingredients are based on ideas of Kislyakov and the Kahane–Salem–Zygmund inequality. As a by-product, we establish various multiplier theorems for spaces of trigonometric polynomials on the $n$-dimensional torus $\mathbb {T}^n$ or Boolean cubes $\{-1,1\}^N$. Our more abstract approach, based on local Banach space theory, has the advantage that it allows one to consider more general compact abelian groups instead of only the multidimensional torus. As an application, we show that various recent $\ell _1$-multiplier theorems for trigonometric polynomials in several variables, or for ordinary Dirichlet series, may be proved without the Kahane–Salem–Zygmund inequality.
We study the symplectic geometry of derived intersections of Lagrangian morphisms. In particular, we show that for a functional $f : X \rightarrow \mathbb {A}_{k}^{1}$, the derived critical locus has a natural Lagrangian fibration $\textbf {Crit}(f) \rightarrow X$. In the case where $f$ is nondegenerate and the strict critical locus is smooth, we show that the Lagrangian fibration on the derived critical locus is determined by the Hessian quadratic form.
In 2012, Andrews and Merca proved a truncated theorem on Euler's pentagonal number theorem. Motivated by the works of Andrews and Merca, Guo and Zeng deduced truncated versions for two other classical theta series identities of Gauss. Very recently, Xia et al. proved new truncated theorems of the three classical theta series identities by taking different truncated series than the ones chosen by Andrews–Merca and Guo–Zeng. In this paper, we provide a unified treatment to establish new truncated versions for the three identities of Euler and Gauss based on a Bailey pair due to Lovejoy. These new truncated identities imply the results proved by Andrews–Merca, Wang–Yee, and Xia–Yee–Zhao.
Beyond quantifying the amount of association between two variables, as was the goal in a previous chapter, regression analysis aims at describing that association and/or at predicting one of the variables based on the others. Examples of applications where this is needed abound in engineering and a broad range of industries. For example, in the insurance industry, when pricing a policy, the predictor variable encapsulates the available information about what is being insured, and the response variable is a measure of the risk that the insurance company would take on by underwriting the policy. In this context, a procedure is evaluated solely on its performance at predicting that risk, and can otherwise be very complicated and have no simple interpretation. The chapter covers both local methods such as kernel regression (e.g., local averaging) and empirical risk minimization over a parametric model (e.g., linear models fitted by least squares). Cross-validation is introduced as a method for estimating the prediction power of a given regression or classification method.
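To make the local-averaging idea concrete, here is a minimal Python sketch of a kernel regression (Nadaraya–Watson) estimator; the function name, the Gaussian kernel, and the toy data are illustrative choices of ours, not taken from the chapter.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson estimator: predict at each query point by a
    locally weighted average of the responses, with Gaussian weights."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Toy usage: recover a noisy sine curve
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, size=200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
x_grid = np.linspace(0, 2 * np.pi, 50)
y_hat = kernel_regression(x, y, x_grid, bandwidth=0.4)
```

The bandwidth controls the bias-variance trade-off, which is exactly the kind of tuning parameter that cross-validation is designed to select.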
Measurements are often numerical in nature, which naturally leads to distributions on the real line. We start our discussion of such distributions in the present chapter, and in the process introduce the concept of a random variable, which is really a device to facilitate the writing of probability statements and the corresponding computations. We introduce objects such as the distribution function, survival function, and quantile function, any of which characterizes the underlying distribution.
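As a small illustration (our own, using SciPy rather than anything from the chapter), these three functions are readily available for standard distributions and are consistent with one another:

```python
from scipy.stats import norm

X = norm(loc=0, scale=1)   # a standard normal distribution
p = X.cdf(1.96)            # distribution function: F(x) = P(X <= x)
s = X.sf(1.96)             # survival function: P(X > x) = 1 - F(x)
q = X.ppf(p)               # quantile function F^{-1}(p); recovers 1.96
```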
Some experiments lead to considering not one, but several measurements. As before, each measurement is represented by a random variable, and these are stacked into a random vector. For example, in the context of an experiment that consists of flipping a coin multiple times, we defined in a previous chapter one random variable per coin flip, each indicating the result of that flip. These are then concatenated to form a random vector, compactly describing the outcome of the entire experiment. Concepts such as conditional probability and independence are introduced.
We consider an experiment that yields, as data, a sample of independent and identically distributed (real-valued) random variables with a common distribution on the real line. The estimation of the underlying mean and median is discussed at length, and bootstrap confidence intervals are constructed. Tests comparing the underlying distribution to a given distribution (e.g., the standard normal distribution) or to a family of distributions (e.g., the normal family) are introduced. Censoring, which is very common in some clinical trials, is briefly discussed.
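Here is a minimal sketch of a percentile bootstrap confidence interval for the median, assuming Python with NumPy; the function name and the number of resamples are illustrative choices, not the chapter's.

```python
import numpy as np

def bootstrap_ci(sample, stat=np.median, level=0.95, B=10_000, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    n = sample.size
    # Resample with replacement and recompute the statistic B times
    boot = np.array([stat(rng.choice(sample, size=n, replace=True))
                     for _ in range(B)])
    alpha = 1 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

x = np.random.default_rng(1).exponential(size=100)
lo, hi = bootstrap_ci(x)  # 95% percentile interval for the median
```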
In this chapter we introduce some tools for sampling from a distribution. We also explain how to use computer simulations to approximate probabilities and, more generally, expectations, which can allow one to circumvent complicated mathematical derivations. The methods that are introduced include Monte Carlo sampling/integration, rejection sampling, and Markov Chain Monte Carlo sampling.
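As an illustration, here is a minimal Python sketch of rejection sampling combined with a Monte Carlo estimate; the Beta(2, 2) target, the uniform proposal, and the envelope constant M are our own choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(target_pdf, proposal_sampler, proposal_pdf, M, n):
    """Draw n samples from target_pdf via accept/reject; requires
    target_pdf(x) <= M * proposal_pdf(x) for all x."""
    out = []
    while len(out) < n:
        x = proposal_sampler()
        if rng.uniform() < target_pdf(x) / (M * proposal_pdf(x)):
            out.append(x)
    return np.array(out)

# Target: Beta(2, 2) density on [0, 1]; proposal: Uniform(0, 1).
# The density 6x(1-x) peaks at 1.5, so M = 1.5 is a valid envelope.
beta_pdf = lambda x: 6 * x * (1 - x)
samples = rejection_sample(beta_pdf, lambda: rng.uniform(),
                           lambda x: 1.0, M=1.5, n=5000)
# Monte Carlo: approximate E[X] = 1/2 by the empirical average
print(samples.mean())
```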
An expectation is simply a weighted mean, and means are at the core of Probability Theory and Statistics. In Statistics, in particular, such expectations are used to define parameters of interest. It turns out that an expectation can be approximated by an empirical average based on a sample from the distribution of interest, and the accuracy of this approximation can be quantified via what are known as concentration inequalities.
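For instance, a quick simulation (our own sketch, not from the text) compares the empirical frequency of large deviations of a sample mean against Hoeffding's inequality for variables taking values in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 1000, 0.05
# Bernoulli(0.3) variables lie in [0, 1], so Hoeffding's inequality
# gives P(|mean - 0.3| >= t) <= 2 * exp(-2 * n * t**2)
bound = 2 * np.exp(-2 * n * t**2)
deviations = [abs(rng.binomial(1, 0.3, n).mean() - 0.3) >= t
              for _ in range(10_000)]
print(np.mean(deviations), "<=", bound)  # observed frequency vs. bound
```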
An empirical average will converge, in some sense, to the corresponding expectation. This famous result, called the Law of Large Numbers, can be anticipated based on the concentration inequalities introduced in the previous chapter, but some appropriate notions of convergence for random variables need to be defined in order to make a rigorous statement. Beyond mere convergence, the fluctuations of an empirical average around the associated expectation can be characterized by the Central Limit Theorem, and are known to be Gaussian in some asymptotic sense. The chapter also discusses the limit of extremes such as the maximum of a sample.
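A short simulation sketch (ours, assuming Python with NumPy) of both phenomena for Exp(1) samples, which have mean and variance equal to 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Law of Large Numbers: running averages of Exp(1) samples approach 1
x = rng.exponential(size=100_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
print(running_mean[[99, 9_999, 99_999]])  # drifts toward the mean, 1.0

# Central Limit Theorem: standardized sample means look Gaussian
n, reps = 50, 10_000
means = rng.exponential(size=(reps, n)).mean(axis=1)
z = (means - 1.0) * np.sqrt(n)  # Exp(1) has mean 1 and variance 1
print(z.mean(), z.std())        # approximately 0 and 1
```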
Stochastic processes model experiments whose outcomes are collections of variables organized in some fashion. We focus here on Markov processes, which include random walks (think of the fortune of a person gambling on black/red at the roulette over time) and branching processes (think of the behavior of a population of an asexual species where each individual gives birth to a number of otherwise identical offspring according to a given probability distribution).
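A minimal simulation sketch of both kinds of process, assuming Python with NumPy; the fair ±1 step distribution and the Poisson offspring law are simplifying choices of ours (a real roulette bet would be slightly unfavorable to the gambler).

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk: cumulative fortune from 1000 fair +1/-1 bets
steps = rng.choice([-1, 1], size=1000)
fortune = np.cumsum(steps)

# Galton-Watson branching process: each individual independently
# leaves a Poisson(1.2)-distributed number of offspring
def branching(generations, mean_offspring=1.2):
    pop, sizes = 1, [1]
    for _ in range(generations):
        pop = rng.poisson(mean_offspring, size=pop).sum() if pop else 0
        sizes.append(pop)
    return sizes

print(branching(10))  # population sizes generation by generation
```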
In this chapter we consider distributions on the real line that have a discrete support. It is indeed common to count certain occurrences in an experiment, and the corresponding counts are invariably integer-valued. In fact, all the major distributions of this type are supported on the (non-negative) integers. We introduce the main ones here.