Because they are not needed earlier, conditional expectations do not appear until Chapter 5. The advantage gained by this postponement is that, by the time I introduce them, I have an ample supply of examples to which conditioning can be applied; the disadvantage is that, with considerable justice, many probabilists feel that one is not doing probability theory until one is conditioning. Be that as it may, Kolmogorov’s definition is given in §5.1 and is shown to extend naturally to both σ-finite measure spaces and random variables with values in a Banach space. Section 5.2 presents Doob’s basic theory of real-valued, discrete parameter martingales: Doob’s Inequality, his Stopping Time Theorem, and his Martingale Convergence Theorem. In the last part of §5.2, I introduce reversed martingales and apply them to DeFinetti’s theory of exchangeable random variables.
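As a brief aide-mémoire (notation mine, not the text's): for an integrable random variable X on a probability space and a sub-σ-algebra F, Kolmogorov's conditional expectation E[X | F] is the a.s. unique F-measurable, integrable random variable satisfying

```latex
\int_A \mathbb{E}[X \mid \mathcal{F}] \, d\mathbb{P}
  \;=\;
\int_A X \, d\mathbb{P}
\qquad \text{for every } A \in \mathcal{F}.
```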
The central topic here is the abstract theory of weak convergence of probability measures on a Polish space. The basic theory is developed in §9.1. In §9.2 I apply the theory to prove the existence of regular conditional probability distributions, and in §9.3 I use it to derive Donsker’s Invariance Principle (i.e., the pathspace statement of the Central Limit Theorem).
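For orientation (this is the standard definition, not a quotation from §9.1): probability measures μ_n on a Polish space E converge weakly to μ when

```latex
\mu_n \Longrightarrow \mu
\quad :\Longleftrightarrow \quad
\int_E f \, d\mu_n \;\longrightarrow\; \int_E f \, d\mu
\quad \text{for every bounded continuous } f : E \to \mathbb{R}.
```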
Chapter 4 reviews frequently used machine learning evaluation procedures. In particular, it presents popular evaluation metrics for binary and multi-class classification (e.g., accuracy, precision/recall, ROC analysis), regression analysis (e.g., mean squared error, root mean squared error, R-squared), and clustering (e.g., the Davies–Bouldin Index). It then reviews popular resampling approaches (e.g., holdout, cross-validation) and statistical tests (e.g., the t-test and the sign test). It concludes with an explanation of why it is important to go beyond these well-known methods in order to achieve reliable evaluation results in all cases.
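As an illustration only (not code from the book; the arrays are invented for the example), a minimal Python sketch computing a few of the metrics named above by hand:

```python
import numpy as np

# Hypothetical binary predictions and ground truth (not data from the book).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))      # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))      # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))      # false negatives

accuracy  = np.mean(y_pred == y_true)           # fraction of correct predictions
precision = tp / (tp + fp)                      # correctness of positive predictions
recall    = tp / (tp + fn)                      # coverage of actual positives

# Regression metrics on equally hypothetical continuous targets.
t = np.array([3.0, 1.5, 2.2, 4.1])
p = np.array([2.8, 1.9, 2.0, 3.7])
mse  = np.mean((t - p) ** 2)                    # mean squared error
rmse = np.sqrt(mse)                             # root mean squared error
r2   = 1 - np.sum((t - p) ** 2) / np.sum((t - t.mean()) ** 2)  # R-squared

print(accuracy, precision, recall, mse, rmse, r2)
```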
This introductory chapter, encyclopaedic in nature, covers the main aspects of catastrophe (CAT) risk from a qualitative perspective, offering an overview of what will be explored in quantitative terms in the subsequent chapters. It starts with the definition of the fundamental terms and concepts, such as peril, hazard, risk, uncertainty, probability, and CAT model. It then describes the historical development of catastrophe risk science, which was often influenced by the societal impact of some infamous catastrophes. The main periods are as follows: from ancient myths to medieval texts, mathematization (eighteenth and nineteenth centuries), and computerization (twentieth century). Finally, it provides an exhaustive list of perils categorized by their physical origin, including geophysical, hydrological, meteorological, climatological, biological, extraterrestrial, technological, and socio-economic perils. In total, 42 perils are covered, with historical examples and consequences for people and structures discussed for each of them.
Chapter 1 contains a sampling of the standard, point-wise convergence theorems dealing with partial sums of independent random variables. These include the Weak and Strong Laws of Large Numbers as well as Hartman–Wintner’s Law of the Iterated Logarithm. In preparation for the law of the iterated logarithm, Cramér’s theory of large deviations from the law of large numbers is developed in §1.3. Everything here is very standard, although I feel that my passage from the bounded to the general case of the law of the iterated logarithm has been considerably smoothed by the ideas that I learned in conversation with M. Ledoux.
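For reference (standard statements, in notation not taken from the text), the two limit theorems named above read, for i.i.d. random variables X_1, X_2, … with mean m, variance σ² < ∞, and partial sums S_n = X_1 + ⋯ + X_n:

```latex
\frac{S_n}{n} \;\longrightarrow\; m \quad \text{a.s.}
\qquad \text{(Strong Law of Large Numbers)},
\qquad
\limsup_{n \to \infty} \frac{S_n - n m}{\sqrt{2 n \log\log n}} \;=\; \sigma \quad \text{a.s.}
\qquad \text{(Hartman--Wintner Law of the Iterated Logarithm)}.
```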
Chapter 2 is devoted to the classical Central Limit Theorem. The initial presentation is based on Lindeberg’s non-Fourier techniques. This is followed by a derivation of the Berry–Esseen estimate based on ideas of C. Stein. Fourier techniques are introduced in §2.3, and in the final section the CLT is used to derive W. Beckner’s sharp Lᵖ-estimates for the Fourier transform.
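For reference (standard statements, notation mine): with i.i.d., mean-zero X_i having variance σ² and finite third moment, and S_n = X_1 + ⋯ + X_n, the two results discussed here assert

```latex
\frac{S_n}{\sigma\sqrt{n}} \;\Longrightarrow\; N(0,1)
\qquad\text{and}\qquad
\sup_{x\in\mathbb{R}}
\Bigl| \mathbb{P}\Bigl(\tfrac{S_n}{\sigma\sqrt{n}} \le x\Bigr) - \Phi(x) \Bigr|
\;\le\; \frac{C\,\mathbb{E}\,|X_1|^3}{\sigma^3 \sqrt{n}},
```

where Φ is the standard normal distribution function and C is an absolute constant (the Berry–Esseen estimate).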
Chapter 6 addresses the problem of error estimation and resampling in both a theoretical and practical manner. The holdout method is reviewed and cast into the bias/variance framework. Simple resampling approaches such as cross-validation are also reviewed and important variations such as stratified cross-validation and leave-one-out are introduced. Multiple resampling approaches such as bootstrapping, randomization, and multiple trials of simple resampling approaches are then introduced and discussed.
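A minimal sketch, assuming scikit-learn and its bundled iris data purely for illustration (none of this is the book's code), of stratified k-fold cross-validation and a single bootstrap resample with out-of-bag evaluation:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.utils import resample

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Stratified 5-fold cross-validation: each fold preserves the class proportions.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# One bootstrap resample: train on n instances drawn with replacement,
# evaluate on the instances that were never drawn ("out-of-bag").
idx = np.arange(len(y))
boot = resample(idx, replace=True, n_samples=len(idx), random_state=0)
oob = np.setdiff1d(idx, boot)
model.fit(X[boot], y[boot])
print("Bootstrap out-of-bag accuracy: %.3f" % model.score(X[oob], y[oob]))
```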
Chapter 2 reviews the principles of statistics that are necessary for the discussion of machine learning evaluation methods, especially the statistical analysis discussion of Chapter 7. In particular, it reviews the notions of random variables, distributions, confidence intervals, and hypothesis testing.
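Purely as an illustration (the accuracy figures below are invented), a short SciPy sketch of two of the reviewed notions, a 95% confidence interval for a mean and a paired t-test:

```python
import numpy as np
from scipy import stats

a = np.array([0.81, 0.79, 0.83, 0.80, 0.78, 0.82])   # hypothetical accuracies, method A
b = np.array([0.76, 0.77, 0.80, 0.75, 0.74, 0.79])   # hypothetical accuracies, method B

# 95% confidence interval for the mean of A, using the t distribution.
mean_a = a.mean()
sem_a = stats.sem(a)                                  # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(a) - 1, loc=mean_a, scale=sem_a)
print("mean=%.3f, 95%% CI=(%.3f, %.3f)" % (mean_a, lo, hi))

# Paired t-test: is the mean difference between A and B significantly non-zero?
t_stat, p_value = stats.ttest_rel(a, b)
print("t=%.3f, p=%.4f" % (t_stat, p_value))
```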
Part I - The Philosophy and Methodology of Experimentation in Sociology
Davide Barrera, Università degli Studi di Torino, Italy; Klarita Gërxhani, Vrije Universiteit, Amsterdam; Bernhard Kittel, Universität Wien, Austria; Luis Miller, Institute of Public Goods and Policies, Spanish National Research Council; Tobias Wolbring, School of Business, Economics and Society at the Friedrich-Alexander-University Erlangen-Nürnberg
The discipline of sociology focuses on interactions and group processes from the perspective of emergent phenomena at the social level. Concepts like social embedding, norms, group-level motivation, or status hierarchies can only be defined and conceptualized in contexts in which individuals are involved in social interaction. Such concepts share the property of being social facts that cannot be changed by individual intention alone and that require some element of individual adjustment to the socially given condition. Sociologists study the embeddedness of individual motivations or preferences in the context of social phenomena as such and the impact of these phenomena on individual adaptation. However, these phenomena can only be observed in individual human behavior, and this tension between the substantive focus on the aggregate level and the analytical focus on the individual level is the challenge that sociological experiments confront.
In the introduction, the field of experimental sociology is outlined and the core concepts of manipulation and control, as well as two crucial conditions of control, are introduced. The random allocation of participants to the treatment and the control group ensures that exogenous factors are distributed equally across these groups, which makes it possible to evaluate the effect of the manipulated condition. Incentivization helps operationalize behavioral assumptions in the experimental condition. The chapter then briefly elaborates on the topics of the following chapters.
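A minimal sketch, purely illustrative and not taken from the chapter, of random allocation of participants to a treatment and a control group so that unobserved factors are balanced in expectation:

```python
import numpy as np

rng = np.random.default_rng(seed=42)   # fixed seed only for reproducibility of the example
n_participants = 40                     # hypothetical sample size
ids = np.arange(n_participants)

shuffled = rng.permutation(ids)
treatment = shuffled[: n_participants // 2]   # first half -> treatment group
control = shuffled[n_participants // 2 :]     # second half -> control group

print("treatment:", sorted(treatment))
print("control:  ", sorted(control))
```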
Chapter 8 provides an introduction to Gaussian measures on a Banach space from the point of view that originated in the work of N. Wiener and was further developed by L. Gross and I. Segal. The underlying idea is that, even though it cannot fit there, the measure would like to live on the Hilbert space (the Cameron–Martin space) for which it would be the standard Gauss measure, and it is in that Hilbert space that its properties are encoded. A good deal of functional analysis is required to carry out this program, and the estimate that makes the program possible is X. Fernique’s remarkable exponential estimate. Included are derivations of M. Schilder’s large deviations theorem for Brownian motion and V. Strassen’s function space version of the law of the iterated logarithm, both of which confirm the importance of the Cameron–Martin space.
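For reference (the standard statement, not a quotation from the chapter): Fernique's exponential estimate asserts that for a centered Gaussian measure μ on a separable Banach space E there exists α > 0 with

```latex
\int_E e^{\alpha \|x\|_E^{2}} \, \mu(dx) \;<\; \infty .
```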