In this appendix, I will present some of the basic ideas of the mathematical theory of probability. As in the case of Appendix 1, this will not be a comprehensive or detailed survey; it is intended only to introduce the basic formal probability concepts and rules used in this book, and to clarify the terminology and notation employed here. I will discuss only the abstract and formal calculus of probability; the question of interpretation is addressed in Chapter 1.
A probability function, Pr, is any function (or rule of association) that assigns to (or associates with) each element X of some Boolean algebra B (see Appendix 1) a real number, Pr(X), in accordance with the following three conditions:
For all X and Y in B,
Pr(X) ≥ 0;
Pr(X) = 1, if X is a tautology (that is, if X is logically true, or X = 1 in B);
Pr(X∨Y) = Pr(X) + Pr(Y), if X&Y is a contradiction (that is, if X&Y is logically false, or X&Y = 0 in B).
These three conditions are the probability axioms, also called “the Kolmogorov axioms” (for Kolmogorov 1933). A function Pr that satisfies the axioms, relative to an algebra B, is said to be a probability function on B – that is, with “domain” B (that is, the set of propositions of B) and range the closed interval [0,1]. In what follows, reference to an assumed algebra B will be implicit.
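As a brief illustration of how these axioms are used, here is a routine derivation, in the notation above, of two standard consequences: the rule for negation and the upper bound of 1 that fixes the range of Pr.

```latex
% X \lor \neg X is a tautology and X \,\&\, \neg X is a contradiction, so the
% tautology and additivity conditions give
1 = \Pr(X \lor \neg X) = \Pr(X) + \Pr(\neg X),
\qquad\text{hence}\qquad
\Pr(\neg X) = 1 - \Pr(X).
% Applying the nonnegativity condition to \neg X then yields
\Pr(X) = 1 - \Pr(\neg X) \le 1,
% which, together with \Pr(X) \ge 0, is why the range of Pr is contained in [0,1].
```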
My interest in probabilistic causality arose naturally from my earlier interest, as a graduate student and after, in the area of the philosophical foundations of decision theory. I was especially interested in the decision-theoretical puzzle known as Newcomb's paradox and the idea of causal decision theory (Eells 1982). Causal decision theory was designed to accommodate the fact that the evidential, or “average” probabilistic, significance of one factor or event for another need not coincide with the causal significance of the first factor or event for the other – a fact vividly illustrated by the Newcomb problem. Causal decision theory involves ideas and techniques quite similar to the ideas and techniques involved in untangling and understanding the relations between probabilistic and causal significance in the theory of probabilistic causality.
Most of the recent philosophical literature in this area has seemed to concentrate on what I call here type-level probabilistic causation, though some authors have either noted or developed theories of what I call here token-level probabilistic causation. The first five chapters of this book are about type-level probabilistic causation. The last, very long, chapter is on token-level probabilistic causation. It is probably Chapter 6, which gives a new theory of token-level probabilistic causation, that contains the most novel proposals of this book.
In the past 30 years or so, philosophers have become increasingly interested in developing and understanding probabilistic conceptions of causality – conceptions of causality according to which causes need not necessitate their effects, but only, to put it very roughly, raise the probabilities of their effects. This philosophical project is of interest not only because the problem of the nature of causation is itself so central in philosophy, and not only because of the roles that causation and physical indeterminism play in current scientific theory. The theory of probabilistic causation also has applications to other philosophical problems, such as the nature of scientific explanation and the nature of probabilistic laws in a variety of sciences, as well as the character of rational decision. And the theory has applications in these areas whether or not determinism is assumed. In this book, however, very little is said about such applications. I focus on the theory of probabilistic causation itself.
In philosophy, the development of the probabilistic view of causality owes much to the work of I. J. Good (1961–2), Patrick Suppes (1970), Wesley Salmon (1971, 1978, 1984), and Nancy Cartwright (1979) (as well as others). In this book, I articulate and defend a conception of probabilistic causation that owes much to, but differs in important details from, the work of these and other authors. I also examine and appraise several alternatives to the ideas advanced here.
In Chapter 2, a spurious correlation of a factor Y with a factor X was characterized as a situation in which, because of separate causes of Y, the degree of correlation of Y with X is different from the degree of causal significance of X for Y. The possibility of a spurious correlation of Y with X was diagnosed as arising when there are factors Z that are correlated with X and that are positive, negative, or mixed causes of Y, independently of X, where the correlation in question may be unconditional or conditional on other such factors Z. But as noted in Chapter 2, not all cases in which X is correlated with separate causes of Y give rise to a spurious correlation: The separate causes must be causally independent of X. In Chapter 3, we saw that when a factor X interacts, with respect to a factor Y, with a factor Z that is causally independent of X, then we should say that X is causally mixed for Y. But as noted in Chapter 3, not all cases in which a factor X interacts with a factor Z in the production of a factor Y are cases of mixed causal relevance of X for Y. Again, Z must be causally independent of X.
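The role of a causally independent common cause can be made vivid with a small simulation. The following sketch (an illustration only, with arbitrarily chosen probability values) generates a background factor Z that causes both X and Y while X has no causal influence on Y; Y is then correlated with X unconditionally, but the correlation vanishes once Z is held fixed.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Draw n cases in which a background factor Z causes both X and Y,
    while X itself has no causal influence on Y."""
    rows = []
    for _ in range(n):
        z = random.random() < 0.5                   # background factor Z
        x = random.random() < (0.8 if z else 0.2)   # Z raises the probability of X
        y = random.random() < (0.7 if z else 0.1)   # Z raises the probability of Y, independently of X
        rows.append((x, y, z))
    return rows

def prob(rows, event, given=lambda r: True):
    """Relative frequency of `event` among the rows satisfying `given`."""
    sub = [r for r in rows if given(r)]
    return sum(1 for r in sub if event(r)) / len(sub)

rows = simulate()

# Unconditionally, Y is (spuriously) correlated with X ...
print(prob(rows, lambda r: r[1], lambda r: r[0]),       # roughly Pr(Y | X), about 0.58
      prob(rows, lambda r: r[1], lambda r: not r[0]))   # roughly Pr(Y | not-X), about 0.22

# ... but holding the common cause Z fixed, the correlation disappears.
print(prob(rows, lambda r: r[1], lambda r: r[0] and r[2]),       # about 0.7
      prob(rows, lambda r: r[1], lambda r: not r[0] and r[2]))   # about 0.7
```

Conditioning on not-Z instead of Z gives the same picture (both conditional frequencies near 0.1), which is the sense in which the unconditional correlation of Y with X here reflects a common cause rather than any causal significance of X for Y.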
As explained in the introduction, the relation I am calling “token causation” is a relation between two actually occurring, concrete, token events, while type-level causation relates abstract entities called “properties,” “types,” or “factors.” In the preceding chapters, I used upper case italicized letters to represent factors. Now we need to refer to token events, and I will use lower case italicized letters, x, y, z, and so on, for this purpose. As explained more fully below, the relation I wish to analyze in this chapter, in terms of probability relations, is roughly this (where x is of type X and y is of type Y): x's being of type X caused (atemporally) y's being of type Y. Another way of putting it is as follows. Where x takes place at time and place <tx,sx> and y takes place at time and place <ty,sy>, the relation I wish to analyze is this: things’ being X at <tx,sx> caused things to be Y at <ty,sy>.
The basic idea in the probabilistic theory of type-level causation was that causes raise the probability of their effects. We saw that this idea needed several qualifications. The possibilities of spurious correlation and causal interaction had to be accommodated, and it was necessary to build into the theory the requirement that causes precede their effects in time.
Necessary conditions are given for the Hermite–Fejér interpolation polynomials based at the zeros of orthogonal polynomials to converge in weighted Lp spaces at the Jackson rate. These conditions are known to be sufficient in the case of the generalized Jacobi polynomials.
Introduction
The first detailed study of weighted mean convergence of Hermite–Fejér interpolation based at the zeros of orthogonal polynomials was accomplished in [13] and [14], where it was shown that some of the most delicate problems associated with mean convergence of Hermite–Fejér interpolation can be approached through the general theory of orthogonal polynomials; in particular, a distinguished role is played by Christoffel functions. As opposed to Lagrange interpolation operators, Hermite–Fejér interpolation operators are not projectors, and thus in general the rate of convergence cannot be expected to equal the rate of the best approximation. Nevertheless, Jackson rates can be obtained.
Unaware of the general theory in [13] and [14] and of a variety of technical tools developed in [6], [9] and [10] (see [11] for a survey), A. K. Varma & J. Prasad in [22] investigated mean convergence of Hermite–Fejér interpolation in a particular case, namely in the case of interpolation based at the zeros of the Chebyshev polynomials. Subsequently, P. Vértesi & Y. Xu [23] wrote a paper dealing with the case of generalized Jacobi polynomials. However, their results left a significant gap between the necessary and the sufficient conditions.
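For readers unfamiliar with the Chebyshev case just mentioned, it admits a simple closed form: with the zeros x_k = cos((2k − 1)π/(2n)) of the Chebyshev polynomial T_n, the Hermite–Fejér interpolant of f is H_n(f, x) = Σ_k f(x_k)(1 − x·x_k)(T_n(x)/(n(x − x_k)))². The following minimal sketch evaluates this formula numerically; it is illustrative only (the test function and node counts are arbitrary choices) and does not reproduce the weighted mean-convergence analysis of the papers cited above.

```python
import math

def chebyshev_zeros(n):
    """Zeros of the Chebyshev polynomial T_n on [-1, 1]."""
    return [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]

def hermite_fejer(f, n, x):
    """Evaluate the Hermite-Fejer interpolant of f at the n Chebyshev zeros,
    using the classical closed form
        H_n(f, x) = sum_k f(x_k) * (1 - x*x_k) * (T_n(x) / (n*(x - x_k)))**2."""
    tn = math.cos(n * math.acos(max(-1.0, min(1.0, x))))   # T_n(x) for x in [-1, 1]
    total = 0.0
    for xk in chebyshev_zeros(n):
        if abs(x - xk) < 1e-14:
            return f(xk)                 # interpolation property at the nodes
        total += f(xk) * (1.0 - x * xk) * (tn / (n * (x - xk))) ** 2
    return total

# Uniform error on a grid for f(x) = |x|; the error decreases as n grows,
# though, as noted above, not in general at the rate of best approximation.
f = abs
for n in (8, 32, 128):
    grid = [t / 200.0 for t in range(-200, 201)]
    err = max(abs(hermite_fejer(f, n, x) - f(x)) for x in grid)
    print(n, err)
```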
Extending a theorem of Alon, we prove a conjecture of Katchalski that every graph of order n and minimal degree at least n/k > 1 contains a cycle of length at least n/(k - 1). The result is best possible for all values of n and k (2 ≤ k < n).
Introduction
A well-known result of Erdős and Gallai [4] states that, for n ≥ k ≥ 3, a graph of order n and size greater than ½(k - 1)(n - 1) has circumference at least k, that is, it contains a cycle of length at least k. According to Dirac's [3] classical theorem, every graph of order n ≥ 3 and minimal degree at least ½n is Hamiltonian. What can one say about the circumference of a graph of order n and minimal degree at least d ≥ 2? Recently Alon [1] came close to giving a complete answer to this question when he proved that for 2 ≤ k < n every graph of order n and minimal degree at least n/k has circumference at least [n/(k - 1)]. Our aim here is to improve on this slightly, namely to show that the assertion holds without the integer sign, as conjectured by Katchalski. Although this seems to need a surprising amount of work, we feel it is worth it since the new result is best possible for all values of n and k (2 ≤ k < n) and implies a complete answer to the question above concerning the minimal circumference of a graph of order n and minimal degree d ≥ 2.
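Since the statement is purely finite, it can be checked mechanically for very small orders. The following brute-force sketch illustrates the statement only, not the proof; the cutoff at order 6 is an arbitrary choice, and the search is exponential. It verifies that every graph of order n ≤ 6 with minimal degree at least n/k, for 2 ≤ k < n, has circumference at least n/(k − 1).

```python
from itertools import combinations, permutations

def circumference(n, edges):
    """Length of a longest cycle in the graph (0 if acyclic), by brute force."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for length in range(n, 2, -1):                      # try the longest lengths first
        for verts in combinations(range(n), length):
            for perm in permutations(verts[1:]):
                cycle = (verts[0],) + perm
                if all(cycle[(i + 1) % length] in adj[cycle[i]] for i in range(length)):
                    return length
    return 0

def minimal_degree(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return min(deg)

# Enumerate every graph of order 3 <= n <= 6 and check the bound for every
# admissible k.  (Brute force; order 6 already takes a noticeable while.)
for n in range(3, 7):
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        d = minimal_degree(n, edges)
        ks = [k for k in range(2, n) if d >= n / k]     # minimal degree >= n/k (> 1, since k < n)
        if ks:
            c = circumference(n, edges)
            for k in ks:
                assert c >= n / (k - 1), (n, k, edges)
    print("order", n, "checked")
```

For order 6 and k = 2 the check amounts to Dirac's theorem (minimal degree 3 forces a Hamilton cycle), while larger k require only short cycles; the content of the theorem, of course, lies in orders far beyond the reach of such a search.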
This volume is dedicated to Paul Erdős, who has profoundly influenced mathematics this century. He has worked in number theory, complex analysis, probability theory, geometry, interpolation theory, algebra, set theory and, perhaps above all, in combinatorics. His theorems and conjectures have had a decisive impact. In particular, he, more than anybody else, is the founder of modern combinatorics, he pioneered probabilistic number theory, he is the master of random methods in analysis and combinatorics, and he has created the fields of Ramsey theory and the partition calculus of set theory.
Paul Erdős is the consummate problem solver: his hallmark is the succinct and clever argument, often leading to a solution from ‘the book’. He loves areas of mathematics which do not require an excessive amount of technical knowledge but give scope for ingenuity and surprise. The mathematics of Paul Erdős is the mathematics of beauty and insight.
One of the most attractive ways in which Paul Erdős has influenced mathematics is through a host of stimulating problems and conjectures, to many of which he has attached money prizes, in accordance with their notoriety. He often says that he could not pay up if all his problems were solved at once, but neither could the strongest bank if all its customers withdrew their money at the same time. And the latter is far more likely.