This is the first paper in a series devoted to Green's and Dirichlet spaces. In subsequent publications we shall study the spaces associated with fine Markov processes and with a certain class of multiparameter processes.
For the Brownian motion with exponential killing, the Dirichlet space is Sobolev's space H1 and Green's space is the dual space H−1. Both spaces are widely used in the theory of the free field (arising in quantum field theory). General Dirichlet and Green's spaces can be applied in an analogous way to Gaussian random fields associated with Markov processes [2].
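As a concrete illustration of this example (a standard computation, not taken from the paper itself, with the killing rate set to 1 for definiteness): for Brownian motion on ℝd killed at an independent exponential time, the Dirichlet form is

    E(u, u) = ½ ∫ |∇u(x)|² dx + ∫ u(x)² dx,

whose domain, with this norm, is the Sobolev space H1(ℝd); the Green's space carries the dual norm

    ‖f‖²K = (f, (1 − ½Δ)⁻¹ f) = ∫∫ g(x, y) f(x) f(y) dx dy,

where g is the Green's function of 1 − ½Δ, so that K is identified (up to an equivalent norm) with H−1(ℝd).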
Axiomatic theory of Dirichlet spaces was developed by Beurling and Deny [1]. Silverstein [5] and Fukushima [3] investigated the relation between Dirichlet spaces and Markov processes.
We start from a symmetric Markov transition function and we deal simultaneously with a pair: the Dirichlet space H and Green's space K. They are in a natural duality and they play symmetric roles but, in some respects, K is simpler than H. We consider several models for K and H. In particular, we represent them by L-valued functions of time t where L is a functional Hilbert space. We get the conventional representation of H by passage to the limit as t → ∞. Analogously, letting t → 0, we arrive at a representation of K by distributions (generalized functions).
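A minimal sketch of how such norms can be written, assuming a symmetric transition semigroup Tt acting on L = L²(m) (the paper's own construction may differ in detail): the Dirichlet space H is governed by the energy form

    E(u, u) = lim t↓0 (1/t) (u − Tt u, u)L²(m),

while a natural squared norm for Green's space K is

    ‖f‖²K = ∫0∞ (Tt f, f)L²(m) dt = (Gf, f)L²(m),   G = ∫0∞ Tt dt,

the quadratic form of the Green operator. The two forms are dual in the sense that E(Gf, Gf) = ‖f‖²K whenever Gf lies in the domain of E, which is one way of making precise the symmetric roles of H and K.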
This paper provides a necessary and sufficient condition for a measure to be invariant for a Markov process. The condition is expressed in terms of the q-matrix assumed to generate the process.
Introduction
Let Q = (qij, i, j ∈ S) be a stable, conservative, regular and irreducible q-matrix over a countable state space S, and let P(t) = (Pij(t), i, j ∈ S) be the matrix of transition probabilities of the Markov process determined by Q. If (the Markov process determined by) Q is recurrent then the relations

    ∑i∈S mi qij = 0 (j ∈ S),   0 < mi < ∞ (i ∈ S)   (1)

have a solution m = (mi, i ∈ S), unique up to constant multiples. Call m an invariant measure for P(t) if

    ∑i∈S mi Pij(t) = mj for all j ∈ S and all t > 0.
When Q is positive recurrent it is known (Doob [5], Kendall and Reuter [13]) that a solution m to (1) is an invariant measure for P(t). This conclusion also holds when Q is null recurrent, but it may fail when Q is transient. When Q is transient the set of solutions to (1) may be empty or it may contain two or more linearly independent elements: we obtain a necessary and sufficient condition for a given element of the set to be an invariant measure for P(t).
The basic properties of Markov processes which will be needed are taken from Kendall [11] and are briefly stated in Section 2: they can also be found in [3], [6], [10], [12], [13] and [17]. Section 3 contains the main result of the present paper. Here it is shown that a solution to (1) is an invariant measure for P(t) if and only if a time-reversed q-matrix, defined in terms of m and Q, is regular. It is convenient to obtain the result assuming only that Q is stable and conservative, with P(t) the minimal (Feller) transition matrix determined by Q.
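As a concrete, if degenerate, illustration of the objects involved, the sketch below checks the classical positive-recurrent case numerically on a finite state space, where regularity is automatic. The reversed q-matrix is built by the standard recipe q̃ij = mj qji / mi, which need not coincide in notation with the definition used in Section 3; the code and all names in it are illustrative only.

    import numpy as np
    from scipy.linalg import expm, null_space

    # A small stable, conservative, irreducible q-matrix (off-diagonal entries
    # non-negative, rows summing to zero).
    Q = np.array([[-2.0,  1.0,  1.0],
                  [ 1.0, -3.0,  2.0],
                  [ 2.0,  1.0, -3.0]])

    # Solve the relations (1): m Q = 0 with m strictly positive.  For a finite
    # irreducible Q the solution is unique up to a constant multiple.
    m = null_space(Q.T)[:, 0]
    m = m / m.sum()                  # fixes the sign and the normalisation
    assert (m > 0).all()

    # Time-reversed q-matrix q~_ij = m_j q_ji / m_i (standard construction).
    Q_rev = (Q.T * m) / m[:, None]
    assert np.allclose(Q_rev.sum(axis=1), 0.0)   # m Q = 0 makes Q_rev conservative

    # On a finite state space Q is positive recurrent and Q_rev is trivially
    # regular, so m should be invariant for P(t) = exp(tQ): m P(t) = m.
    for t in (0.5, 1.0, 5.0):
        assert np.allclose(m @ expm(t * Q), m)

    print("m =", m, "is invariant for P(t)")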
Most of the papers compiled in this volume have been published in Uspekhi Matematicheskikh Nauk and translated into English in the Russian Mathematical Surveys. The core consists of the series [IV], [V], [VI], [VII] presenting a new approach to Markov processes (especially to the Martin boundary theory and the theory of duality) with the following distinctive features:
The general non-homogeneous theory precedes the homogeneous one. This is natural because non-homogeneous Markov processes are invariant with respect to all monotone transformations of the time scale – a property which is destroyed in the homogeneous case by the introduction of an additional structure: a one-parameter semi-group of shifts. In the homogeneous theory, the probabilistic picture is often obscured by the technique of Laplace transforms.
All the theory is invariant with respect to time reversal. We consider processes with random birth and death times and we use on equal terms the forward and backward transition probabilities, i.e., the conditional probability distributions of the future after t and of the past before t given the state at time t. (This is an alternative to introducing a pair of processes in duality defined on different sample spaces.)
ABSTRACT. Let H be a class of measures or functions. An element h of H is minimal if the relation h = h1 + h2, h1, h2 ∈ H implies that h1 and h2 are proportional to h. We give a limit procedure for computing minimal excessive measures for an arbitrary Markov semigroup Tt in a standard Borel space E. Analogous results for excessive functions are obtained assuming that an excessive measure γ on E exists such that Ttf = 0 if f = 0 γ-a.e. In the Appendix, we prove that each excessive element can be decomposed into minimal elements and that such a decomposition is unique.
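For orientation, the standard definitions of the excessive elements involved (stated in the notation of the abstract; the paper's own conventions may add measurability or finiteness requirements) are

    μTt ≤ μ for every t > 0                           (μ an excessive measure),
    Ttf ≤ f for every t > 0 and Ttf ↑ f as t ↓ 0      (f ≥ 0 an excessive function).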
Introduction.
In 1941 R. S. Martin [13] published a paper in which positive harmonic functions in a domain D of a Euclidean space were investigated. Let H stand for the class of all such functions subject to the condition f(a) < ∞, where a is a fixed point of D. Martin proved that:
(a) each element of H can be decomposed in a unique way into minimal elements normalized by the condition f(a) = 1;
(b) if the Green function of the Laplacian in D is known, then all minimal elements can be computed by a certain limit process.
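The limit process in (b) can be sketched as follows (standard Martin boundary notation, not taken from the paper itself): writing G for the Green function of the Laplacian in D, form the Martin kernel

    K(x, y) = G(x, y) / G(a, y),   x ∈ D, y ∈ D, y ≠ a,

normalized so that K(a, y) = 1. Every minimal element of H normalized by f(a) = 1 arises as a limit of K(·, yn) along a suitable sequence yn approaching the boundary of D (although not every such limit is minimal).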
J. L. Doob [2] discovered that the Martin decomposition of harmonic functions is closely related to the behaviour of Brownian paths at the first exit time from D. G. A. Hunt [9] showed that, using these relations, it is possible to obtain Martin's results by probabilistic considerations. Actually, only discrete Markov chains were treated in [1] and [5]; however, the methods are applicable to Brownian motion as well.
The intimate connection between Markov processes and problems in analysis has been apparent ever since the theory of the former began to develop. It is not without reason that A. N. Kolmogorov's paper [39] (Russian translation [38]) of 1931, which is of fundamental importance in this domain, was entitled “On analytical methods in probability theory”. The investigation of these connections also forms, to a large extent, the subject matter of A. Ya. Khinchin's book of 1933 on “Asymptotic laws of the theory of probability” [52] (Russian translation [51]).
In the fifties, and more particularly during the last five years, the theory of Markov processes entered a new period of intense growth. If previously the connections between probability theory and analysis were somewhat one-sided, with probability theory applying the results and methods of analysis, now the opposite tendency increasingly asserts itself, and probabilistic methods are applied to the solution of problems of analysis. Methods belonging to the theory of probability not only suggest a heuristic approach, but also, in many cases, yield rigorous proofs of analytic results. Applications of the methods of the theory of semigroups of linear operators have led to far-reaching advances in the classification of wide classes of Markov processes. New and deep connections between the theory of Markov processes and potential theory have been discovered. The foundations of the theory have been critically re-examined; the new concept of a strongly Markovian process has acquired crucial importance in the whole theory of Markov processes.
This article is concerned with the foundations of the theory of Markov processes. We introduce the concepts of a regular Markov process and of a class of such processes. We show that regular processes possess a number of good properties (the strong Markov character, right continuity of excessive functions along almost all trajectories, and so on). A class of regular Markov processes is constructed by means of an arbitrary transition function (a regular reconstruction of the canonical class). We also prove a uniqueness theorem.
We diverge from tradition in three respects:
a) we investigate processes on an arbitrary random time interval;
b) all definitions and results are formulated in terms of measurable structures without the use of topology (except for the topology of the real line);
c) our main objects of study are non-homogeneous processes (homogeneous ones are discussed as an important special case).
In consequence of a), the theory is highly symmetrical: there is no longer any disparity between the birth time α of the process, which is usually taken to be fixed, and the death time β, which is considered random.
Principle b) does not prevent us from introducing, when necessary, various topologies in the state space (as systems of coordinates are introduced in geometry). However, it is required that the final statements should be invariant with respect to the choice of such a topology.
Finally, the main gain from c) is simplification of the theory: discarding the “burden of homogeneity” we can use constructions which, generally speaking, destroy this homogeneity.
Similar questions have been considered (for the homogeneous case) by Knight [8], Doob [2], [3] and other authors.
A great deal of research into the theory of random processes is concerned with the problem of constructing a process that has certain regularity properties of its trajectories and the same finite-dimensional probability distributions as a given stochastic process xt. It is a complicated theory, and one that is difficult to apply to those properties that we most need for the study of Markov processes (the strong Markov property, quasi-left-continuity, and the like).
The problem can be usefully reformulated. In an actual experiment we do not observe the state xt at a fixed instant t, but rather events that occupy certain time intervals. This is the motivation behind the Gel'fand-Itô theory of generalized random processes. Kolmogorov, in 1972, proposed an even more general concept of a stochastic process as a system of σ-algebras ℱ(I) labelled by time intervals I. Developing this approach, we introduce the concept of a Markov representation xt of the stochastic system ℱ(I) and prove the existence of regular representations. We construct two dual regular representations (the right and the left), which we then combine into a single Markov process by two methods, the “vertical” and the “horizontal”. We arrive at a general duality theory, which provides a natural framework for the fundamental results on entrance and exit spaces, excessive measures and functions, additive functionals, and others. The initial steps in the construction of this theory were taken in [6]. The note [5] deals with applications to additive functionals (detailed proofs are in preparation). We consider random processes defined in measurable spaces without any topology: the introduction of a reasonable topology involves a certain arbitrariness.
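A rough formalization of the objects just described, paraphrased from the standard setting rather than from the paper itself (the paper's own definitions may differ): a stochastic system assigns to each time interval I a σ-algebra ℱ(I) monotonically,

    ℱ(I) ⊆ ℱ(J) whenever I ⊆ J,

and a process xt is a Markov representation of the system if, for every t, the σ-algebras ℱ(I) for intervals I lying to the left of t and ℱ(J) for intervals J lying to the right of t are conditionally independent given xt.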