Let θ(a) be the first time at which the range (Rn; n ≧ 0) equals a, where Rn is the difference between the maximum and the minimum, taken at time n, of a simple random walk on ℤ. We compute the generating function of θ(a); this allows us to compute the distributions of θ(a) and Rn. We also investigate the asymptotic behaviour of θ(n) as n goes to infinity.
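Although the distribution of θ(a) is derived analytically via its generating function, a minimal simulation sketch (not from the paper; the seed and sample size are illustrative) can be used to check the results numerically:

```python
import random

def theta(a, rng, max_steps=10**6):
    """First time n at which the range (max - min) of a simple random walk equals a."""
    pos = mx = mn = 0
    for n in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        mx = max(mx, pos)
        mn = min(mn, pos)
        if mx - mn == a:
            return n
    raise RuntimeError("range did not reach a within max_steps")

# Monte Carlo estimate of E[theta(a)] for a = 3
rng = random.Random(42)
samples = [theta(3, rng) for _ in range(2000)]
print(sum(samples) / len(samples))
```

Since the range increases by at most one per step, θ(a) ≥ a always, and θ(1) = 1 deterministically; both serve as quick sanity checks on the simulation.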
Kallenberg [2] introduced the concept of F-exchangeable sequences of random variables and produced some characterizations of F-exchangeability in terms of stopping times. In this paper, ways of extending the concept of F-exchangeability to doubly indexed arrays of random variables are explored, and some characterizations are obtained for row and column exchangeable arrays, weakly exchangeable arrays and separately exchangeable continuous processes.
We study a classical stochastic control problem arising in financial economics: to maximize expected logarithmic utility from terminal wealth and/or consumption. The novel feature of our work is that the portfolio is allowed to anticipate the future, i.e. the terminal values of the prices, or of the driving Brownian motion, are known to the investor, either exactly or with some uncertainty. Results on the finiteness of the value of the control problem are obtained in various setups, using techniques from the so-called enlargement of filtrations. When the value of the problem is finite, we compute it explicitly and exhibit an optimal portfolio in closed form.
We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: the optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limit diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover they also allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information and the financial cost of learning in the Bayesian problem.
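The state-dependent Kelly-type strategy described above can be sketched for the simplest discrete case: even-money bets with an unknown success probability p under a beta prior, betting the fraction max(0, 2·E[p | data] − 1) of current wealth. This is an illustrative simulation only (parameter values and seed are assumptions, not from the paper):

```python
import random

def bayesian_kelly(alpha, beta, n_rounds, p_true, rng):
    """Even-money betting with unknown success probability p.
    Bet the fraction max(0, 2*E[p | data] - 1) of current wealth,
    where E[p | data] is the posterior mean under a Beta(alpha, beta) prior."""
    wins = losses = 0
    wealth = 1.0
    for _ in range(n_rounds):
        p_hat = (alpha + wins) / (alpha + beta + wins + losses)  # posterior mean
        f = max(0.0, 2.0 * p_hat - 1.0)  # do not bet while p_hat <= 1/2
        if rng.random() < p_true:
            wealth *= 1.0 + f
            wins += 1
        else:
            wealth *= 1.0 - f
            losses += 1
    return wealth

rng = random.Random(1)
w = bayesian_kelly(alpha=1.0, beta=1.0, n_rounds=500, p_true=0.6, rng=rng)
print(w)
```

Because the posterior mean is strictly less than 1 after finitely many observations, the bet fraction stays below 1 and wealth remains positive, in line with the log-utility objective.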
We establish stability, monotonicity, concavity and subadditivity properties for open stochastic storage networks in which the driving process has stationary increments. A principal example is a stochastic fluid network in which the external inputs are random but all internal flows are deterministic. For the general model, the multi-dimensional content process is tight under the natural stability condition. The multi-dimensional content process is also stochastically increasing when the process starts at the origin, implying convergence to a proper limit under the natural stability condition. In addition, the content process is monotone in its initial conditions. Hence, when any content process with non-zero initial conditions hits the origin, it couples with the content process starting at the origin. However, in general, a tight content process need not hit the origin.
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
In 1979, Melamed proved that, in an open migration process, the absence of ‘loops' is necessary and sufficient for the equilibrium flow along a link to be a Poisson process. In this paper, we prove approximation theorems with the same flavour: the difference between the equilibrium flow along a link and a Poisson process with the same rate is bounded in terms of expected numbers of loops. The proofs are based on Stein's method, as adapted for bounds on the distance of the distribution of a point process from a Poisson process in Barbour and Brown (1992b). Three different distances are considered, and illustrated with an example consisting of a system of tandem queues with feedback. The upper bound on the total variation distance of the process grows linearly with time, and a lower bound shows that this can be the correct order of approximation.
The paper introduces an approach to modelling the dynamics of financial markets. It is based on three principles: market clearing, exclusion of instantaneous arbitrage and minimization of the increase of arbitrage information. The last principle is equivalent to minimizing the difference between the risk-neutral and the real-world probability measures. Applying these principles allows us to identify various market parameters, e.g. the risk-free rate of return. The approach is demonstrated on a simple financial market model, for which the dynamics of a virtual risk-free rate of return can be explicitly computed.
The so-called ‘Swiss Army formula', derived by Brémaud, seems to be a general purpose relation which includes all known relations of Palm calculus for stationary stochastic systems driven by point processes. The purpose of this article is to present a short, and rather intuitive, proof of the formula. The proof is based on the Ryll–Nardzewski definition of the Palm probability as a Radon-Nikodym derivative, which, in a stationary context, is equivalent to the Mecke definition.
As well as having complete knowledge of the future, a superprophet can also alter the order of observation as it is presented to a player without foresight, whose strategy is known to the prophet. It is shown that a superprophet can only do twice as well as his counterpart, if the underlying random sequence is independent.
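The factor-two bound can be illustrated numerically in the classical (non-reordering) i.i.d. setting, comparing the prophet's value E[max] with the optimal-stopping value of the player without foresight. This sketch assumes Uniform(0, 1) observations, a choice made only for tractability:

```python
def player_value_iid_uniform(n):
    """Optimal-stopping value for n i.i.d. Uniform(0, 1) observations,
    via the backward recursion v_{k+1} = E[max(U, v_k)] = (1 + v_k**2) / 2."""
    v = 0.0
    for _ in range(n):
        v = (1.0 + v * v) / 2.0
    return v

n = 20
prophet = n / (n + 1.0)                # E[max of n uniforms]
player = player_value_iid_uniform(n)
ratio = prophet / player
print(ratio)                           # the prophet inequality bounds this by 2
```

For uniforms the ratio is in fact far below 2; the interest of the result above is that even the stronger superprophet, who may also reorder the sequence, cannot exceed the same factor.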
Consider the optimal control problem of leaving an interval (−a, a) in a limited playing time. In the discrete-time problem, a is a positive integer and the player's position is given by a simple random walk on the integers with initial position x. At each time instant, the player chooses a coin from a control set where the probability of returning heads depends on the current position and the remaining amount of playing time, and the player is betting a unit value on the toss of the coin: heads returning +1 and tails −1. We discuss the optimal strategy for this discrete-time game. In the continuous-time problem the player chooses infinitesimal mean and infinitesimal variance parameters from a control set which may depend upon the player's position. The problem is to find optimal mean and variance parameters that maximize the probability of leaving the interval [−a, a] within a finite time T > 0.
The paper is concerned with the distribution of the level N of the first crossing of a counting process trajectory with a lower boundary. Compound and simple Poisson or binomial processes, gamma renewal processes, and finally birth processes are considered. In the simple Poisson case, expressing the exact distribution of N requires the use of a classical family of Abel–Gontcharoff polynomials. For other cases convenient extensions of these polynomials into pseudopolynomials with a similar structure are necessary. Such extensions being applicable to other fields of applied probability, the central part of the present paper has been devoted to the building of these pseudopolynomials in a rather general framework.
n candidates, represented by n i.i.d. continuous random variables X1, …, Xn with known distribution, arrive sequentially, and one of them must be chosen, using a non-anticipating stopping rule. The objective is to minimize the expected rank (among the ranks of X1, …, Xn) of the candidate chosen, where the best candidate, i.e. the one with the smallest X-value, has rank one, etc. Let the value of the optimal rule be Vn, and lim Vn = V. We prove that V > 1.85. Limiting consideration to the class of threshold rules of the form tn = min {k: Xk ≦ ak} for some constants ak, let Wn be the value of the expected rank for the optimal threshold rule, and lim Wn = W. We show 2.295 < W < 2.327.
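The expected rank achieved by any given threshold sequence can be estimated by direct simulation. The sketch below uses a deliberately simple, illustrative choice of the constants ak (not the optimal sequence analysed above) purely to show the machinery:

```python
import random

def expected_rank(n, thresholds, trials, rng):
    """Monte Carlo estimate of the expected rank of the candidate selected by
    the threshold rule t = min{k : X_k <= a_k} (taking X_n if no X_k qualifies)."""
    total = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        chosen = xs[-1]
        for x, a in zip(xs, thresholds):
            if x <= a:
                chosen = x
                break
        total += 1 + sum(x < chosen for x in xs)  # rank: smallest X has rank 1
    return total / trials

n = 50
rng = random.Random(7)
a = [(k + 1) / n for k in range(n)]  # illustrative thresholds, not the optimal a_k
er = expected_rank(n, a, trials=20000, rng=rng)
print(er)
```

Comparing such estimates across threshold sequences gives a numerical feel for the gap between V and W established in the abstract.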
A unified way of obtaining stationary time series models with the univariate margins in the convolution-closed infinitely divisible class is presented. Special cases include gamma, inverse Gaussian, Poisson, negative binomial, and generalized Poisson margins. ARMA time series models obtain in the special case of normal margins, sometimes in a different stochastic representation. For the gamma and Poisson margins, some previously defined time series models are included, but for the negative binomial margin, the time series models are different and, in several ways, better than previously defined time series models. The models are related to multivariate distributions that extend a univariate distribution in the convolution-closed infinitely divisible class. Extensions to the non-stationary case and possible applications to modelling longitudinal data are mentioned.
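For the Poisson margin, this convolution-closed construction reduces to the familiar binomial-thinning (INAR(1)-type) scheme: X_t = ρ ∘ X_{t−1} + ε_t with ε_t ~ Poisson((1 − ρ)λ) preserves the Poisson(λ) margin. A minimal sketch, with illustrative parameter values and a simple sampler:

```python
import math
import random

def binomial_thin(x, rho, rng):
    """rho ∘ x: keep each of x units independently with probability rho."""
    return sum(rng.random() < rho for _ in range(x))

def poisson(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler (adequate for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def inar1_path(lam, rho, n, rng):
    """Count series X_t = rho ∘ X_{t-1} + eps_t, eps_t ~ Poisson((1 - rho) * lam);
    thinning plus Poisson innovations keeps the stationary Poisson(lam) margin."""
    x = poisson(lam, rng)  # start in the stationary distribution
    path = [x]
    for _ in range(n - 1):
        x = binomial_thin(x, rho, rng) + poisson((1.0 - rho) * lam, rng)
        path.append(x)
    return path

rng = random.Random(3)
path = inar1_path(lam=4.0, rho=0.6, n=10000, rng=rng)
print(sum(path) / len(path))  # sample mean; should be near lam = 4
```

The lag-one autocorrelation of this model is ρ, the direct count analogue of a Gaussian AR(1), which is the sense in which ARMA models arise as the normal-margin special case.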
In his recent monograph on Poisson processes, Kingman generalized Rényi's characterization of Poisson processes and also suggested a characteristic functional approach. A direct proof is given along this line.
Denote by A(x) = {a: |aᵀx| ≦ h} a circle zone on the three-dimensional sphere surface for each given h > 0. For a given integer m, we investigate how many zones chosen randomly are needed to contain at least one of the points on the sphere surface m times. As an application, the lifetime of a sphere roller is investigated. We present empirical formulas for the mean, standard deviation and distribution of the lifetime of the sphere roller. Furthermore, some limit behaviors of the above stopping time are obtained, such as the limit distribution, the law of the iterated logarithm, and the upper and lower bounds of the tail probability with the same convergent order.
Integration with respect to the fractional Brownian motion Z with Hurst parameter is discussed. The predictor is represented as an integral with respect to Z, solving a weakly singular integral equation for the prediction weight function.
We analyse the queue Q_L at a multiplexer with L sources which may display long-range dependence. This includes, for example, sources modelled by fractional Brownian motion (FBM). The workload processes W due to each source are assumed to have large deviation properties of the form P[W_t/a(t) > x] ≈ exp[−v(t)K(x)] for appropriate scaling functions a and v, and rate-function K. Under very general conditions lim_{L→∞} L^{−1} log P[Q_L > Lb] = −I(b), provided the offered load is held constant, where the shape function I is expressed in terms of the cumulant generating functions of the input traffic. For power-law scalings v(t) = t^v, a(t) = t^a (such as occur in FBM) we analyse the asymptotics of the shape function lim_{b→∞} b^{−u/a}(I(b) − δb^{v/a}) = ν_u for some exponent u and constant ν depending on the sources. This demonstrates the economies of scale available through the multiplexing of a large number of such sources, by comparison with a simple approximation P[Q_L > Lb] ≈ exp[−δLb^{v/a}] based on the asymptotic decay rate δ alone. We apply this formula to Gaussian processes, in particular FBM, both alone, and also perturbed by an Ornstein–Uhlenbeck process. This demonstrates a richer potential structure than occurs for sources with linear large deviation scalings.
We discuss the limits of point processes generated by a triangular array of rare events. Such point processes are motivated by the exceedances of a high boundary by a random sequence, since exceedances are rare events in this case. This application relates the problem to extreme value theory, from which methods are borrowed to treat the asymptotic approximation of these point processes. The general approach presented here extends, unifies and clarifies some of the various conditions used in extreme value theory.
This paper provides a direct approach to obtaining formulas for derivatives of functionals of point processes in rare perturbation analysis ([2], [6]). Results are obtained for arbitrary (not necessarily stationary) point processes in ℝ and ℝ^d, d ≥ 2, under transparent conditions, close to minimal. Formulas for higher-order derivatives allow one to construct asymptotic expansions. The results can be useful in sensitivity analysis, in light traffic theory for queues, and for computation by simulation of derivatives at positive intensity, whereas computing the derivatives via statistical estimation of the functional itself and its increments usually gives poor results.