For τ, a stopping rule adapted to a sequence of n independent and identically distributed observations, we define the loss to be E[q(Rτ)], where Rj is the rank of the jth observation and q is a nondecreasing function of the rank. This setting covers both the best-choice problem, with q(r) = 1(r > 1), and Robbins' problem, with q(r) = r. As n tends to ∞, the stopping problem acquires a limiting form associated with the planar Poisson process. Inspecting the limit, we establish bounds on the stopping value and reveal qualitative features of the optimal rule. In particular, we show that the complete history dependence persists in the limit, thus answering a question asked by Bruss (2005) in the context of Robbins' problem.
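As a concrete illustration of the two loss functions above, the following Monte Carlo sketch estimates E[q(Rτ)] for a simple cutoff rule (the classical 1/e rule). The rule, the sample size, and the trial count are illustrative assumptions; this is an easy-to-state stopping rule, not the optimal rule for either problem.

```python
import random

def simulate_loss(n, trials=20000, seed=0):
    """Estimate E[q(R_tau)] for the 1/e cutoff rule: observe the
    first n/e values, then stop at the first value smaller than all
    of them (stop at the last observation if none appears).
    q(r) = 1(r > 1) gives the best-choice loss; q(r) = r gives
    Robbins' loss.  Illustrative rule only, not the optimal one."""
    rng = random.Random(seed)
    cutoff = int(n / 2.718281828459045)
    loss_best, loss_robbins = 0.0, 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        threshold = min(xs[:cutoff])  # best value in the learning phase
        stop = n - 1                  # forced stop at the end by default
        for j in range(cutoff, n):
            if xs[j] < threshold:
                stop = j
                break
        # overall rank of the chosen observation among all n values
        rank = 1 + sum(x < xs[stop] for x in xs)
        loss_best += (rank > 1)
        loss_robbins += rank
    return loss_best / trials, loss_robbins / trials
```

For moderate n the best-choice loss lands near 1 − 1/e ≈ 0.63 (the rule picks the overall best with probability about 1/e), while the Robbins loss grows with n because a forced stop at the end yields a uniformly distributed rank.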
In this paper we introduce an exponential continuous-time GARCH(p, q) process. It is defined in such a way that it is a continuous-time extension of the discrete-time EGARCH(p, q) process. We investigate stationarity, mixing, and moment properties of the new model. An instantaneous leverage effect can be shown for the exponential continuous-time GARCH(p, p) model.
In this paper we develop some results presented by Gani (2004), deriving moments for random allocation processes. These moments correspond to the allocation processes reaching some domain boundary. Exact formulae for means, variances, and probability generating functions as well as some asymptotic formulae for moments of random allocation processes are obtained. A special choice of the asymptotics and of the domain allows us to reduce a complicated numerical procedure to a simple asymptotic one.
We consider a critical discrete-time branching process with generation dependent immigration. For the case in which the mean number of immigrating individuals tends to ∞ with the generation number, we prove functional limit theorems for centered and normalized processes. The limiting processes are deterministically time-changed Wiener, with three different covariance functions depending on the behavior of the mean and variance of the number of immigrants. As an application, we prove that the conditional least-squares estimator of the offspring mean is asymptotically normal, which demonstrates an alternative case of normality of the estimator for the process with nondegenerate offspring distribution. The norming factor depends on α(n), the mean number of immigrating individuals in the nth generation.
We prove that the Bartlett spectrum of a stationary, infinitely divisible (ID) random measure determines ergodicity, weak mixing, and mixing. In this context, the Bartlett spectrum plays the same role as the spectral measure of a stationary Gaussian process.
We construct a process with gamma increments, which has a given convex autocorrelation function and asymptotically a self-similar limit. This construction validates the use of long-range dependent t and variance-gamma subordinator models for actual financial data as advocated in Heyde and Leonenko (2005) and Finlay and Seneta (2006), in that it allows for noninteger-valued model parameters to occur as found empirically by data fitting.
For a spectrally negative Lévy process X on the real line, let S denote its supremum process and let I denote its infimum process. For a > 0, let τ(a) and κ(a) denote the times when the reflected processes Ŷ := S − X and Y := X − I first exit level a, respectively; let τ−(a) and κ−(a) denote the times when X first reaches Sτ(a) and Iκ(a), respectively. The main results of this paper concern the distributions of (τ(a), Sτ(a), τ−(a), Ŷτ(a)) and of (κ(a), Iκ(a), κ−(a)). They generalize some recent results on spectrally negative Lévy processes. Our approach relies on results concerning the solution to the two-sided exit problem for X. Such an approach is also adapted to study the excursions for the reflected processes. More explicit expressions are obtained when X is either a Brownian motion with drift or a completely asymmetric stable process.
Consider a financial market in which an agent trades with utility-induced restrictions on wealth. For a utility function which satisfies the condition of reasonable asymptotic elasticity at -∞, we prove that the utility-based superreplication price of an unbounded (but sufficiently integrable) contingent claim is equal to the supremum of its discounted expectations under pricing measures with finite loss-entropy. For an agent whose utility function is unbounded from above, the set of pricing measures with finite loss-entropy can be slightly larger than the set of pricing measures with finite entropy. Indeed, the former set is the closure of the latter under a suitable weak topology. Central to our proof is the duality between the cone of utility-based superreplicable contingent claims and the cone generated by pricing measures with finite loss-entropy.
We consider the generalized version in continuous time of the parking problem of Knuth introduced in Bansaye (2006). Files arrive following a Poisson point process and are stored on a hardware identified with the real line, to the right of their arrival points. Here we study the evolution of the endpoints of the data block straddling 0, which is empty at time 0 and is equal to the whole real line at a deterministic time.
A stack is a structural unit in an RNA structure that is formed by pairs of hydrogen-bonded nucleotides. Paired nucleotides are scored according to their ability to hydrogen bond. We consider stack/hairpin-loop structures for a sequence of independent and identically distributed random variables with values in a finite alphabet, and we show how to obtain an asymptotic Poisson distribution of the number of stack/hairpin-loop structures with a score exceeding a high threshold, given that we count in a proper, declumped way. From this result we obtain an asymptotic Gumbel distribution of the maximal stack score. We also provide examples focusing on the computation of constants that enter in the asymptotic distributions. Finally, we discuss the close relation to existing results for local alignment.
The transmission control protocol (TCP) is a transport protocol used in the Internet. In Ott (2005), a more general class of candidate transport protocols called ‘protocols in the TCP paradigm’ was introduced. The long-term objective of studying this class is to find protocols with promising performance characteristics. In this paper we study Markov chain models derived from protocols in the TCP paradigm. Protocols in the TCP paradigm, like TCP, protect the network from congestion by decreasing the ‘congestion window’ (i.e. the amount of data allowed to be sent but not yet acknowledged) when there is packet loss or packet marking, and increasing it when there is no loss. When losses of different packets are assumed to be independent events and the probability p of loss is assumed to be constant, the protocol gives rise to a Markov chain {Wn}, where Wn is the size of the congestion window after the transmission of the nth packet. For a wide class of such Markov chains, we prove weak convergence results, after appropriate rescaling of time and space, as p → 0. The limiting processes are defined by stochastic differential equations. Depending on certain parameter values, the stochastic differential equation can define an Ornstein-Uhlenbeck process or can be driven by a Poisson process.
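A minimal sketch of one such Markov chain {Wn} can be written down by assuming the classical additive-increase/multiplicative-decrease update (grow by a/W per acknowledged packet, halve on loss). The specific update rule and parameters here are illustrative assumptions, one simple member of the paradigm rather than the general class studied in the paper.

```python
import random

def simulate_window(p, n_packets=200000, a=1.0, seed=1):
    """Simulate an AIMD congestion window: after each packet, with
    probability p the packet is lost and the window is halved;
    otherwise the window grows by a/W (roughly one segment per
    round trip).  Returns the time-averaged window size."""
    rng = random.Random(seed)
    w, total = 1.0, 0.0
    for _ in range(n_packets):
        if rng.random() < p:
            w = max(w / 2.0, 1.0)  # multiplicative decrease on loss
        else:
            w += a / w             # additive increase per acknowledgement
        total += w
    return total / n_packets
```

Consistent with the p → 0 rescaling in the abstract, the average window for this chain scales like a constant times 1/√p, so quartering the loss probability roughly doubles the mean window.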
A stochastic model of a dynamic marker array, in which markers can disappear, duplicate, and move relative to their original positions, is constructed to reflect the nature of long DNA sequences. The sequence changes of deletions, duplications, and displacements follow these stochastic rules: (i) the original distribution of the marker array {…, X−2, X−1, X0, X1, X2, …} is a Poisson process on the real line; (ii) each marker is replicated l times, with replication or loss of marker points occurring independently; (iii) each replicated point is independently and randomly displaced by an amount Y relative to its original position, with the Y displacements sampled from a continuous density g(y). Limiting distributions for the maximal and minimal statistics of the r-scan lengths (the collection of distances between r + 1 successive markers) for the l-shift model are derived with the aid of the Chen-Stein method and properties of Poisson processes.
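The extremal r-scan statistics themselves are easy to compute for a sample path. The sketch below generates a homogeneous Poisson marker array on a finite window and returns the minimal and maximal r-scan lengths; it covers only the undisturbed array (rule (i)), with the l-shift dynamics of rules (ii)-(iii) omitted, and the rate and window length are arbitrary illustrative choices.

```python
import random

def r_scan_extremes(rate, length, r, seed=3):
    """Sample a homogeneous Poisson process of the given rate on
    [0, length] and return the minimal and maximal r-scan lengths,
    i.e. the smallest and largest distances between markers i and
    i + r (a span of r + 1 successive markers)."""
    rng = random.Random(seed)
    t, points = 0.0, []
    while True:
        t += rng.expovariate(rate)  # i.i.d. exponential spacings
        if t > length:
            break
        points.append(t)
    scans = [points[i + r] - points[i] for i in range(len(points) - r)]
    return min(scans), max(scans)
```

As the window length grows, these minima and maxima are exactly the statistics whose Poisson-approximation limit laws the Chen-Stein method delivers.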
In this paper we obtain the exact asymptotics of the ruin probability for the integrated Gaussian process with force of interest. The results obtained are consistent with those obtained for the case in which there is no force of interest.
We study the optimal portfolio problem for an insider, in the case where the performance is measured in terms of the logarithm of the terminal wealth minus a term measuring the roughness and the growth of the portfolio. We give explicit solutions in some cases. Our method uses stochastic calculus of forward integrals.
We derive an asymptotic expansion for the distribution of a compound sum of independent random variables, all having the same rapidly varying subexponential distribution. The examples of a Poisson and a geometric number of summands illustrate the main result. Complete calculations are carried out for a Weibull distribution, for which, as an example, seven-term expansions are derived without difficulty.
Corrected random walk approximations to continuous-time optimal stopping boundaries for Brownian motion, first introduced by Chernoff and Petkau, have provided powerful computational tools in option pricing and sequential analysis. This paper develops the theory of these second-order approximations and describes some new applications.
In this paper we present closed-form solutions of some discounted optimal stopping problems for the maximum process in a model driven by a Brownian motion and a compound Poisson process with exponential jumps. The method of proof is based on reducing the initial problems to integro-differential free-boundary problems, where the normal-reflection and smooth-fit conditions may break down, in which case the latter is replaced by the continuous-fit condition. We show that, under certain relationships between the parameters of the model, the optimal stopping boundary can be uniquely determined as a component of the solution of a two-dimensional system of nonlinear ordinary differential equations. The obtained results can be interpreted as pricing perpetual American lookback options with fixed and floating strikes in a jump-diffusion model.
The Vardi casino with parameter 0 < c < 1 consists of infinitely many tables indexed by their odds, each of which returns the same (negative) expected winnings -c per dollar. A gambler seeks to maximize the probability of reaching a fixed fortune by gambling repeatedly with suitably chosen stakes and tables (odds). The optimal strategy is derived explicitly subject to the constraint that the gambler is allowed to play only a given finite number of times. Some properties of the optimal strategy are also discussed.
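The table mechanics can be made concrete with a small simulation. In the sketch below, staking s at the table with odds θ wins sθ with probability p = (1 − c)/(1 + θ), which makes the expected winnings exactly −c per dollar staked. The gambler here plays a fixed "bold" strategy at a single table (stake just enough to reach the target on a win) for at most a given number of plays; this strategy, and the parameter values in the test, are illustrative assumptions, not the optimal strategy derived in the paper.

```python
import random

def reach_probability(c, theta, n_plays, f0=0.5, trials=50000, seed=2):
    """Monte Carlo estimate of the probability of reaching fortune 1
    from fortune f0 at a single Vardi-type table with odds theta:
    staking s wins s*theta with probability p = (1 - c)/(1 + theta),
    so expected winnings per dollar are -c.  Bold play: stake just
    enough to hit 1 on a win (or everything, if short), with at most
    n_plays rounds allowed."""
    rng = random.Random(seed)
    p = (1.0 - c) / (1.0 + theta)
    wins = 0
    for _ in range(trials):
        f = f0
        for _ in range(n_plays):
            if f >= 1.0 or f <= 0.0:
                break
            stake = min(f, (1.0 - f) / theta)  # just enough to reach 1
            if rng.random() < p:
                f += stake * theta
            else:
                f -= stake
        wins += (f >= 1.0)
    return wins / trials
```

With even odds (θ = 1) and fortune 1/2, bold play is a single all-or-nothing bet, so the reach probability is simply p = (1 − c)/2, matching the simulation.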
We investigate the large-scale behaviour of a Lévy process whose jump magnitude follows a stable law with spherically inhomogeneous scaling coefficients. Furthermore, the jumps are dragged in the spherical direction by a dynamical system which has an attractor.
We develop an integration by parts technique for point processes, with application to the computation of sensitivities via Monte Carlo simulations in stochastic models with jumps. The method is applied to density estimation with respect to the Lebesgue measure via a modified kernel estimator which is less sensitive to variations of the bandwidth parameter than standard kernel estimators. This applies to random variables whose densities are not analytically known and requires knowledge of the point process jump times.