In this paper we study the tail and extremal behavior of stationary solutions of threshold autoregressive (TAR) models. It is shown that a regularly varying noise sequence leads in general only to an O-regularly varying tail of the stationary solution. Under further conditions on the partition, it is shown, however, that TAR(S,1) models of order 1 with S regimes have regularly varying tails, provided that the noise sequence is regularly varying. In these cases the finite-dimensional distributions of the stationary solution are even multivariate regularly varying, and its extremal behavior is studied via point process convergence. In particular, a TAR model with regularly varying noise can exhibit extremal clusters. This is in contrast to TAR models whose noise is in the maximum domain of attraction of the Gumbel distribution and is either subexponential or in ℒ(γ) with γ > 0. In that case it turns out that the tail of the stationary solution behaves like a constant times that of the noise sequence, regardless of the order and the specific partition of the TAR model, and that the process cannot exhibit clusters at high levels.
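The regime-switching dynamics described in this abstract are easy to illustrate by simulation. The sketch below uses a two-regime TAR model of order 1 with symmetric Pareto-type (regularly varying) noise; the coefficients, tail index, and threshold at zero are illustrative choices, not taken from the paper.

```python
import random

def simulate_tar(n, a1=0.5, a2=-0.3, alpha=2.0, seed=1):
    """Simulate a two-regime TAR model of order 1,
        X_t = a1 * X_{t-1} + Z_t  if X_{t-1} <= 0,
        X_t = a2 * X_{t-1} + Z_t  otherwise,
    driven by symmetric noise with a regularly varying (Pareto-type)
    tail of index alpha.  All parameter values are illustrative."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        # symmetric regularly varying noise: random sign times Pareto(alpha)
        z = rng.choice((-1.0, 1.0)) * (1.0 - rng.random()) ** (-1.0 / alpha)
        x = (a1 if x <= 0 else a2) * x + z
        path.append(x)
    return path

path = simulate_tar(10_000)
largest = sorted(abs(x) for x in path)[-5:]  # a few extreme values
```

Inspecting the times at which `path` exceeds a high level gives a quick impression of whether extremes occur in clusters for a given parameter choice.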
In this paper we consider the following problem. An angler buys a fishing ticket that allows him/her to fish for a fixed time. There are two fishing locations at the lake. The fish are caught according to a renewal process, which is different for each location. The angler's success is defined as the difference between the utility function, which depends on the size of the fish caught, and the time-dependent cost function. These functions differ between the two locations. The goal of the angler is to find two optimal stopping times that maximize his/her success: when to change fishing location and when to stop fishing. Dynamic programming methods are used to find these two optimal stopping times and to specify the expected success of the angler at these times.
In this article we analyse the behaviour of the extremes of a random walk in a random scenery. The random walk is assumed to be in the domain of attraction of a stable law, and the scenery is assumed to be in the domain of attraction of an extreme value distribution. The resulting random sequence is stationary and strongly dependent if the underlying random walk is recurrent. We prove a limit theorem for the extremes of the resulting stationary process. However, if the underlying random walk is recurrent, the limit distribution is not in the class of classical extreme value distributions.
Consider an urn model whose replacement matrix is triangular, has all nonnegative entries, and the row sums are all equal to 1. We obtain strong laws for the counts of balls corresponding to each color. The scalings for these laws depend on the diagonal elements of a rearranged replacement matrix. We use these strong laws to study further behavior of certain three-color urn models.
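The urn scheme in this abstract can be sketched by direct simulation: at each step a ball is drawn with probability proportional to the current counts, and the corresponding row of the replacement matrix is added. The particular triangular matrix below is a made-up example with all row sums equal to 1; since rows may have fractional entries, counts are tracked as real numbers.

```python
import random

def simulate_urn(R, counts, steps, seed=0):
    """Simulate a generalized urn with replacement matrix R (each row
    sums to 1): draw a color with probability proportional to the
    current counts, then add row R[c] to the counts.  Fractional ball
    counts are permitted.  An illustrative sketch only."""
    rng = random.Random(seed)
    counts = list(counts)
    for _ in range(steps):
        total = sum(counts)
        u = rng.random() * total
        c, acc = 0, counts[0]
        while acc < u and c < len(counts) - 1:
            c += 1
            acc += counts[c]
        for j, r in enumerate(R[c]):
            counts[j] += r
    return counts

# hypothetical triangular replacement matrix with row sums 1
R = [[1.0, 0.0, 0.0],
     [0.4, 0.6, 0.0],
     [0.2, 0.3, 0.5]]
final = simulate_urn(R, [1.0, 1.0, 1.0], 5_000)
```

Because each row sums to 1, the total count grows by exactly 1 per step, so the scalings in the strong laws concern only how the individual colors share this linear growth.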
Chiu and Yin (2005) found the Laplace transform of the last time a spectrally negative Lévy process, which drifts to ∞, is below some level. The main motivation for the study of this random time stems from risk theory: what is the last time the risk process, modeled by a spectrally negative Lévy process drifting to ∞, is 0? In this paper we extend the result of Chiu and Yin, and we derive the Laplace transform of the last time, before an independent, exponentially distributed time, that a spectrally negative Lévy process (without any further conditions) exceeds (upwards or downwards) or hits a certain level. As an application, we extend a result found in Doney (1991).
In this paper we generalize existing results for the steady-state distribution of growth-collapse processes. We begin with a stationary setup with some relatively general growth process and observe that, under certain expected conditions, point- and time-stationary versions of the processes exist as well as a limiting distribution for these processes which is independent of initial conditions and necessarily has the marginal distribution of the stationary version. We then specialize to the cases where an independent and identically distributed (i.i.d.) structure holds and where the growth process is a nondecreasing Lévy process, and in particular linear, and the times between collapses form an i.i.d. sequence. Known results can be seen as special cases, for example, when the inter-collapse times form a Poisson process or when the collapse ratio is deterministic. Finally, we comment on the relation between these processes and shot-noise type processes, and observe that, under certain conditions, the steady-state distribution of one may be directly inferred from the other.
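As a concrete instance of the i.i.d. specialization mentioned above, the sketch below simulates the embedded chain of a growth-collapse process with unit-rate linear growth, exponential inter-collapse times, and uniform collapse ratios (all illustrative choices). The state just after each collapse then satisfies X_{n+1} = U_n (X_n + T_n), whose stationary mean works out to E[U](E[X] + E[T]) = ½(E[X] + 1), i.e. E[X] = 1.

```python
import random

def growth_collapse_sample(n, rate=1.0, seed=0):
    """Simulate the embedded chain of a growth-collapse process:
    linear growth at unit rate between collapses, i.i.d. exponential
    inter-collapse times with the given rate, and i.i.d. uniform(0,1)
    collapse ratios.  Returns the state just after each collapse.
    All distributional choices are illustrative."""
    rng = random.Random(seed)
    x, states = 0.0, []
    for _ in range(n):
        x += rng.expovariate(rate)  # linear growth until the next collapse
        x *= rng.random()           # multiplicative collapse U * x
        states.append(x)
    return states

s = growth_collapse_sample(20_000)
mean_state = sum(s) / len(s)
```

The empirical mean of the post-collapse states should be close to the stationary value 1 under these assumptions.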
Suppose that an unknown number of objects arrive sequentially according to a Poisson process with random intensity λ on some fixed time interval [0,T]. We assume a gamma prior density Gλ(r, 1/a) for λ. Furthermore, we suppose that all arriving objects can be ranked uniquely among all preceding arrivals. Exactly one object can be selected. Our aim is to find a stopping time (selection time) which maximizes the time during which the selected object will stay relatively best. Our main result is the following. It is optimal to select the ith object that is relatively best and arrives from some time si(r) onwards. The value of si(r) can be obtained for each r and i as the unique root of a deterministic equation.
Given a pure-jump subordinator (i.e. nondecreasing Lévy process with no drift) with continuous Lévy measure ν, we derive a formula for the distribution function Fs(x; t) at time t of the associated subordinator whose Lévy measure is the restriction of ν to (0,s]. It will be expressed in terms of ν and the marginal distribution function F(⋅; t) of the original process. A generalization concerning an arbitrary truncation of ν will follow. Under certain conditions, an analogous formula will be obtained for the nth derivative ∂nFs(x; t)/∂xn. The requirement that ν is continuous is shown to have no intrinsic meaning. A number of interesting results involving the size ordered jumps of subordinators will be derived. An appropriate approximation for the small jumps of a gamma process will be considered, leading to a revisiting of the generalized Dickman distribution.
Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing limit laws, due to Nualart, Peccati, and others.
Let n points be chosen independently and uniformly in the unit cube [0,1]d, and suppose that each point is supplied with a mark, the marks being independent and identically distributed random variables independent of the location of the points. To each cube R contained in [0,1]d we associate its score, defined as the sum of marks of all points contained in R. The scan statistic is defined as the maximum of the score taken over all cubes R contained in [0,1]d. We show that if the marks are nonlattice random variables with finite exponential moments, having negative mean and assuming positive values with nonzero probability, then the appropriately normalized distribution of the scan statistic converges as n → ∞ to the Gumbel distribution. We also prove a corresponding result for the scan statistic of a Lévy noise with negative mean. The more elementary cases of zero and positive mean are also considered.
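A one-dimensional sketch (d = 1, so intervals instead of cubes) shows how the statistic can be computed exactly: an optimal interval picks up a contiguous run of the sorted points, so a maximum-subarray (Kadane) scan over the marks in positional order suffices. The centred-exponential marks below are an illustrative choice satisfying the conditions of the abstract (nonlattice, finite exponential moments, negative mean, positive values with positive probability).

```python
import random

def scan_statistic_1d(n, seed=0):
    """One-dimensional scan statistic: n uniform points on [0,1] carry
    i.i.d. marks with negative mean (here Exp(1) - 1.5, an illustrative
    choice), and the statistic is the maximal mark sum over all
    subintervals.  Since an optimal interval contains a contiguous run
    of the sorted points, Kadane's maximum-subarray scan is exact."""
    rng = random.Random(seed)
    pts = sorted((rng.random(), rng.expovariate(1.0) - 1.5)
                 for _ in range(n))
    best = cur = 0.0
    for _, mark in pts:
        cur = max(0.0, cur + mark)  # best sum of a run ending here
        best = max(best, cur)
    return best

m = scan_statistic_1d(1_000)
```

Repeating this over many samples and growing n gives an empirical view of the Gumbel limit claimed in the abstract.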
A gambler with an initial bankroll is faced with a finite sequence of identical and independent bets. For each bet, he may wager up to his current bankroll, and will win this amount with probability p or lose it with probability 1-p. His problem is to devise a wagering strategy that will maximize his final expected utility with the side condition that the total amount wagered (i.e. the total ‘action’) be at least his initial bankroll. Our main result is an expression that characterizes when the strategy of placing equal-sized wagers on all bets is optimal. In particular, for a given bankroll B, utility function f (concave, increasing, differentiable), and n bets, we show that it is optimal to wager b/n on each bet if and only if the probability of winning each bet is less than or equal to some value p⋆∈[½,1] (where p⋆ is an explicit function of B, f, and n). We prove the result by using a basic nonlinear programming technique.
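The value of the equal-wager strategy itself is straightforward to evaluate: with k wins out of n bets at stake B/n each, the final bankroll is B + (B/n)(2k − n), so the expected utility is a binomial sum. The shifted log utility below is only an illustrative choice of concave, increasing, differentiable f.

```python
from math import comb, log

def expected_utility_equal_wagers(B, n, p, f):
    """Expected final utility of the equal-wager strategy: bet B/n on
    each of n independent bets won with probability p.  With k wins the
    final bankroll is B + (B/n) * (2k - n); f is the utility function."""
    w = B / n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * f(B + w * (2 * k - n))
               for k in range(n + 1))

# illustrative concave utility, shifted so it is defined at bankroll 0
u = expected_utility_equal_wagers(100.0, 10, 0.5, lambda x: log(1.0 + x))
```

By Jensen's inequality the result is strictly below f(B) whenever p = ½, which is consistent with the abstract's characterization of when equal wagers are optimal.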
Let I1,I2,…,In be independent indicator functions on some probability space (Ω, ℱ, P). We suppose that these indicators can be observed sequentially. Furthermore, let T be the set of stopping times on (Ik), k=1,…,n, adapted to the increasing filtration (ℱk), where ℱk = σ(I1,…,Ik). The odds algorithm solves the problem of finding a stopping time τ ∈ T which maximises the probability of stopping on the last Ik=1, if any. To apply the algorithm, we only need the odds for the events {Ik=1}, that is, rk=pk/(1-pk), where pk=P(Ik=1). The goal of this paper is to offer tractable solutions for the case where the pk are unknown and must be sequentially estimated. The motivation is that this case is important for many real-world applications of optimal stopping. We study several approaches to incorporate sequential information. Our main result is a new version of the odds algorithm based on online observation and sequential updating. Questions of speed and performance of the different approaches are studied in detail, and the conclusiveness of the comparisons allows us to propose always using this algorithm to tackle selection problems of this kind.
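For the baseline case in which the pk are known, the odds algorithm is only a few lines: sum the odds rk backwards until the running sum reaches 1, then stop on the first success at or after that index. Applied to the classical secretary problem (pk = 1/k), it recovers the 1/e law. A sketch:

```python
def odds_algorithm(p):
    """Bruss's odds algorithm with known success probabilities
    p[0..n-1].  Sum the odds r_k = p_k / (1 - p_k) backwards until the
    running sum reaches 1; the optimal rule stops on the first success
    at or after that index s.  Returns s and the rule's win
    probability, (prod of 1 - p_k) * (sum of r_k) over k >= s."""
    n = len(p)
    rsum, s = 0.0, 0
    for k in range(n - 1, -1, -1):
        rsum += p[k] / (1 - p[k])
        if rsum >= 1.0:
            s = k
            break
    prod_q, odds = 1.0, 0.0
    for k in range(s, n):
        prod_q *= 1 - p[k]
        odds += p[k] / (1 - p[k])
    return s, prod_q * odds

# classical secretary problem with n = 100: arrival k is relatively
# best with probability 1/k, and the 1/e law emerges
s, win = odds_algorithm([1 / k for k in range(1, 101)])
```

The paper's contribution concerns replacing the known pk above with sequential estimates; the code shows only the known-odds benchmark against which such versions are compared.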
By expressing the discounted net loss process as a randomly weighted sum, we investigate the finite-time ruin probabilities for the Poisson risk model with an exponential Lévy process investment return and heavy-tailed claims. It is found that, over finite time horizons, the extreme of insurance risk dominates the extreme of financial risk; in the case of dangerous investment (see Klüppelberg and Kostadinova (2008) for a precise definition), however, the extreme of financial risk exerts a growing effect on the total risk and, as time passes, eventually dominates the extreme of insurance risk.
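A crude Monte Carlo sketch of the finite-time ruin probability for the underlying Poisson risk model, with Pareto claims as the heavy-tailed example; the investment return is omitted for simplicity, and the premium rate, claim rate, tail index, and horizon below are illustrative.

```python
import random

def finite_time_ruin_prob(u, c, lam, alpha, T, n_paths=20_000, seed=0):
    """Estimate P(ruin before T) for a compound Poisson surplus process
    u + c*t - S(t) with claim rate lam and Pareto(alpha) claim sizes
    (support [1, inf)).  With heavy-tailed claims, ruin is typically
    caused by a single large jump.  Illustrative parameters only."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)       # next claim arrival
            if t > T:
                break
            claims += (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto(alpha)
            if u + c * t - claims < 0.0:    # surplus checked at claim epochs
                ruined += 1
                break
    return ruined / n_paths

p = finite_time_ruin_prob(u=10.0, c=2.5, lam=1.0, alpha=2.0, T=10.0)
```

Checking the surplus only at claim epochs is exact here, since ruin in this model can occur only when a claim arrives.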
We study several optimal stopping problems in which the gains process is a Brownian bridge or a functional of a Brownian bridge. Our examples constitute natural finite-horizon optimal stopping problems with explicit solutions.
This paper is concerned with a nonstationary Markovian chain of cascading damage that constitutes an iterated version of a classical damage model. The main problem under study is to determine the exact distribution of the total outcome of this process when the cascade of damages finally stops. Two different applications are discussed, namely the final size for a wide class of SIR (susceptible → infective → removed) epidemic models and the total number of failures for a system of components in reliability. The starting point of our analysis is the recent work of Lefèvre (2007) on a first-crossing problem for the cumulated partial sums of independent parametric distributions, possibly nonstationary but stable by convolution. A key mathematical tool is provided by a nonstandard family of remarkable polynomials, called the generalised Abel–Gontcharoff polynomials. Somewhat surprisingly, the approach followed will allow us to relax some model assumptions usually made in epidemic theory and reliability. To close, approximation by a branching process is also investigated to a certain extent.
We study the discrete-time approximation of doubly reflected backward stochastic differential equations (BSDEs) in a multidimensional setting. As in Ma and Zhang (2005) or Bouchard and Chassagneux (2008), we introduce the discretely reflected counterpart of these equations. We then provide representation formulae which allow us to obtain new regularity results. We also propose an Euler scheme type approximation and give new convergence results for both discretely and continuously reflected BSDEs.
We consider Monte Carlo methods for the classical nonlinear filtering problem. The first method is based on a backward pathwise filtering equation and the second method is related to a backward linear stochastic partial differential equation. We study convergence of the proposed numerical algorithms. The considered methods have such advantages as a capability in principle to solve filtering problems of large dimensionality, reliable error control, and recurrency. Their efficiency is achieved due to the numerical procedures which use effective numerical schemes and variance reduction techniques. The results obtained are supported by numerical experiments.
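For contrast with the backward equations above, the most familiar Monte Carlo approach to nonlinear filtering is the forward bootstrap particle filter. The sketch below applies it to a toy scalar model; the model, its parameters, and the algorithm are illustrative and are not the methods proposed in the paper.

```python
import random
from math import exp

def bootstrap_particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for the toy scalar model
        X_t = 0.9 * X_{t-1} + N(0,1),   Y_t = X_t + N(0,1).
    Propagate particles through the dynamics, weight them by the
    Gaussian observation likelihood, and resample.  Returns the
    sequence of filtered (posterior) means."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate through the state dynamics
        parts = [0.9 * x + rng.gauss(0.0, 1.0) for x in parts]
        # weight by the observation likelihood
        w = [exp(-0.5 * (y - x) ** 2) for x in parts]
        tot = sum(w)
        means.append(sum(wi * x for wi, x in zip(w, parts)) / tot)
        # multinomial resampling to control weight degeneracy
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means

m = bootstrap_particle_filter([0.5, 1.0, 0.8, 1.2])
```

The filtered means track the observations while being pulled towards the prior dynamics, which is the qualitative behaviour any of the paper's more refined schemes must reproduce with better error control.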
We consider a modified version of the classical optimal dividends problem of de Finetti in which the objective function is altered by adding in an extra term which takes account of the ruin time of the risk process, the latter being modeled by a spectrally negative Lévy process. We show that, with the exception of a small class, a barrier strategy forms an optimal strategy under the condition that the Lévy measure has a completely monotone density. As a prerequisite for the proof, we show that, under the aforementioned condition on the Lévy measure, the q-scale function of the spectrally negative Lévy process has a derivative which is strictly log-convex.
Let X1, X2,… and Y1, Y2,… be two sequences of absolutely continuous, independent and identically distributed (i.i.d.) random variables with equal means E(Xi)=E(Yi), i=1,2,…. In this work we provide upper bounds for the total variation and Kolmogorov distances between the distributions of the partial sums X1 + ⋯ + Xn and Y1 + ⋯ + Yn. In the case where the distributions of the Xi's and the Yi's are compared with respect to the convex order, the proposed upper bounds are further refined. Finally, in order to illustrate the applicability of the results presented, we consider specific examples concerning gamma and normal approximations.
Let X1, X2, …, Xn be independent random variables uniformly distributed on [0,1]. We observe these sequentially and have to stop on exactly one of them. No recall of preceding observations is permitted. What stopping rule minimizes the expected rank of the selected observation? What is the value of the expected rank (as a function of n) and what is the limit of this value when n goes to ∞? This full-information expected selected-rank problem is known as Robbins' problem of minimizing the expected rank, and its general solution is unknown. In this paper we provide an alternative approach to Robbins' problem. Our model is similar to that of Gnedin (2007). For this, we consider a continuous-time version of the problem in which the observations follow a Poisson arrival process on ℝ+ × [0,1] of homogeneous rate 1. Translating the previous optimal selection problem in this setting, we prove that, under reasonable assumptions, the corresponding value function w(t) is bounded and Lipschitz continuous. Our main result is that the limiting value of the Poisson embedded problem exists and is equal to that of Robbins' problem. We prove that w(t) is differentiable and also derive a differential equation for this function. Although we have not succeeded in using this equation to improve on bounds on the optimal limiting value, we argue that it has this potential.