Iterative Filtering (IF) is an alternative technique to the Empirical Mode Decomposition (EMD) algorithm for the decomposition of non-stationary and non-linear signals. Recently, in [3], IF was proved to converge for any L2 signal, and its stability was also demonstrated through examples. Furthermore, in [3] the so-called Fokker–Planck (FP) filters were introduced; they are smooth at every point and have compact support. Based on those results, in this paper we introduce the Multidimensional Iterative Filtering (MIF) technique for the decomposition and time-frequency analysis of non-stationary high-dimensional signals. We present the extension of FP filters to higher dimensions. We prove convergence results under general sufficient conditions on the filter shape. Finally, we illustrate the promising performance of the MIF algorithm, equipped with high-dimensional FP filters, when applied to the decomposition of two-dimensional signals.
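The core iteration behind one-dimensional IF can be sketched in a few lines: repeatedly subtract a local moving average from the signal until the residual stabilises, which yields the first intrinsic mode function. The sketch below is an illustrative toy version with a simple triangular filter; it is not the paper's FP filters or the multidimensional algorithm.

```python
def triangular_average(s, h):
    # Local weighted mean with triangular weights h + 1 - |k|, truncated
    # and renormalised near the boundaries.
    n = len(s)
    out = []
    for i in range(n):
        num = den = 0.0
        for k in range(-h, h + 1):
            j = i + k
            if 0 <= j < n:
                w = h + 1 - abs(k)
                num += w * s[j]
                den += w
        out.append(num / den)
    return out

def first_imf(signal, half_width=7, tol=1e-8, max_iter=100):
    # One IF extraction: iterate s <- s - (local mean of s); the limit is
    # the first intrinsic mode function (IMF). Decomposing signal - IMF
    # recursively would yield the remaining modes.
    s = list(signal)
    for _ in range(max_iter):
        avg = triangular_average(s, half_width)
        nxt = [x - m for x, m in zip(s, avg)]
        if sum((a - b) ** 2 for a, b in zip(nxt, s)) < tol:
            return nxt
        s = nxt
    return s
```

Applied to a fast oscillation superposed on a slow trend, the iteration removes the trend (which the local mean preserves) and retains the oscillation (which the local mean annihilates) as the first IMF.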
We study certain classes of local sets of the two-dimensional Gaussian free field (GFF) in a simply connected domain, and their relation to the conformal loop ensemble $\text{CLE}_{4}$ and its variants. More specifically, we consider bounded-type thin local sets (BTLS), where thin means that the local set is small in size, and bounded type means that the harmonic function describing the mean value of the field away from the local set is bounded by some deterministic constant. We show that a local set is a BTLS if and only if it is contained in some nested version of the $\text{CLE}_{4}$ carpet, and prove that all BTLS are necessarily connected to the boundary of the domain. We also construct all possible BTLS for which the corresponding harmonic function takes only two prescribed values and show that all these sets (and this includes the case of $\text{CLE}_{4}$) are in fact measurable functions of the GFF.
We discuss discrete stochastic processes with two independent variables: one is the standard symmetric random walk, and the other is the Poisson process. We analyse the convergence of these discrete stochastic processes under the scaling in which the symmetric random walk tends to the standard Brownian motion. We show that a discrete analogue of Itô's formula converges to the corresponding continuous formula.
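For intuition, the discrete analogue of Itô's formula is an exact pathwise identity for a ±1 random walk: for f(x) = x², each squared increment equals 1, so the quadratic-variation term is exactly n. A minimal check (our own illustration, not the paper's construction):

```python
import random

def discrete_ito_identity(n=1000, seed=0):
    # Simple symmetric random walk S_k with steps xi_k = +-1.
    rng = random.Random(seed)
    steps = [rng.choice([-1, 1]) for _ in range(n)]
    S = [0]
    for x in steps:
        S.append(S[-1] + x)
    # Discrete Ito formula for f(x) = x^2:
    #   f(S_n) - f(S_0) = sum_k f'(S_{k-1}) xi_k + (1/2) sum_k f''(S_{k-1}) xi_k^2.
    # Since xi_k^2 = 1 and f'' = 2, the second sum equals n exactly.
    lhs = S[-1] ** 2
    rhs = sum(2 * S[k] * steps[k] for k in range(n)) + n
    return lhs, rhs
```

Both sides agree exactly for every path, not just in expectation; under diffusive scaling the second sum becomes the Itô correction term ∫ f''(B_s) ds / 2.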
We consider a class of impulse control problems for general underlying strong Markov processes on the real line, which allows for an explicit solution. The optimal impulse times are shown to be of a threshold type and the optimal threshold is characterised as a solution of a (typically nonlinear) equation. The main ingredient we use is a representation result for excessive functions in terms of expected suprema.
In this paper we study the expected rank problem under full information. Our approach uses the planar Poisson approach from Gnedin (2007) to derive the expected rank of a stopping rule that is one of the simplest nontrivial examples combining rank-dependent rules with threshold rules. This rule attains an expected rank lower than the best upper bounds obtained in the literature so far; in particular, we obtain an expected rank of 2.32614.
We establish conditions for an exponential rate of forgetting of the initial distribution of nonlinear filters in V-norm, allowing for unbounded test functions. The analysis is conducted in a general setup involving nonnegative kernels in a random environment which allows treatment of filters and prediction filters in a single framework. The main result is illustrated on two examples, the first showing that a total variation norm stability result obtained by Douc et al. (2009) can be extended to V-norm without any additional assumptions, the second concerning a situation in which forgetting of the initial condition holds in V-norm for the filters, but the V-norm of each prediction filter is infinite.
We study zero-sum optimal stopping games (Dynkin games) between two players who disagree about the underlying model. In a Markovian setting, a verification result is established showing that if a pair of functions can be found that satisfies some natural conditions then a Nash equilibrium of stopping times is obtained, with the given functions as the corresponding value functions. In general, however, there is no uniqueness of Nash equilibria, and different equilibria give rise to different value functions. As an example, we provide a thorough study of the game version of the American call option under heterogeneous beliefs. Finally, we also study equilibria in randomized stopping times.
In this paper we propose a model for biological neural nets where the activity of the network is described by Hawkes processes having a variable length memory. The particularity in this paper is that we deal with an infinite number of components. We propose a graphical construction of the process and build, by means of a perfect simulation algorithm, a stationary version of the process. To implement this algorithm, we make use of a Kalikow-type decomposition technique. Two models are described in this paper. In the first model, we associate to each edge of the interaction graph a saturation threshold that controls the influence of a neuron on another. In the second model, we impose a structure on the interaction graph leading to a cascade of spike trains. Such structures, where neurons are divided into layers, can be found in the retina.
In this paper a kernel estimator of the differential entropy of the mark distribution of a homogeneous Poisson marked point process is proposed. The marks have an absolutely continuous distribution on a compact Riemannian manifold without boundary. We investigate the L2 and almost sure consistency of this estimator as well as its asymptotic normality.
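A plug-in version of such an estimator is easy to sketch on the simplest compact manifold, the circle: form a kernel density estimate of the mark distribution and average −log f̂ over the marks. The uniform kernel in circular distance below is an illustrative choice made for this sketch; the paper's manifolds and kernel assumptions are more general.

```python
import math

def circle_entropy_estimate(marks, h=0.3):
    # Plug-in kernel estimate of the differential entropy of marks on the
    # circle [0, 2*pi), using a uniform kernel of half-width h in circular
    # distance: H-hat = -(1/n) * sum_i log f-hat(x_i).
    n = len(marks)
    H = 0.0
    for x in marks:
        count = 0
        for y in marks:
            d = abs(x - y) % (2 * math.pi)
            d = min(d, 2 * math.pi - d)
            if d < h:
                count += 1
        f_hat = count / (n * 2 * h)   # kernel density estimate at x
        H -= math.log(f_hat) / n
    return H
```

For uniformly distributed marks the estimate is close to log(2π) ≈ 1.838, the entropy of the uniform distribution on the circle.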
We consider the numerical approximation of the filtering problem in high dimensions, that is, when the hidden state lies in ℝd with large d. For low-dimensional problems, one of the most popular numerical procedures for consistent inference is the class of approximations termed particle filters or sequential Monte Carlo methods. However, in high dimensions, standard particle filters (e.g. the bootstrap particle filter) can have a cost that is exponential in d for the algorithm to be stable in an appropriate sense. We develop a new particle filter, called the space‒time particle filter, for a specific family of state-space models in discrete time. This new class of particle filters provides consistent Monte Carlo estimates for any fixed d, as do standard particle filters. Moreover, when there is a spatial mixing element in the dimension of the state vector, the space‒time particle filter will scale much better with d than the standard filter for a class of filtering problems. We illustrate this analytically for a model of a simple independent and identically distributed structure and a model of an L-Markovian structure (L≥ 1, L independent of d) in the d-dimensional space direction, where we show that the algorithm exhibits certain stability properties as d increases at a cost 𝒪(nNd²), where n is the time parameter and N is the number of Monte Carlo samples, which are fixed and independent of d. Our theoretical results are also supported by numerical simulations on practical models of complex structures. The results suggest that it is indeed possible to tackle some high-dimensional filtering problems using the space‒time particle filter that standard particle filters cannot handle.
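As background, the baseline the paper contrasts with, a bootstrap particle filter, can be sketched in a few lines for a scalar linear-Gaussian state-space model. The model, parameters, and function name here are illustrative choices for this sketch, not the paper's space‒time construction.

```python
import math
import random

def bootstrap_particle_filter(ys, n_particles=500, phi=0.9,
                              sigma_x=1.0, sigma_y=1.0, seed=1):
    # Bootstrap particle filter for the toy model
    #   X_t = phi * X_{t-1} + N(0, sigma_x^2),  Y_t = X_t + N(0, sigma_y^2).
    # Returns the filtering means E[X_t | y_1, ..., y_t].
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, sigma_x) for _ in range(n_particles)]
    means = []
    for y in ys:
        # Propagate each particle through the state dynamics.
        particles = [phi * x + rng.gauss(0.0, sigma_x) for x in particles]
        # Weight by the Gaussian observation likelihood (up to a constant).
        weights = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in particles]
        total = sum(weights)
        means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # Multinomial resampling.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means
```

The dimensionality problem the paper addresses appears when the weighting step above is done jointly across all d coordinates of a high-dimensional state: the weights degenerate unless the number of particles grows very fast with d.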
In this paper we identify three questions concerning the management of risk networks with a central branch, which may be solved using the extensive machinery available for one-dimensional risk models. First, we propose a criterion for judging whether a subsidiary is viable by its readiness to pay dividends to the central branch, as reflected by the optimality of the zero-level dividend barrier. Next, for a deterministic central branch which must bailout a single subsidiary each time its surplus becomes negative, we determine the optimal bailout policy, as well as the ruin probability and other risk measures, in closed form. Moreover, we extend these results to the case of hierarchical networks. Finally, for nondeterministic central branches with one subsidiary, we compute approximate risk measures by applying rational approximations, and by using the recently developed matrix scale methodology.
In two recent works, Kuba and Mahmoud (2015a) and (2015b) introduced the family of two-color affine balanced Pólya urn schemes with multiple drawings. We show that, in large-index urns (urn index between ½ and 1) and triangular urns, the martingale tail sum for the number of balls of a given color admits both a Gaussian central limit theorem and a law of the iterated logarithm. The laws of the iterated logarithm are new, even in the standard model when only one ball is drawn from the urn in each step (except for the classical Pólya urn model). Finally, we prove that the martingale limits exhibit densities (bounded under suitable assumptions) and exponentially decaying tails. Applications are given in the context of node degrees in random linear recursive trees and random circuits.
In this paper we deal with an optimal stopping problem whose objective is to maximize the probability of selecting k out of the last ℓ successes, given a sequence of independent Bernoulli trials of length N, where k and ℓ are predetermined integers satisfying 1≤k≤ℓ<N. This problem includes some odds problems as special cases, e.g. Bruss’ odds problem, Bruss and Paindaveine’s problem of selecting the last ℓ successes, and Tamaki’s multiplicative odds problem for stopping at any of the last m successes. We show that an optimal stopping rule is obtained by a threshold strategy. We also present the tight lower bound and an asymptotic lower bound for the probability of a win. Interestingly, our asymptotic lower bound is attained by using a variation of the well-known secretary problem, which is a special case of the odds problem. Our approach is based on the application of Newton’s inequalities and optimization techniques, which gives a unified view of the previous works.
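For orientation, the k = ℓ = 1 special case is Bruss' odds problem, whose optimal threshold is easy to compute: sum the odds r_j = p_j/(1−p_j) backwards until the sum reaches 1, and stop at the first success from that index on. A sketch of that classical rule (0-based indices; this is not the paper's general (k, ℓ) rule):

```python
def odds_threshold(ps):
    # Bruss' odds algorithm: scan the success probabilities backwards,
    # accumulating the odds r_j = p_j / (1 - p_j). The optimal rule lets
    # trials before the returned index pass and stops at the first success
    # from that index onwards.
    total = 0.0
    for j in range(len(ps) - 1, -1, -1):
        if ps[j] >= 1.0:          # infinite odds: threshold reached here
            return j
        total += ps[j] / (1.0 - ps[j])
        if total >= 1.0:
            return j
    return 0
```

For the classical secretary problem with N = 100 (the j-th candidate is a record with probability 1/j), this returns index 37: let the first 37 candidates pass, the familiar N/e rule.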
We study a simple random process in which vertices of a connected graph reach consensus through pairwise interactions. We compute outcome probabilities, which do not depend on the graph structure, and consider the expected time until a consensus is reached. In some cases we are able to show that this is minimised by the complete graph $K_n$. We prove an upper bound for the p=0 case and give a family of graphs which asymptotically achieve this bound. In order to obtain the mean of the waiting time we also study a gambler's ruin process with delays. We give the mean absorption time and prove that it monotonically increases with p∈[0,1∕2] for symmetric delays.
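A minimal numeric illustration of a ruin process with i.i.d. delays (a simplified analogue of the delayed process above, where the walk stays put with probability p and otherwise moves ±1 with equal probability): on {0, …, n} the mean absorption time from state i is i(n−i)/(1−p), which indeed increases with p. Computed here by value iteration:

```python
def mean_absorption_times(n, p, sweeps=20000):
    # Expected time to absorption at 0 or n for a delayed gambler's ruin:
    # each step stays put with probability p and otherwise moves +1 or -1
    # with probability (1 - p) / 2 each. Value iteration on
    #   T_i = 1 + p * T_i + ((1 - p) / 2) * (T_{i-1} + T_{i+1}),
    # with boundary conditions T_0 = T_n = 0.
    T = [0.0] * (n + 1)
    for _ in range(sweeps):
        T = ([0.0]
             + [1 + p * T[i] + (1 - p) / 2 * (T[i - 1] + T[i + 1])
                for i in range(1, n)]
             + [0.0])
    return T
```

The fixed point is T_i = i(n−i)/(1−p): each ±1 move of the undelayed walk (whose mean absorption time is i(n−i)) is stretched by a geometric number of attempts with mean 1/(1−p).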
We show analogs of the classical arcsine theorem for the occupation time of a random walk in (−∞,0) in the case of a small positive drift. To study the asymptotic behavior of the total time spent in (−∞,0) we consider parametrized classes of random walks, where the convergence of the parameter to 0 implies the convergence of the drift to 0. We begin with shift families, generated by a centered random walk by adding to each step a shift constant a>0 and then letting a tend to 0. Then we study families of associated distributions. In all cases we arrive at the same limiting distribution, which is the distribution of the time spent below 0 of a standard Brownian motion with drift 1. For shift families this is explained by a functional limit theorem. Using fluctuation-theoretic formulae we derive the generating function of the occupation time in closed form, which provides an alternative approach. We also present a new form of the first arcsine law for the Brownian motion with drift.
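The classical (driftless) arcsine law that these results generalise is easy to see numerically: the fraction of time a long symmetric random walk spends below 0 is approximately arcsine-distributed, with CDF (2/π)·arcsin(√x). A quick Monte Carlo check (our own illustration of the classical law; the paper concerns the small-drift regime):

```python
import random

def occupation_fraction_below_zero(n, rng):
    # Fraction of times k = 1..n at which a simple symmetric random walk
    # sits strictly below 0 (a discrete proxy for time spent in (-inf, 0)).
    s, below = 0, 0
    for _ in range(n):
        s += rng.choice([-1, 1])
        if s < 0:
            below += 1
    return below / n

def empirical_arcsine_cdf(x, paths=2000, n=500, seed=3):
    # Empirical probability that the occupation fraction is at most x,
    # to be compared with the arcsine CDF (2/pi) * arcsin(sqrt(x)).
    rng = random.Random(seed)
    hits = sum(occupation_fraction_below_zero(n, rng) <= x for _ in range(paths))
    return hits / paths
```

For x = 0.2 the arcsine CDF gives (2/π)·arcsin(√0.2) ≈ 0.295, and the empirical value lands nearby; the U-shaped density means walks tend to spend almost all or almost none of their time below 0.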
In this paper we consider the following modification of a discrete-time branching process with stationary immigration. In each generation a binomially distributed subset of the population will be observed. The number of observed individuals constitutes a partially observed branching process. After inspection both observed and unobserved individuals may change their offspring distributions. In the subcritical case we investigate the possibility of using the known estimators for the offspring mean and for the mean of the stationary-limiting distribution of the process when the observation of the population sizes is restricted. We prove that, if both the population and the number of immigrants are partially observed, the estimators are still strongly consistent. We also prove that the 'skipped' version of the estimator for the offspring mean is asymptotically normal, and that the estimator of the stationary distribution's mean is asymptotically normal under additional assumptions.
The extremal behaviour of a Markov chain is typically characterised by its tail chain. For asymptotically dependent Markov chains, existing formulations fail to capture the full evolution of the extreme event when the chain moves out of the extreme tail region, and, for asymptotically independent chains, recent results fail to cover well-known asymptotically independent processes, such as Markov processes with a Gaussian copula between consecutive values. We use more sophisticated limiting mechanisms that cover a broader class of asymptotically independent processes than current methods, including an extension of the canonical Heffernan‒Tawn normalisation scheme, and reveal features which existing methods reduce to a degenerate form associated with nonextreme states.
We consider a Cramér–Lundberg insurance risk process with the added feature of reinsurance. If an arriving claim finds the reserve below a certain threshold γ, or if it would bring the reserve below that level, then a reinsurer pays part of the claim. Using fluctuation theory and the theory of scale functions of spectrally negative Lévy processes, we derive expressions for the Laplace transform of the time to ruin and of the joint distribution of the deficit at ruin and the surplus before ruin. We specify these results in much more detail for the threshold set-up in the case of proportional reinsurance.
We are interested in the rate of convergence of a subordinate Markov process to its invariant measure. Given a subordinator and the corresponding Bernstein function (Laplace exponent), we characterize the convergence rate of the subordinate Markov process; the key ingredients are the rate of convergence of the original process and the (inverse of the) Bernstein function. At a technical level, the crucial point is to bound three types of moment (subexponential, algebraic, and logarithmic) for subordinators as time t tends to ∞. We also discuss some concrete models and we show that subordination can dramatically change the speed of convergence to equilibrium.
This paper pioneers a Freidlin–Wentzell approach to stochastic impulse control of exchange rates when the central bank desires to maintain a target zone. Pressure to stimulate the economy forces the bank to implement diffusion monetary policy involving Freidlin–Wentzell perturbations indexed by a parameter ε∈ [0,1]. If ε=0, the policy keeps exchange rates in the target zone for all times t≥0. When ε>0, exchange rates continually exit the target zone almost surely, triggering central bank interventions which force currencies back into the zone or abandonment of all targets. Interventions and target zone deviations are costly, motivating the bank to minimize these joint costs for any ε∈ [0,1]. We prove convergence of the value functions as ε→0 achieving a value function approximation for small ε. Via sample path analysis and cost function bounds, intervention followed by target zone abandonment emerges as the optimal policy.