We extend the classical setting of an optimal stopping problem under full information to include problems with an unknown state. The framework allows the unknown state to influence (i) the drift of the underlying process, (ii) the payoff functions, and (iii) the distribution of the time horizon. Since the stopper is assumed to observe the underlying process and the random horizon, this is a two-source learning problem. By assigning a prior distribution to the unknown state, standard filtering theory can be employed to embed the problem in a Markovian framework with one additional state variable representing the posterior of the unknown state. We provide a convenient formulation of this Markovian problem, based on a measure change technique that decouples the underlying process from the new state variable. Moreover, we show by means of several novel examples that this reduced formulation can be used to solve problems explicitly.
In this article we consider the estimation of the log-normalization constant associated with a class of continuous-time filtering models. In particular, we consider ensemble Kalman–Bucy filter estimates based upon several nonlinear Kalman–Bucy diffusions. Using new conditional bias results for the mean of the aforementioned methods, we analyze the empirical log-scale normalization constants in terms of their $\mathbb{L}_n$-errors and $\mathbb{L}_n$-conditional bias. Depending on the type of nonlinear Kalman–Bucy diffusion, we show that these are bounded above by terms such as $\mathsf{C}(n)\left[t^{1/2}/N^{1/2} + t/N\right]$ or $\mathsf{C}(n)/N^{1/2}$ ($\mathbb{L}_n$-errors) and $\mathsf{C}(n)\left[t+t^{1/2}\right]/N$ or $\mathsf{C}(n)/N$ ($\mathbb{L}_n$-conditional bias), where $t$ is the time horizon, $N$ is the ensemble size, and $\mathsf{C}(n)$ is a constant that depends only on $n$, not on $N$ or $t$. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.
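The ensemble idea underlying the abstract above can be illustrated with a toy one-dimensional ensemble Kalman–Bucy filter, Euler-discretized in time. This is a hedged sketch under assumed model choices (scalar signal $dX = aX\,dt + \sqrt{r}\,dW$, observation increments $dY = cX\,dt + \sqrt{s}\,dV$, and perturbed-observation gain), not the specific diffusions or normalization-constant estimators analyzed in the paper; the function name and all parameter values are illustrative.

```python
import math
import random

def enkbf_1d(dys, dt, a=-0.5, c=1.0, r=0.3, s=0.2, n_ens=200, seed=1):
    """Toy 1-D ensemble Kalman-Bucy filter (Euler scheme).

    dys   : observation increments dY_k over steps of length dt
    a, c  : signal drift and observation coefficients (assumed model)
    r, s  : signal and observation noise intensities
    Returns the ensemble mean after each step.
    """
    rng = random.Random(seed)
    ens = [rng.gauss(0.0, 1.0) for _ in range(n_ens)]
    means = []
    for dy in dys:
        m = sum(ens) / n_ens
        # empirical ensemble variance drives the (scalar) Kalman gain
        p = sum((x - m) ** 2 for x in ens) / (n_ens - 1)
        gain = p * c / s
        ens = [
            x
            + a * x * dt                                   # signal drift
            + math.sqrt(r * dt) * rng.gauss(0.0, 1.0)      # signal noise
            + gain * (dy - c * x * dt
                      - math.sqrt(s * dt) * rng.gauss(0.0, 1.0))  # innovation
            for x in ens
        ]
        means.append(sum(ens) / n_ens)
    return means
```

As the abstract's error bounds suggest, the accuracy of such ensemble estimates improves with the ensemble size $N$ (here `n_ens`) but can degrade with the time horizon $t$.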
Full likelihood inference under Kingman's coalescent is a computationally challenging problem to which importance sampling (IS) and the product of approximate conditionals (PAC) methods have been applied successfully. Both methods can be expressed in terms of families of intractable conditional sampling distributions (CSDs), and rely on principled approximations for accurate inference. Recently, more general Λ- and Ξ-coalescents have been observed to provide better modelling fits to some genetic data sets. We derive families of approximate CSDs for finite sites Λ- and Ξ-coalescents, and use them to obtain ‘approximately optimal’ IS and PAC algorithms for Λ-coalescents, yielding substantial gains in efficiency over existing methods.
Particle filters are Monte Carlo methods that aim to approximate the optimal filter of a partially observed Markov chain. In this paper, we study the case in which the transition kernel of the Markov chain depends on unknown parameters: we construct a particle filter for the simultaneous estimation of the parameter and the partially observed Markov chain (adaptive estimation) and we prove the convergence of this filter to the correct optimal filter, as time and the number of particles go to infinity. The filter presented here generalizes Del Moral's Monte Carlo particle filter.
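The augmented-state idea in the abstract above can be sketched with a toy bootstrap particle filter in which each particle carries both an unknown transition parameter and the hidden state. This is a hypothetical illustration: the AR(1) model, the uniform prior on the parameter, and the choice to keep each particle's parameter fixed (no rejuvenation) are simplifying assumptions, not the construction proved convergent in the paper.

```python
import math
import random

def adaptive_bootstrap_filter(ys, n=1000, sig_x=0.5, sig_y=0.5, seed=0):
    """Toy adaptive bootstrap particle filter.

    Assumed model: x_k = theta * x_{k-1} + N(0, sig_x^2),
                   y_k = x_k + N(0, sig_y^2),
    with theta unknown and given a Uniform(-1, 1) prior.
    Each particle is a pair (theta, x); theta stays fixed along a
    particle's path, so learning happens purely through resampling.
    Returns the posterior-mean estimate of theta.
    """
    rng = random.Random(seed)
    parts = [(rng.uniform(-1.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(n)]
    for y in ys:
        # propagate each particle with its own theta
        parts = [(th, th * x + rng.gauss(0.0, sig_x)) for th, x in parts]
        # weight by the Gaussian observation likelihood
        w = [math.exp(-(y - x) ** 2 / (2.0 * sig_y ** 2)) for _, x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        # multinomial resampling
        parts = rng.choices(parts, weights=w, k=n)
    return sum(th for th, _ in parts) / n
```

Because fixed per-particle parameters degenerate under repeated resampling, practical schemes add parameter rejuvenation or let the particle count grow, which is the regime in which convergence results of the kind stated above apply.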
Hoeffding's inequality can be used in conjunction with the declared parameters of a traffic source, such as its peak rate, to obtain confidence intervals for measurements of the traffic's effective bandwidth. We describe a variety of interval-estimation procedures based on this idea, designed to provide differing degrees of robustness against non-stationarity. We also discuss how to compute confidence intervals for the effective bandwidth of an aggregate of traffic sources.
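The basic construction behind the abstract above can be sketched as follows. With the effective bandwidth defined as $\alpha(s,t) = (st)^{-1}\log \mathbb{E}[e^{sX}]$ for the workload $X$ over a window of length $t$, a declared peak rate bounds $e^{sX}$ in $[1, e^{sPt}]$, so Hoeffding's inequality gives a confidence interval for the empirical mean of $e^{sX}$ and hence for $\alpha(s,t)$. This is a minimal illustration of the idea, not one of the specific interval-estimation procedures developed in the paper; the function name and the confidence level are illustrative.

```python
import math

def effective_bandwidth_ci(samples, s, t, peak_rate, delta=0.05):
    """Hoeffding-based (1 - delta) confidence interval for the
    effective bandwidth alpha(s, t) = (1/(s*t)) * log E[exp(s*X)].

    samples   : measured workloads X_i over windows of length t
    peak_rate : declared peak rate, so each X_i lies in [0, peak_rate*t]
    """
    n = len(samples)
    m = sum(math.exp(s * x) for x in samples) / n
    # exp(s*X) lies in [1, exp(s*peak_rate*t)]; Hoeffding radius:
    width = math.exp(s * peak_rate * t) - 1.0
    eps = width * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    lo = max(m - eps, 1.0)  # exp(s*X) >= 1 since X >= 0
    hi = m + eps
    return math.log(lo) / (s * t), math.log(hi) / (s * t)
```

The interval tightens as the sample count grows, but its width also depends exponentially on the declared peak rate through $e^{sPt}$, which is why a conservative peak declaration yields loose intervals.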