Importance sampling has become a standard tool for computing extreme quantiles and tail-based risk measures. For the estimation of such nonlinear functionals of the underlying distribution, the standard efficiency analysis is not necessarily applicable. In this paper we therefore study importance sampling algorithms by considering moderate deviations of the associated weighted empirical processes. Using a delta method for large deviations, combined with classical large deviation techniques, the moderate deviation principle is obtained for importance sampling estimators of two of the most common risk measures: value at risk and expected shortfall.
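A minimal sketch of such a weighted-empirical estimator of value at risk and expected shortfall, assuming a standard normal loss and an exponentially tilted proposal (both illustrative choices, not the paper's setting):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy setting (an assumption, not the paper's model): losses L ~ N(0, 1),
# VaR and expected shortfall (ES) at level alpha = 0.999.
alpha, n, theta = 0.999, 100_000, 3.0   # theta: exponential tilting parameter

# Sample from the tilted proposal N(theta, 1); the likelihood ratio dP/dQ
# for an exponential tilt of a standard normal is exp(-theta*x + theta^2/2).
x = rng.normal(theta, 1.0, size=n)
w = np.exp(-theta * x + 0.5 * theta**2)

# Weighted empirical tail: VaR_alpha is where the weighted exceedance
# probability drops below 1 - alpha.
order = np.argsort(x)
x_sorted, w_sorted = x[order], w[order]
tail_prob = np.cumsum(w_sorted[::-1])[::-1] / n          # IS estimate of P(L >= x_i)
var_hat = x_sorted[np.searchsorted(-tail_prob, -(1 - alpha))]

# ES_alpha = E[L | L >= VaR_alpha], estimated with the same weights.
mask = x_sorted >= var_hat
es_hat = np.sum(w_sorted[mask] * x_sorted[mask]) / np.sum(w_sorted[mask])

print(f"IS VaR ~ {var_hat:.3f}  (exact {stats.norm.ppf(alpha):.3f})")
print(f"IS ES  ~ {es_hat:.3f}")
```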
Model and parameter uncertainties are common whenever a parametric model is selected to value a derivative instrument. Combining the Monte Carlo method with the Smolyak interpolation algorithm, we propose an accurate and efficient numerical procedure to quantify the uncertainty embedded in complex derivatives. Apart from requiring the value function to be sufficiently smooth with respect to the model parameters, the procedure places no restrictions on the payoff or the candidate models. Numerical tests quantify the uncertainty of Bermudan put options and down-and-out put options under the Heston model, with each model parameter specified in an interval.
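A simplified one-dimensional sketch of the idea, assuming a European put under Black-Scholes with uncertain volatility rather than the paper's Bermudan/Heston setting, and Chebyshev collocation as the 1-D analogue of the Smolyak sparse grid:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in setup: one uncertain parameter (volatility) in an interval.
S0, K, r, T = 100.0, 100.0, 0.02, 1.0
sig_lo, sig_hi = 0.15, 0.35

def mc_put(sigma, n=200_000):
    """Monte Carlo price of a European put for a given parameter value."""
    z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(K - ST, 0).mean()

# Price at Chebyshev nodes, then build a cheap polynomial surrogate of the
# value function over the parameter interval.
deg = 6
nodes = 0.5 * (sig_lo + sig_hi) + 0.5 * (sig_hi - sig_lo) * np.cos(
    np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
cheb = np.polynomial.chebyshev.Chebyshev.fit(nodes, [mc_put(s) for s in nodes], deg)

# Model uncertainty: the range of the surrogate over the interval.
grid = np.linspace(sig_lo, sig_hi, 1001)
vals = cheb(grid)
print(f"price range over sigma in [{sig_lo}, {sig_hi}]: "
      f"[{vals.min():.3f}, {vals.max():.3f}]")
```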
A method for non-rigid image registration that is suitable for large deformations is presented. Conventional registration methods embed the image in a B-spline object and evolve the image by deforming that object. In this work, we represent the image itself with B-splines and deform it using a composition approach. We also derive a computationally efficient algorithm for calculating the B-spline coefficients and gradients of the image by adopting ideas from signal processing, using image filters. We demonstrate our method on several types of 2D and 3D images and compare it with existing methods.
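A minimal sketch of the composition step for dense 2-D displacement fields, using cubic B-spline interpolation; the field shapes and the update loop are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u, v, order=3):
    """Compose two 2-D displacement fields: (u o v)(x) = v(x) + u(x + v(x)).

    u and v have shape (2, H, W); u is interpolated at the warped points with
    cubic B-splines (order=3), mirroring the B-spline image representation.
    """
    H, W = u.shape[1:]
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    warped = np.array([ys + v[0], xs + v[1]])          # x + v(x)
    u_at = np.stack([map_coordinates(u[k], warped, order=order, mode='nearest')
                     for k in range(2)])
    return v + u_at

# Hypothetical usage: accumulate many small updates by composition, which
# keeps large total deformations well behaved.
rng = np.random.default_rng(2)
H, W = 64, 64
total = np.zeros((2, H, W))
for _ in range(10):
    small = 0.5 * rng.standard_normal((2, H, W))
    total = compose(small, total)
```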
We discuss modelling and simulation of volumetric rainfall in a catchment of the Murray–Darling Basin – an important food production region in Australia that was seriously affected by a recent prolonged drought. Consequently, there has been sustained interest in development of improved water management policies. In order to model accumulated volumetric catchment rainfall over a fixed time period, it is necessary to sum weighted rainfall depths at representative sites within each sub-catchment. Since sub-catchment rainfall may be highly correlated, the use of a Gamma distribution to model rainfall at each site means that catchment rainfall is expressed as a sum of correlated Gamma random variables. We compare four different models and conclude that a joint probability distribution for catchment rainfall constructed by using a copula of maximum entropy is the most effective.
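A short sketch of simulating catchment rainfall as a weighted sum of correlated Gamma depths; a Gaussian copula is used here as a stand-in for the paper's maximum-entropy copula, and all weights and Gamma parameters are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

R = np.array([[1.0, 0.7, 0.5],
              [0.7, 1.0, 0.6],
              [0.5, 0.6, 1.0]])          # inter-site correlation of the copula
shapes, scales = [2.0, 1.5, 3.0], [10.0, 12.0, 8.0]   # per-site Gamma marginals
weights = np.array([0.5, 0.3, 0.2])                    # sub-catchment weights

n = 100_000
z = rng.multivariate_normal(np.zeros(3), R, size=n)    # correlated normals
u = stats.norm.cdf(z)                                  # copula uniforms
depths = np.column_stack([stats.gamma.ppf(u[:, i], shapes[i], scale=scales[i])
                          for i in range(3)])
catchment = depths @ weights          # weighted sum of correlated Gamma depths
print(f"mean {catchment.mean():.2f}, 95th pct {np.quantile(catchment, 0.95):.2f}")
```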
In this paper we study a discrete-time optimal switching problem on a finite horizon. The underlying model has a running reward, terminal reward, and signed (positive and negative) switching costs. Using optimal stopping theory for discrete-parameter stochastic processes, we extend a well-known explicit dynamic programming method for computing the value function and the optimal strategy to the case of signed switching costs.
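A minimal backward-recursion sketch for two modes with signed switching costs; the rewards, costs, and i.i.d. shocks are illustrative assumptions, chosen so the value function depends only on time and mode:

```python
import numpy as np

T, n_modes, n_paths = 10, 2, 50_000
rng = np.random.default_rng(4)

def running_reward(t, mode, x):
    return (1.0 if mode == 0 else 1.5) * x - 0.2 * mode

switch_cost = np.array([[0.0, 0.3],
                        [-0.1, 0.0]])    # signed: switching 1 -> 0 earns 0.1

# With i.i.d. shocks the conditional expectations in the dynamic programming
# recursion reduce to plain averages over simulated shocks.
x = rng.exponential(1.0, size=(T, n_paths))
V = np.zeros(n_modes)                    # terminal reward taken to be zero
for t in reversed(range(T)):
    cont = np.array([running_reward(t, m, x[t]).mean() + V[m]
                     for m in range(n_modes)])
    # At each step, either stay or switch and pay (or receive) the cost.
    V = np.array([max(cont[m_to] - switch_cost[m, m_to]
                      for m_to in range(n_modes))
                  for m in range(n_modes)])
print("value by starting mode:", V)
```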
This article addresses the inverse problem of parameter identification in orthotropic materials using measurements taken only on the boundary. The inverse problem is formulated as the minimization of a residual functional that measures the differences between the experimental and predicted displacements. The singular boundary method, an integration-free, mathematically simple, boundary-only meshless method, is employed to compute the predicted displacements. The residual functional is minimized by the Levenberg-Marquardt method. Three numerical examples illustrate the robustness, efficiency, and accuracy of the proposed scheme. In addition, different levels of noise are added to the boundary conditions to verify the stability of the methodology.
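A sketch of the identification loop, with a made-up forward model standing in for the singular boundary method; only the optimization structure (residuals minimized by Levenberg-Marquardt) reflects the abstract:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Hypothetical forward model: boundary displacements depend (mildly
# nonlinearly) on two "orthotropic" parameters.
A = rng.standard_normal((40, 2))         # 40 boundary measurement points

def predicted_displacements(params):
    return A @ params + 0.05 * np.sin(A @ params)

true_params = np.array([2.0, 0.5])
measured = predicted_displacements(true_params) + 0.01 * rng.standard_normal(40)

# Residual functional: differences between experimental and predicted
# displacements, minimized with Levenberg-Marquardt.
def residuals(params):
    return predicted_displacements(params) - measured

fit = least_squares(residuals, x0=np.array([1.0, 1.0]), method='lm')
print("identified parameters:", fit.x)   # close to true_params despite noise
```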
In clinical trials with two treatment arms, Efron's biased coin design (Efron, 1971) sequentially assigns a patient to the underrepresented arm with probability p > ½. Under this design the proportion of patients in each arm converges to ½ at rate n^{-1}, as opposed to n^{-1/2} under some other popular designs. Generalizations of Efron's design to K ≥ 2 arms and an unequal target allocation ratio (q_1, ..., q_K) can be found in the literature, most of which determine the allocation probabilities heuristically. It has been noted, however, that with inappropriate allocation probabilities the proportions of patients in the K arms may fail to converge to the target ratio. We develop a general theory to answer the question of which allocation probabilities ensure that the realized proportions under a generalized design converge to the target ratio (q_1, ..., q_K) at rate n^{-1}.
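A simulation harness for a generalized biased-coin rule; the specific allocation probabilities below (exponentially down-weighting over-represented arms) are an illustrative heuristic, not the rule characterized in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

q = np.array([0.5, 0.3, 0.2])            # target allocation (q_1, ..., q_K)
K, n = len(q), 20_000
counts = np.zeros(K)

for t in range(1, n + 1):
    imbalance = counts / t - q           # positive = over-represented
    # Illustrative rule: bias the target proportions against over-represented arms.
    p = q * np.exp(-5.0 * imbalance)
    p /= p.sum()
    counts[rng.choice(K, p=p)] += 1

# Under a well-chosen rule the realized proportions approach the target at
# rate ~ n^{-1}; here we just inspect the final discrepancy.
print("realized - target:", counts / n - q)
```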
In this paper the optimal dividend (subject to transaction costs) and reinsurance (with two reinsurers) problem is studied in the limit diffusion setting. It is assumed that transaction costs and taxes are required when dividends occur, and that the premiums charged by two reinsurers are calculated according to the exponential premium principle with different parameters, which makes the stochastic control problem nonlinear. The objective of the insurer is to determine the optimal reinsurance and dividend policy so as to maximize the expected discounted dividends until ruin. The problem is formulated as a mixed classical-impulse stochastic control problem. Explicit expressions for the value function and the corresponding optimal strategy are obtained. Finally, a numerical example is presented to illustrate the impact of the parameters associated with the two reinsurers' premium principle on the optimal reinsurance strategy.
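A Monte Carlo sketch of the impulse part of such a strategy: a band-type dividend policy for a diffusion surplus with a fixed transaction cost and proportional tax. All parameters, and the band (a, b) itself, are made up rather than the paper's optimal solution, and the reinsurance control is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

mu, sigma, r = 0.6, 1.0, 0.05             # drift, volatility, discount rate
k, tax = 0.5, 0.1                         # fixed transaction cost, prop. tax
a, b = 2.0, 5.0                           # pay dividends down to a when X >= b
dt, T, n_paths = 0.002, 20.0, 200

def discounted_dividends():
    x, t, total = 3.0, 0.0, 0.0
    while t < T and x > 0:                # stop at ruin or the horizon
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= b:                        # impulse: lump-sum dividend
            total += np.exp(-r * t) * ((1 - tax) * (x - a) - k)
            x = a
    return total

est = np.mean([discounted_dividends() for _ in range(n_paths)])
print(f"expected discounted dividends ~ {est:.3f}")
```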
In this paper we define and study a new class of multivariate counting processes, named the 'multivariate generalized Pólya process'. We first define and study the bivariate generalized Pólya process and briefly discuss its application in reliability. To derive the main properties of the process, we establish some key preliminary results and an important characterization of the process, from which the main properties of the bivariate generalized Pólya process are obtained efficiently. The marginal processes of the multivariate generalized Pólya process are shown to be the univariate generalized Pólya processes studied in Cha (2014). Given the history of one marginal process, the conditional behaviour of the other process is also discussed. The bivariate generalized Pólya process is then extended to the multivariate case. We define a new dependence concept for multivariate point processes and, based on it, analyze the dependence structure of the multivariate generalized Pólya process.
We derive an analytical solution for the value of Parisian up-and-in calls by using the “moving window” technique for pricing European-style Parisian up-and-out calls. Our pricing formula applies to both European-style and American-style Parisian up-and-in calls because, with an 'in' barrier, the option holder can take no action before the option is activated, and once activated the option is simply a plain vanilla call, which may be of either American or European style.
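A Monte Carlo sketch of the European-style Parisian up-and-in payoff as a numerical illustration (not the paper's analytical formula): the option is activated once the price spends a full window of time in one unbroken excursion above the barrier, after which it pays like a vanilla call. All parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(8)

S0, K, L, r, sigma, T = 100.0, 100.0, 110.0, 0.02, 0.2, 1.0
window = 0.1                              # required excursion time above L
n_steps, n_paths = 1_000, 100_000
dt = T / n_steps

S = np.full(n_paths, S0)
age = np.zeros(n_paths)                   # age of the current excursion above L
active = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):
    z = rng.standard_normal(n_paths)
    S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    age = np.where(S > L, age + dt, 0.0)  # reset the clock when below L
    active |= age >= window               # activation is irreversible

payoff = np.where(active, np.maximum(S - K, 0.0), 0.0)
print(f"Parisian up-and-in call ~ {np.exp(-r * T) * payoff.mean():.3f}")
```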
We consider a stochastic evolutionary model for a phenotype developing amongst n related species with unknown phylogeny. The unknown tree is modelled by a Yule process conditioned on n contemporary nodes. The trait value is assumed to evolve along lineages as an Ornstein-Uhlenbeck process. As a result, the trait values of the n species form a sample with dependent observations. We establish three limit theorems for the sample mean corresponding to three domains for the adaptation rate. In the case of fast adaptation, we show that for large n the normalized sample mean is approximately normally distributed. Using these limit theorems, we develop novel confidence interval formulae for the optimal trait value.
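An illustrative simulation of the model, with one simplification: the Yule tree is stopped at its n-th split rather than conditioned exactly on n contemporary tips. The OU transition is exact between speciation events:

```python
import numpy as np

rng = np.random.default_rng(9)

n, lam = 50, 1.0                          # target tip count, speciation rate
alpha, theta, sigma = 2.0, 1.0, 0.5       # OU: adaptation rate, optimum, noise

def ou_step(x, dt):
    """Exact OU transition over time dt, applied to every living lineage."""
    m = theta + (x - theta) * np.exp(-alpha * dt)
    s = sigma * np.sqrt((1 - np.exp(-2 * alpha * dt)) / (2 * alpha))
    return m + s * rng.standard_normal(x.shape)

traits = np.array([theta])                # root starts at the optimum
while len(traits) < n:
    k = len(traits)
    dt = rng.exponential(1.0 / (lam * k)) # time to the next speciation event
    traits = ou_step(traits, dt)
    split = rng.integers(k)               # a uniformly chosen lineage splits
    traits = np.append(traits, traits[split])

# Sample mean of the dependent tip values; with fast adaptation (large alpha)
# its normalized version is approximately normal around theta.
print(f"sample mean of {n} tip traits: {traits.mean():.3f}")
```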
Fractional Lévy processes generalize fractional Brownian motion in a natural way. We go a step further and extend the usual fractional Riemann-Liouville kernel to a regularly varying function. We call the resulting stochastic processes generalized fractional Lévy processes (GFLPs) and show that they may have short or long memory increments and that their sample paths may have jumps or not. Moreover, we define stochastic integrals with respect to a GFLP and investigate their second-order structure and sample path properties. A specific example is the Ornstein-Uhlenbeck process driven by a time-scaled GFLP. We prove a functional central limit theorem for such scaled processes with a fractional Ornstein-Uhlenbeck process as a limit process. This approximation applies to a wide class of stochastic volatility models, which include models where possibly neither the data nor the latent volatility process are semimartingales.
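A discretized sketch of such a process with a one-sided power kernel (t − s)^d and the integral started at 0, which is a simplification of the full Riemann-Liouville kernel; the centered compound Poisson driver and all parameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(10)

d = 0.3                                   # kernel exponent (memory parameter)
n = 2_000
t = np.linspace(0, 1, n + 1)
dt = 1.0 / n

# Compound Poisson increments: jumps arrive at rate 50, with N(0,1) sizes
# (zero mean, so the driver is already centered).
jumps = rng.poisson(50 * dt, size=n)
dL = np.array([rng.standard_normal(k).sum() for k in jumps])

# M_t ~ sum over s_j < t of (t - s_j)^d dL_j: a moving-average smoothing of
# the Lévy increments whose sample paths inherit the driver's jumps.
M = np.array([np.sum((t[i] - t[:i]) ** d * dL[:i]) for i in range(1, n + 1)])
print(f"simulated path: M_1 = {M[-1]:.3f}, max |M_t| = {np.abs(M).max():.3f}")
```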
In this paper a stochastic failure model for a system with stochastically dependent competing failures is analyzed. The system is subject to two types of failure: degradation failure and catastrophic failure. Both types share a common initial source, an external shock process, which makes them stochastically dependent. In developing the model, we characterize the type of dependence between the two kinds of failure. Conditional properties of the two competing risks are also investigated; these properties are the basis for the maintenance strategy studied in this paper. For this maintenance strategy, the long-run average cost rate is derived and the optimal maintenance policy is discussed.
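A toy simulation of this kind of dependence (all distributions and parameters made up): every external shock increments degradation and, with small probability, causes immediate catastrophic failure, so the two failure modes are coupled through the common shock process:

```python
import numpy as np

rng = np.random.default_rng(11)

rate, p_cat, threshold = 2.0, 0.05, 10.0   # shock rate, cat. prob., wear limit

def failure_time_and_type():
    t, wear = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / rate)          # next external shock
        if rng.random() < p_cat:
            return t, "catastrophic"
        wear += rng.exponential(1.0)              # degradation increment
        if wear >= threshold:
            return t, "degradation"

samples = [failure_time_and_type() for _ in range(20_000)]
times = np.array([s[0] for s in samples])
frac_cat = np.mean([s[1] == "catastrophic" for s in samples])
print(f"mean failure time {times.mean():.2f}, catastrophic share {frac_cat:.2%}")
```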
Burn-in is a method of ‘elimination’ of initial failures (infant mortality). In conventional burn-in procedures, to burn-in a component or a system means to subject it to a fixed period of simulated use prior to actual operation. Those that fail during the burn-in procedure are scrapped, and only those that survive it are considered to be of satisfactory quality. In this case, the only information used in the elimination procedure is the lifetime of the corresponding item. In this paper we consider a new burn-in procedure that additionally employs a dependent covariate process in the elimination decision. Through comparison with the conventional burn-in procedure, we show that the new procedure is preferable under commonly satisfied conditions. The problem of determining the optimal burn-in parameters is also considered and properties of the optimal parameters are derived. A numerical example illustrates the theoretical results obtained in this paper.
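A toy comparison of the two elimination schemes (all distributions made up, and the covariate is simply a noisy indicator of frailty rather than the paper's dependent covariate process):

```python
import numpy as np

rng = np.random.default_rng(12)

# A weak subpopulation causes infant mortality; conventional burn-in screens
# on survival of the burn-in period alone, while the covariate-based rule
# also discards survivors with a high observed covariate.
n, p_weak, b = 50_000, 0.2, 1.0           # population, weak fraction, burn-in time
weak = rng.random(n) < p_weak
life = np.where(weak, rng.exponential(0.5, n), rng.exponential(10.0, n))
covariate = weak * 1.0 + rng.normal(0, 0.6, n)     # noisy indicator of frailty

survive = life > b
conventional = survive                              # keep all survivors
covariate_rule = survive & (covariate < 0.8)        # also screen on covariate

for name, keep in [("conventional", conventional), ("covariate", covariate_rule)]:
    res = life[keep] - b                            # residual field life
    print(f"{name:12s}: kept {keep.mean():.1%}, "
          f"mean residual life {res.mean():.2f}")
```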
We introduce a notion of kth order stochastic monotonicity and duality that allows us to unify the notion used in insurance mathematics (sometimes referred to as Siegmund's duality) for the study of ruin probability and the duality responsible for the so-called put-call symmetries in option pricing. Our general kth order duality can be interpreted financially as put-call symmetry for powered options. The main objective of this paper is to develop an effective analytic approach to the analysis of duality that will lead to the full characterization of kth order duality of Markov processes in terms of their generators, which is new even for the well-studied case of put-call symmetries.
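For orientation, the classical first-order (k = 1) instance of put-call symmetry, stated here for the Black-Scholes model with zero interest rate and dividend yield (a well-known special case, shown only to illustrate the duality the paper generalizes):

```latex
% Put-call symmetry under Black-Scholes with r = q = 0:
\[
  C(S_0, K) \;=\; \frac{K}{S_0}\, P\!\left(S_0,\, \frac{S_0^{2}}{K}\right),
\]
% where C and P denote the call and put prices with spot S_0 and the
% indicated strikes. The powered options of the paper replace the payoff
% (S_T - K)^+ by its kth power, ((S_T - K)^+)^k.
```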
In this paper some important properties of the generalized Pólya process are derived and their applications are discussed. The generalized Pólya process is defined via its stochastic intensity. By interpreting this stochastic intensity, we discuss the restarting property of the process, from which the joint distribution of the numbers of events is derived and the conditional joint distribution of the arrival times is obtained. In addition, some properties of the compound process defined for the generalized Pólya process are derived. Furthermore, a new type of repair is defined based on the process and its application to reliability is discussed. Several examples illustrating the applications of the obtained properties to various areas are presented.
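A simulation sketch of a generalized Pólya process, assuming the intensity form λ_t = (αN(t−) + β)λ(t) of Cha (2014) with a constant baseline λ(t) ≡ λ0 (the constant-baseline case makes the inter-arrival times conditionally exponential, so no thinning is needed):

```python
import numpy as np

rng = np.random.default_rng(13)

alpha, beta, lam0, T = 0.5, 1.0, 1.0, 5.0

def gpp_arrival_times():
    t, n, times = 0.0, 0, []
    while True:
        # Given N(t-) = n, the intensity (alpha*n + beta)*lam0 is constant
        # until the next event, so the inter-arrival time is exponential.
        t += rng.exponential(1.0 / ((alpha * n + beta) * lam0))
        if t > T:
            return times
        times.append(t)
        n += 1

counts = [len(gpp_arrival_times()) for _ in range(20_000)]
print(f"E[N(T)] ~ {np.mean(counts):.2f}")   # grows faster than a Poisson process
```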