A set of data with positive values follows a Pareto distribution if the log–log plot of value versus rank is approximately a straight line. A Pareto distribution satisfies Zipf’s law if the log–log plot has a slope of $-1$. Since many types of ranked data follow Zipf’s law, it is considered a form of universality. We propose a mathematical explanation for this phenomenon based on Atlas models and first-order models, systems of strictly positive continuous semimartingales with parameters that depend only on rank. We show that the stationary distribution of an Atlas model will follow Zipf’s law if and only if two natural conditions, conservation and completeness, are satisfied. Since Atlas models and first-order models can be constructed to approximate systems of time-dependent rank-based data, our results can explain the universality of Zipf’s law for such systems. However, ranked data generated by other means may follow non-Zipfian Pareto distributions. Hence, our results explain why Zipf’s law holds for word frequency, firm size, household wealth, and city size, while it does not hold for earthquake magnitude, cumulative book sales, and the intensity of wars, all of which follow non-Zipfian Pareto distributions.
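As a quick illustration of the definition (a minimal sketch, not taken from the paper; the data and parameters below are ours), one can estimate the log–log slope of a ranked data set and check whether it is close to $-1$:

```python
import numpy as np

def loglog_slope(values):
    """Least-squares slope of log(value) against log(rank).

    A roughly linear log-log plot indicates a Pareto distribution;
    a slope near -1 indicates Zipf's law.
    """
    values = np.sort(np.asarray(values, dtype=float))[::-1]  # descending
    ranks = np.arange(1, len(values) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(values), 1)
    return slope

# Exact Zipf data value_k = C / k has slope exactly -1.
zipf_data = 1000.0 / np.arange(1, 501)
print(loglog_slope(zipf_data))    # -1.0

# A Pareto sample with tail index a = 2 is non-Zipfian:
# its rank-size slope is approximately -1/a = -0.5.
rng = np.random.default_rng(0)
pareto_data = rng.pareto(a=2.0, size=500) + 1.0
print(loglog_slope(pareto_data))  # roughly -0.5, up to sampling noise
```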
This paper investigates asset-liability management problems in a continuous-time economy. When the financial market consists of cointegrated risky assets, institutional investors seek to profit from the cointegration feature on the one hand, while on the other they need to maintain a stable surplus level, that is, the company’s wealth less its liability. Challenges occur when the liability is random and cannot be fully financed or hedged through the financial market. For mean–variance investors, an additional concern is rational time-consistency, which requires that a decision made in the future not be constrained by the current surplus level. Putting all these factors together, this paper derives a closed-form feedback equilibrium control for time-consistent mean–variance asset-liability management problems with cointegrated risky assets. The solution is built upon the Hamilton–Jacobi–Bellman framework addressing time inconsistency.
Supra-Bayesianism is the Bayesian response to learning the opinions of others. Probability pooling constitutes an alternative response. One natural question is whether there are cases where probability pooling gives the supra-Bayesian result. This has been called the problem of Bayes-compatibility for pooling functions. It is known that in a common prior setting, under standard assumptions, linear pooling cannot be nontrivially Bayes-compatible. We show by contrast that geometric pooling can be nontrivially Bayes-compatible. Indeed, we show that, under certain assumptions, geometric and Bayes-compatible pooling are equivalent. Granting supra-Bayesianism its usual normative status, one upshot of our study is thus that, in a certain class of epistemic contexts, geometric pooling enjoys a normative advantage over linear pooling as a social learning mechanism. We discuss the philosophical ramifications of this advantage, which we show to be robust to variations in our statement of the Bayes-compatibility problem.
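For concreteness, here is a minimal sketch (our notation, not the authors') of the two pooling rules on a finite space of mutually exclusive hypotheses: linear pooling takes a weighted arithmetic average of the agents' credence functions, while geometric pooling takes a normalised weighted geometric mean. The supra-Bayesian response itself depends on the agent's full model of the other agents, so only the pooling rules are shown.

```python
import numpy as np

def linear_pool(credences, weights):
    """Weighted arithmetic average of the agents' probability vectors."""
    credences = np.asarray(credences, dtype=float)  # rows: agents
    weights = np.asarray(weights, dtype=float)
    return weights @ credences

def geometric_pool(credences, weights):
    """Normalised weighted geometric mean of the probability vectors.

    Geometric pooling commutes with Bayesian conditioning (external
    Bayesianity), a property closely related to Bayes-compatibility.
    """
    credences = np.asarray(credences, dtype=float)
    weights = np.asarray(weights, dtype=float)
    pooled = np.prod(credences ** weights[:, None], axis=0)
    return pooled / pooled.sum()

# Two agents, three hypotheses, equal weights (illustrative numbers).
p = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3]]
w = [0.5, 0.5]
print(linear_pool(p, w))     # [0.4  0.4  0.2]
print(geometric_pool(p, w))  # approx. [0.382  0.427  0.191]
```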
A vexing question in Bayesian epistemology is how an agent should update on evidence to which she assigned zero prior credence. Some theorists have suggested that, in such cases, the agent should update by Kolmogorov conditionalization, a norm based on Kolmogorov’s theory of regular conditional distributions. However, it turns out that in some situations a Kolmogorov conditionalizer will plan to always assign a posterior credence of zero to the evidence she learns. Intuitively, such a plan is irrational and easily Dutch bookable. In this paper, we propose a revised norm, Kolmogorov–Blackwell conditionalization, which avoids this problem. We prove a Dutch book theorem and converse Dutch book theorem for this revised norm, and relate our results to those of Rescorla (2018).
We investigate the probability that an insurance portfolio is ruined within a finite time period, under the assumption that the r largest claims are (partly) reinsured. We show that for regularly varying claim sizes the probability of ruin after reinsurance is also regularly varying in terms of the initial capital, and derive an explicit asymptotic expression for the latter. We establish this result by leveraging recent developments on sample-path large deviations for heavy tails. Our results allow, on the asymptotic level, for an explicit comparison between two well-known large-claim reinsurance contracts, namely LCR and ECOMOR. Finally, we assess the accuracy of the resulting approximations using state-of-the-art rare event simulation techniques.
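As commonly defined in this literature (a sketch under our assumptions; the claim data below are synthetic), LCR with parameter r reimburses the r largest claims, while ECOMOR reimburses the excess of each of the r largest claims over the (r+1)-th largest:

```python
import numpy as np

def lcr(claims, r):
    """Largest Claims Reinsurance: reinsurer pays the r largest claims."""
    return np.sort(claims)[::-1][:r].sum()

def ecomor(claims, r):
    """ECOMOR: reinsurer pays the excess of each of the r largest
    claims over the (r+1)-th largest claim."""
    s = np.sort(claims)[::-1]
    return (s[:r] - s[r]).sum()

rng = np.random.default_rng(5)
claims = rng.pareto(1.8, size=50) + 1.0   # regularly varying claim sizes
print(lcr(claims, 3), ecomor(claims, 3))
```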
We give a dynamic extension of the (static) notion of a deviation measure. We also study law-invariant deviation measures and show that the variance is the only dynamic deviation measure that is law-invariant and recursive.
We extend previous large deviations results for the randomised Heston model to the case of moderate deviations. The proofs involve the Gärtner–Ellis theorem and sharp large deviations tools.
We find explicit estimates for the exponential rate of long-term convergence for the ruin probability in a level-dependent Lévy-driven risk model, as time goes to infinity. Siegmund duality allows us to reduce the problem to long-term convergence of a reflected jump-diffusion to its stationary distribution, which is handled via Lyapunov functions.
We establish two mathematical models, under the so-called intensity and structural frameworks, for pricing a double-defaultable interest rate swap. Default may occur, or the default probability may jump to a high level, for both the fixed-rate and floating-rate parties when predetermined boundaries are reached. The models lead to a new and interesting mathematical problem. As the intensity approaches infinity in the designated regions, the solutions of the intensity models converge to the solution of a structure-type model, which is an initial value problem for a partial differential equation coupled with two obstacle problems on their restricted regions. According to the value of the fixed rate, three cases are discussed. The free boundary that determines the swap rate and the free boundaries that determine early termination of the contract (due to counterparty default) are analysed.
We consider de Finetti’s problem for spectrally one-sided Lévy risk models with control strategies that are absolutely continuous with respect to the Lebesgue measure. Furthermore, we consider the version with a constraint on the time of ruin. To characterize the solution to the aforementioned models, we first solve the optimal dividend problem with a terminal value at ruin and show the optimality of threshold strategies. Next, we introduce the dual Lagrangian problem and show that the complementary slackness conditions are satisfied, characterizing the optimal Lagrange multiplier. Finally, we illustrate our findings with a series of numerical examples.
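To make the notion of a threshold strategy concrete, here is a rough Monte Carlo sketch (ours, not the paper's; it uses a compound Poisson surplus as a simple instance of a spectrally negative Lévy model, and all parameters are illustrative): dividends are paid at the maximal admissible rate whenever the surplus exceeds a threshold b, consistent with the absolute-continuity constraint on the control.

```python
import numpy as np

def discounted_dividends(x0, premium, max_rate, b, lam, q, T, dt,
                         claim_sampler, rng):
    """Euler-type simulation of a compound Poisson (Cramer-Lundberg)
    surplus under a threshold strategy: dividends are paid at the
    maximal admissible rate whenever the surplus exceeds b. Returns
    the discounted dividends collected until ruin or the horizon T."""
    x, t, total = x0, 0.0, 0.0
    while t < T:
        rate = max_rate if x > b else 0.0
        total += np.exp(-q * t) * rate * dt        # discounted dividend flow
        x += (premium - rate) * dt                 # premium income less dividends
        if rng.uniform() < lam * dt:               # a claim arrives
            x -= claim_sampler(rng)
        if x < 0:
            break                                  # ruin
        t += dt
    return total

rng = np.random.default_rng(4)
values = [discounted_dividends(x0=10.0, premium=2.0, max_rate=1.5, b=8.0,
                               lam=1.0, q=0.05, T=100.0, dt=0.01,
                               claim_sampler=lambda r: r.exponential(1.5),
                               rng=rng)
          for _ in range(200)]
print(np.mean(values))   # Monte Carlo value of this threshold strategy
```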
We consider decision problems with arbitrary action spaces, deterministic transitions, and infinite time horizon. In the usual setup when probability measures are countably additive, a general version of Kuhn’s theorem implies under fairly general conditions that for every mixed strategy of the decision maker there exists an equivalent behavior strategy. We examine to what extent this remains valid when probability measures are only assumed to be finitely additive. Under the classical approach of Dubins and Savage (2014), we prove the following statements: (1) If the action space is finite, every mixed strategy has an equivalent behavior strategy. (2) Even if the action space is infinite, at least one optimal mixed strategy has an equivalent behavior strategy. The approach by Dubins and Savage turns out to be essentially maximal: these two statements are no longer valid if we take any extension of their approach that considers all singleton plays.
We study the asymptotic distribution of the total claim amount for marked Poisson cluster models. The marks determine the size and other characteristics of the individual claims and potentially influence the arrival rate of future claims. We find sufficient conditions under which the total claim amount satisfies the central limit theorem or, alternatively, tends in distribution to an infinite-variance stable random variable. We discuss several Poisson cluster models in detail, paying special attention to the marked Hawkes process as our key example.
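As an illustration of the key example (a generic sketch, not the authors' exact model; parameters are ours), a Hawkes process with exponential kernel can be simulated by Ogata's thinning algorithm, with i.i.d. heavy-tailed marks playing the role of claim sizes:

```python
import numpy as np

def simulate_marked_hawkes(mu, alpha, beta, T, mark_sampler, rng):
    """Ogata thinning for a Hawkes process with exponential kernel:
    lambda(t) = mu + sum over past events t_i of alpha * exp(-beta (t - t_i)).
    Stability requires a branching ratio alpha / beta < 1. Marks (claim
    sizes) are i.i.d. and, in this simplified sketch, do not feed back
    into the intensity."""
    times, marks = [], []
    t = 0.0
    while True:
        # Between events the intensity only decays, so its current value
        # is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in times)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in times)
        if rng.uniform() <= lam_t / lam_bar:   # accept w.p. lambda(t)/lam_bar
            times.append(t)
            marks.append(mark_sampler(rng))
    return np.array(times), np.array(marks)

rng = np.random.default_rng(1)
# Pareto claim sizes with infinite variance, the regime in which the
# total claim amount has a stable (non-Gaussian) distributional limit.
claim = lambda r: r.pareto(1.5) + 1.0
times, sizes = simulate_marked_hawkes(mu=0.5, alpha=0.8, beta=2.0,
                                      T=200.0, mark_sampler=claim, rng=rng)
print(len(times), sizes.sum())   # number of claims and total claim amount
```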
In this paper we develop the theory of the W and Z scale functions for right-continuous (upwards skip-free) discrete-time, discrete-space random walks, along the lines of the analogous theory for spectrally negative Lévy processes. Notably, we introduce for the first time in this context the one- and two-parameter scale functions Z, which appear for example in the joint deficit at ruin and time of ruin problems of actuarial science. Comparisons are made between the various theories of scale functions as one makes time and/or space continuous.
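One way to see what $W$ encodes (a Monte Carlo sketch under our assumptions; the step distribution is hypothetical and boundary conventions vary across the literature): for an upwards skip-free walk, the two-sided exit probability from $[0, b]$ through the top equals $W(x)/W(b)$, so estimating these exit probabilities recovers $W$ up to a normalising constant.

```python
import numpy as np

def exit_up_probability(x, b, step_sampler, rng, n_paths=20000):
    """Monte Carlo estimate of P_x(reach b before falling below 0) for an
    upwards skip-free (right-continuous) random walk. Under the two-sided
    exit identity this probability equals W(x) / W(b), which determines
    the scale function W up to a normalising constant."""
    hits = 0
    for _ in range(n_paths):
        s = x
        while 0 <= s < b:
            s += step_sampler(rng)
        hits += (s == b)   # upward jumps have size 1, so b cannot be overshot
    return hits / n_paths

rng = np.random.default_rng(2)
# Hypothetical step distribution: +1 w.p. 0.5, -1 w.p. 0.3, -2 w.p. 0.2.
step = lambda r: r.choice([1, -1, -2], p=[0.5, 0.3, 0.2])
for x in range(1, 5):
    print(x, exit_up_probability(x, b=5, step_sampler=step, rng=rng))
```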
We introduce a model for the execution of large market orders in limit order books, and use a linear combination of self-exciting Hawkes processes to model asset-price dynamics, with the addition of a price-impact function that is concave in the order size. A criterion for a general price-impact function is introduced, which is used to show how specification of a concave impact function affects order execution. Using our model, we examine the immediate and permanent impacts of large orders, analyse the potential for price manipulation, and show the effectiveness of the time-weighted average price strategy. Our model shows that price depends on the balance between the intensities of the Hawkes process, which can be interpreted as a dependence on order-flow imbalance.
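To illustrate what concavity of the impact function entails, here is a toy computation with the square-root specification, a common concave choice that is not necessarily the paper's; the constant k is arbitrary.

```python
import numpy as np

def sqrt_impact(q, k=0.1):
    """Square-root price impact: concave in the order size q."""
    return k * np.sqrt(q)

Q = 10_000
print(sqrt_impact(Q))           # impact of one block order of size Q
print(2 * sqrt_impact(Q / 2))   # total impact of two half-size child orders
# The split orders' impacts sum to sqrt(2) times the block impact, so
# whether splitting a large order (e.g. TWAP) is advantageous hinges on
# how impact decays between child orders, a transient effect that
# self-exciting order-flow dynamics are designed to capture.
```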
In this paper, we extend the work by Sato devoted to the development of economic growth models within the framework of the Lie group theory. We propose a new growth model based on the assumption of logistic growth in factors and derive the corresponding production functions, as well as a compatible notion of wage share. In the process, it is shown that the new functions compare reasonably well against relevant economic data. The corresponding problem of maximisation of profit under conditions of perfect competition is solved with the aid of one of these functions. In addition, it is explained in reasonably rigorous mathematical terms why Bowley’s law no longer holds true in the post-1960 data.
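For readers unfamiliar with the underlying assumption, "logistic growth in factors" refers to dynamics of the following standard form (our notation, not necessarily that of the paper):

$$\frac{dK}{dt} = rK\left(1 - \frac{K}{\bar{K}}\right) \quad\Longrightarrow\quad K(t) = \frac{\bar{K}}{1 + Ce^{-rt}}, \qquad C = \frac{\bar{K}}{K(0)} - 1,$$

where $K(t)$ is a production factor, $r$ its intrinsic growth rate, and $\bar{K}$ its saturation level; in contrast with exponential growth, the factor's expansion slows as it approaches $\bar{K}$.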
De Finetti’s optimal dividend problem has recently been extended to the case when dividend payments can be made only at Poisson arrival times. In this paper we consider the version with bail-outs where the surplus must be nonnegative uniformly in time. For a general spectrally negative Lévy model, we show the optimality of a Parisian-classical reflection strategy that pays the excess above a given barrier at each Poisson arrival time and also reflects from below at 0 in the classical sense.
We study a one-parameter class of examples of optimal transport problems between a two-dimensional source and a one-dimensional target. Our earlier work identified a nestedness condition on the surplus function and marginals, under which it is possible to solve the problem semi-explicitly. In the family of examples we consider, we classify the values of parameters which lead to nestedness. In those cases, we derive an almost explicit characterisation of the solution.
In this paper we study a finite-fuel two-dimensional degenerate singular stochastic control problem under regime switching motivated by the optimal irreversible extraction problem of an exhaustible commodity. A company extracts a natural resource from a reserve with finite capacity and sells it in the market at a spot price that evolves according to a Brownian motion with volatility modulated by a two-state Markov chain. In this setting, the company aims at finding the extraction rule that maximizes its expected discounted cash flow, net of the costs of extraction and maintenance of the reserve. We provide expressions for both the value function and the optimal control. On the one hand, if the running cost for the maintenance of the reserve is a convex function of the reserve level, the optimal extraction rule prescribes a Skorokhod reflection of the (optimally) controlled state process at a certain state and price-dependent threshold. On the other hand, in the presence of a concave running cost function, it is optimal to instantaneously deplete the reserve at the time at which the commodity's price exceeds an endogenously determined critical level. In both cases, the threshold triggering the optimal control is given in terms of the optimal stopping boundary of an auxiliary family of perpetual optimal selling problems with regime switching.
In this paper we are concerned with a model in econophysics, the subfield of statistical physics that applies concepts from traditional physics to economics. Our model is an example of an interacting particle system with disorder, meaning that some of the transition rates are not identical but rather drawn from a fixed distribution. Economic agents are represented by the vertices of a connected graph and are characterized by the number of coins they possess. Agents independently spend one coin at rate one for their basic needs, earn one coin at a rate chosen independently from a distribution ϕ, and exchange money at rate µ with one of their nearest neighbors, with the richest neighbor giving one coin to the other neighbor. If an agent needs to spend one coin when his/her fortune is at 0, he/she dies, i.e. the corresponding vertex is removed from the graph. Our first results focus on the two extreme cases of lack of cooperation (µ = 0) and perfect cooperation (µ = ∞) for finite connected graphs. These results suggest that, when overall the agents earn more than they spend, cooperation is beneficial for the survival of the population, whereas when overall the agents earn less than they spend, cooperation becomes detrimental. We also study the infinite one-dimensional system. In this case, when the agents earn less than they spend on average, the density of agents that die eventually is bounded from below by a positive constant that does not depend on the initial number of coins per agent or the level of cooperation.
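The finite-graph dynamics are simple enough to simulate directly. Here is a rough Gillespie-style sketch on a cycle graph (the implementation choices, the distribution ϕ, and all parameters below are ours, purely for illustration):

```python
import numpy as np

def simulate(n_agents, init_coins, phi_sampler, mu, T, rng):
    """Gillespie simulation of the coin-exchange model on a cycle graph.

    Living agents spend one coin at rate 1, earn one coin at an
    agent-specific rate drawn once from phi, and at rate mu exchange
    with a random living nearest neighbour (the richer of the pair
    gives one coin to the poorer). An agent forced to spend at
    fortune 0 dies and its vertex is removed."""
    coins = {v: init_coins for v in range(n_agents)}
    earn = {v: phi_sampler(rng) for v in range(n_agents)}
    t = 0.0
    while coins and t < T:
        alive = list(coins)
        rates = np.array([1.0 + earn[v] + mu for v in alive])
        t += rng.exponential(1.0 / rates.sum())
        v = alive[rng.choice(len(alive), p=rates / rates.sum())]
        u = rng.uniform(0.0, 1.0 + earn[v] + mu)
        if u < 1.0:                              # spend one coin
            if coins[v] == 0:
                del coins[v]                     # fortune 0: agent dies
            else:
                coins[v] -= 1
        elif u < 1.0 + earn[v]:                  # earn one coin
            coins[v] += 1
        else:                                    # exchange with a neighbour
            nbrs = [w for w in ((v - 1) % n_agents, (v + 1) % n_agents)
                    if w in coins]
            if nbrs:
                w = nbrs[rng.integers(len(nbrs))]
                rich, poor = (v, w) if coins[v] >= coins[w] else (w, v)
                if coins[rich] > 0:
                    coins[rich] -= 1
                    coins[poor] += 1
    return coins

rng = np.random.default_rng(3)
phi = lambda r: r.exponential(1.2)   # mean earn rate 1.2 > spend rate 1
alive = simulate(n_agents=100, init_coins=5, phi_sampler=phi,
                 mu=2.0, T=200.0, rng=rng)
print(len(alive), "of 100 agents alive at time T")
```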
We establish a “top-down” approximation scheme for the loss distributions of reinsurance products and insurance-linked securities based on three input parameters, namely the attachment probability, expected loss, and exhaustion probability. Our method is rigorously derived by utilizing a classical result from extreme value theory, the Pickands–Balkema–de Haan theorem. The robustness of the scheme is demonstrated by proving sharp error bounds for the approximated curves with respect to the supremum and $L^2$ norms. The practical implications of our findings are examined by applying the scheme to industry loss warranties: the method performs very accurately for each transaction. Our approach can be used in a variety of applications, such as vendor model blending, portfolio optimization, and premium calculation.
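One plausible reading of such a top-down scheme (our reconstruction, not necessarily the paper's exact construction; all inputs below are illustrative) fits a generalised Pareto exceedance curve on the layer normalised to $[0, 1]$, matching the attachment and exhaustion probabilities exactly and the expected loss as closely as possible:

```python
import numpy as np

def gpd_exceedance(x, pa, xi, beta):
    """Exceedance curve S(x) = pa * (1 + xi * x / beta)^(-1/xi) on [0, 1]."""
    return pa * np.power(1.0 + xi * x / beta, -1.0 / xi)

def fit_layer_curve(pa, pe, el, xi_grid=np.linspace(-0.5, 5.0, 2000)):
    """Grid search for GPD parameters on a layer normalised to [0, 1]:
    the attachment probability S(0) = pa and exhaustion probability
    S(1) = pe are matched exactly (beta is solved from xi), and xi is
    chosen so the implied expected loss is closest to el."""
    x = np.linspace(0.0, 1.0, 1001)
    best = None
    for xi in xi_grid:
        if abs(xi) < 1e-3:
            continue                                 # skip exponential limit
        beta = xi / ((pe / pa) ** (-xi) - 1.0)       # enforces S(1) = pe
        el_xi = gpd_exceedance(x, pa, xi, beta).mean()  # ~ integral of S
        if best is None or abs(el_xi - el) < abs(best[2] - el):
            best = (xi, beta, el_xi)
    return best

# Illustrative inputs: attachment probability 5%, exhaustion
# probability 1%, expected loss 2.5% of the layer limit.
xi, beta, el_fit = fit_layer_curve(pa=0.05, pe=0.01, el=0.025)
print(f"xi={xi:.3f}, beta={beta:.3f}, fitted EL={el_fit:.4f}")
```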