The Bruss–Robertson–Steele (BRS) inequality bounds the expected number of items of random size that can be packed into a given suitcase. Remarkably, no independence assumptions are needed on the random sizes, which points to a simple explanation: the inequality is the integrated form of an $\omega$-by-$\omega$ inequality, as this note proves.
This paper introduces a flexible risk decomposition method for life insurance contracts involving several risk factors. Hedging can be naturally embedded in the framework. Although the method is applied to variable annuities in this work, it is also applicable to other insurance or financial contracts. The approach relies on applying an allocation principle to the components of a Shapley decomposition of the gain and loss. Implementing the allocation method requires a stochastic-on-stochastic algorithm involving nested simulations. Numerical examples studying the relative impact of equity, interest rate and mortality risk for guaranteed minimum maturity benefit (GMMB) policies conclude our analysis.
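As a rough illustration of the Shapley step only, the sketch below computes a Shapley decomposition of a total loss across risk factors. The value function `toy_loss` and all names are hypothetical stand-ins for the nested-simulation revaluation described in the paper, not the authors' code.

```python
# Minimal sketch of a Shapley decomposition of a gain/loss across risk factors.
# `loss` is a hypothetical user-supplied callable that revalues the contract
# with only the risk factors in `subset` switched on (e.g. via nested Monte Carlo).
from itertools import combinations
from math import factorial

def shapley_decomposition(factors, loss):
    """Return the Shapley share of the total loss for each risk factor."""
    m = len(factors)
    shares = {}
    for i in factors:
        others = [f for f in factors if f != i]
        phi = 0.0
        for k in range(m):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(m - k - 1) / factorial(m)
                phi += weight * (loss(s | {i}) - loss(s))   # weighted marginal contribution
        shares[i] = phi
    return shares

# Toy usage: an additive loss with one interaction term, purely illustrative.
def toy_loss(subset):
    base = {"equity": 3.0, "rates": 1.5, "mortality": 0.5}
    value = sum(base[f] for f in subset)
    if {"equity", "rates"} <= subset:   # interaction split between equity and rates
        value += 1.0
    return value

print(shapley_decomposition(["equity", "rates", "mortality"], toy_loss))
```

By construction the shares sum to the total loss with all factors active minus the loss with none, which is the efficiency property the allocation principle relies on.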
The model uncertainty issue is pervasive in virtually all applied fields but is especially critical in insurance and finance. To hedge against the uncertainty of the underlying probability distribution, which we refer to as ambiguity, the worst case is often considered when quantifying the underlying risk. However, this worst-case treatment often yields results that are overly conservative. We argue that, in most practical situations, a generic risk is realized from multiple scenarios, and the risk in some ordinary scenarios may be subject to negligible ambiguity, so that it is safe to trust the reference distributions there. Hence, we only need to consider the worst case in the other scenarios, where ambiguity is significant. We implement this idea in the study of the worst-case moments of a risk, in the hope of alleviating the over-conservativeness issue. Note that the ambiguity in our consideration exists in both the scenario indicator and the risk in the corresponding scenario, leading to a two-fold ambiguity issue. We employ the Wasserstein distance to construct an ambiguity ball. We then disentangle the ambiguity along the scenario indicator and the risk in the corresponding scenario, converting the two-fold optimization problem into two one-fold problems. Our main result is a closed-form worst-case moment estimate. Our numerical studies illustrate that the consideration of partial ambiguity indeed greatly alleviates the over-conservativeness issue.
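As a purely numerical illustration of worst-case moments over a Wasserstein ball (not the paper's closed-form result or its two-fold scenario setup), the sketch below perturbs the atoms of an empirical distribution subject to a type-2 transport budget; every such perturbation stays inside the ball, so this gives a feasible lower-bound approximation of the worst case. The radius, moment function, and step sizes are illustrative assumptions.

```python
# Minimal sketch: worst-case sample moment over a type-2 Wasserstein ball,
# restricted to atom-wise perturbations of the empirical distribution.
import numpy as np

def worst_case_moment(sample, radius, moment=lambda y: y ** 2,
                      grad=lambda y: 2 * y, steps=500, lr=0.05):
    """Projected gradient ascent on atom locations y_i; the constraint
    sqrt(mean((y_i - x_i)^2)) <= radius guarantees W_2(P_hat, Q) <= radius."""
    x = np.asarray(sample, dtype=float)
    y = x.copy()
    for _ in range(steps):
        y = y + lr * grad(y)                     # ascend the sample moment
        shift = y - x
        cost = np.sqrt(np.mean(shift ** 2))      # transport cost of this coupling
        if cost > radius:
            y = x + shift * (radius / cost)      # project back onto the budget ball
    return np.mean(moment(y))

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
print(np.mean(x ** 2), worst_case_moment(x, radius=0.5))   # reference vs. worst-case second moment
```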
We introduce a new extropy-based measure of inaccuracy between the distribution of the $n$th upper (lower) record value and that of the parent random variable, and discuss some of its properties. A characterization problem for the proposed extropy inaccuracy measure is studied. It is also shown that the proposed measure of inaccuracy is invariant under scale but not under location transformations. We characterize certain specific lifetime distribution functions. Nonparametric estimators of the proposed measure, based on the empirical and kernel methods, are also obtained. The performance of the estimators is also examined using a real dataset.
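For context, the sketch below implements a generic kernel-based estimator of extropy, $J(X) = -\tfrac{1}{2}\int f^2(x)\,dx$; it is a plain extropy estimator under assumed defaults (Gaussian KDE, grid integration), not the record-value inaccuracy measure or the exact estimator studied in the paper.

```python
# Minimal sketch of a kernel-based extropy estimator.
import numpy as np
from scipy.stats import gaussian_kde

def kernel_extropy(sample, grid_size=2048, pad=3.0):
    """Estimate J(X) = -0.5 * integral of f(x)^2 dx from an i.i.d. sample."""
    sample = np.asarray(sample, dtype=float)
    kde = gaussian_kde(sample)                       # Gaussian kernel density estimate
    lo = sample.min() - pad * sample.std()
    hi = sample.max() + pad * sample.std()
    grid = np.linspace(lo, hi, grid_size)
    f_hat = kde(grid)
    dx = grid[1] - grid[0]
    return -0.5 * np.sum(f_hat ** 2) * dx            # Riemann approximation of the integral

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)
# For an exponential with rate 0.5, the true extropy is -rate/4 = -0.125.
print(kernel_extropy(x))
```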
We consider a model of binary opinion dynamics in which one opinion is inherently “superior” to the other, and social agents exhibit a “bias” toward the superior alternative. Specifically, it is assumed that an agent updates its choice to the superior alternative with probability $\alpha > 0$, irrespective of its current opinion and the opinions of other agents. With probability $1-\alpha$, it adopts the majority opinion among two randomly sampled neighbors and itself. We are interested in the time it takes for the network to converge to a consensus on the superior alternative. In a complete graph of size $n$, we show that, irrespective of the initial configuration of the network, the average time to reach consensus scales as $\Theta(n\,\log n)$ when the bias parameter $\alpha$ is sufficiently high, that is, $\alpha > \alpha_c$, where $\alpha_c$ is a threshold parameter that is uniquely characterized. When the bias is low, that is, when $\alpha \in (0,\alpha_c]$, we show that the same rate of convergence can only be achieved if the initial proportion of agents with the superior opinion is above a certain threshold $p_c(\alpha)$. If this is not the case, then we show that the network takes $\Omega(\exp(\Theta(n)))$ time on average to reach consensus.
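A minimal simulation sketch of this update rule on a complete graph is given below. The asynchronous single-agent updates, the sampling of "neighbor" opinions with replacement from the whole population (a large-$n$ approximation), and the parameter values are assumptions made for illustration, not taken from the paper.

```python
# Minimal sketch: biased binary opinion dynamics on a complete graph.
import random

def consensus_time(n, alpha, initial_fraction, seed=0):
    """Count single-agent updates until all n agents hold the superior opinion (1)."""
    rng = random.Random(seed)
    superior = int(initial_fraction * n)                 # agents currently holding opinion 1
    steps = 0
    while superior < n:
        steps += 1
        own = 1 if rng.random() < superior / n else 0    # opinion of a uniformly chosen agent
        if rng.random() < alpha:
            new = 1                                      # biased update: adopt the superior opinion
        else:
            s1 = 1 if rng.random() < superior / n else 0 # two sampled opinions (with replacement,
            s2 = 1 if rng.random() < superior / n else 0 #  large-n approximation of neighbor sampling)
            new = 1 if own + s1 + s2 >= 2 else 0         # majority of the three opinions
        superior += new - own                            # adjust the count if the agent flipped
    return steps

# Illustrative run; alpha and the initial fraction are arbitrary choices.
print(consensus_time(n=1000, alpha=0.3, initial_fraction=0.2))
```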
Consider Bernoulli bond percolation on a graph nicely embedded in hyperbolic space $\mathbb{H}^d$ in such a way that it admits a transitive action by isometries of $\mathbb{H}^d$. Let $p_{\text{a}}$ be the supremum of all percolation parameters such that no point at infinity of $\mathbb{H}^d$ lies in the boundary of the cluster of a fixed vertex with positive probability. Then, for any parameter $p < p_{\text{a}}$, almost surely every percolation cluster is thin-ended, i.e. each of its ends has a one-point boundary.
From Chapter 12 - Validation of Models Used by Banks to Estimate Their Allowance for Loan and Lease Losses
Edited by David Lynch, Federal Reserve Board of Governors; Iftekhar Hasan, Fordham University Graduate Schools of Business; Akhtar Siddique, Office of the Comptroller of the Currency
This chapter describes how validation can be carried out for models used in investment management. It also describes what differentiates investment management models from other models, and ends with a conceptual framework for validating a few investment management risk models.
Survival analysis studies the time-to-event for various subjects. In the biological and medical sciences, interest can focus on patients' time to death due to various (competing) causes. In engineering reliability, one may study the time to component failure due to analogous factors or stimuli. Cure rate models are of particular interest because, with advancements in the associated disciplines, subjects can be viewed as “cured”, meaning that they do not show any recurrence of a disease (in biomedical studies) or subsequent manufacturing error (in engineering) following a treatment. This chapter generalizes two classical cure-rate models via the development of a COM–Poisson cure rate model. The chapter first describes the COM–Poisson cure rate model framework and general notation, and then details the model framework assuming right and interval censoring, respectively. The chapter then describes the broader destructive COM–Poisson cure rate model, which allows the number of competing risks to diminish via damage or eradication. Finally, the chapter details the various lifetime distributions considered in the literature to date for COM–Poisson-based cure rate modeling.
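As a sketch of the standard construction (with generic notation that may differ from the chapter's): if the latent number of competing causes $N$ is COM–Poisson distributed and, given $N$, the causes have i.i.d. latent lifetimes with common survival function $S(t)$, then

$$P(N = n) = \frac{\lambda^{n}}{(n!)^{\nu}\,Z(\lambda,\nu)}, \qquad Z(\lambda,\nu) = \sum_{j=0}^{\infty}\frac{\lambda^{j}}{(j!)^{\nu}}, \qquad n = 0,1,2,\dots,$$

and the population survival function and cure fraction are

$$S_{\mathrm{pop}}(t) = E\!\left[S(t)^{N}\right] = \frac{Z\!\left(\lambda S(t),\nu\right)}{Z(\lambda,\nu)}, \qquad p_{0} = P(N = 0) = \frac{1}{Z(\lambda,\nu)}.$$

Setting $\nu = 1$ recovers the Poisson (promotion time) cure rate model, while letting $\nu \to \infty$ recovers the Bernoulli (mixture) cure rate model, which is the sense in which the COM–Poisson framework generalizes the two classical models.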
This chapter focuses on potential issues bank examiners may face while reviewing the quality of a bank’s model risk management practices and validating the modeling methodologies used to estimate the allowance for loan and lease losses (henceforth ALLL). It discusses both leading and lagging practices in modeling and validating the ALLL and examines upcoming challenges in implementing the new standards for allowance computations. In the context of validating ALLL methodologies under the new accounting standards, the author discusses the challenges in forecasting payoffs on existing credit card balances, issues relating to forecasting the economy and long-term losses, and issues relating to the application of discounting in ALLL computations, among others.
This chapter examines how banks’ Value-at-Risk (VaR) models performed during the COVID-19 crisis, using regulatory trading-desk-level data. It first evaluates whether banks’ VaR models were incomplete by checking whether various factors predict backtesting exceptions. Backtesting exceptions from the past ten business days and the level of the VIX forecast future exceptions, and this predictability from past backtesting exceptions rises during the COVID-19 crisis relative to 2019. The results do not identify any single market factor related to contemporaneous backtesting exceptions. These results hold both in the aggregate and across asset classes.
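As a schematic of this kind of predictive check, on synthetic data, the sketch below regresses a daily exception indicator on the number of exceptions over the prior ten business days and the VIX level. The data-generating process and the logit specification are illustrative assumptions, not the chapter's regression or its regulatory data.

```python
# Minimal sketch: do lagged exceptions and the VIX predict backtesting exceptions?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 750                                              # business days
vix = 15 + 10 * np.abs(rng.standard_normal(T))       # synthetic VIX level
exception = np.zeros(T, dtype=int)
for t in range(10, T):
    recent = exception[t - 10:t].sum()               # exceptions in the past 10 days
    logit = -4.0 + 0.4 * recent + 0.05 * vix[t]      # model used only to simulate data
    exception[t] = rng.random() < 1 / (1 + np.exp(-logit))

y = exception[10:]
X = sm.add_constant(np.column_stack([
    [exception[t - 10:t].sum() for t in range(10, T)],   # lagged exception count
    vix[10:],                                            # contemporaneous VIX
]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.summary())
```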