15 - Validation of Risk Aggregation in Economic Capital Models
Edited by
David Lynch, Federal Reserve Board of Governors; Iftekhar Hasan, Fordham University Graduate Schools of Business; Akhtar Siddique, Office of the Comptroller of the Currency
12 - Validation of Models Used by Banks to Estimate Their Allowance for Loan and Lease Losses
The Conway–Maxwell–Poisson distribution has garnered interest in, and spurred the development of, other flexible alternatives to classical distributions. This chapter introduces various distributional extensions and generalizations motivated by functions of COM–Poisson random variables, including Conway–Maxwell-inspired generalizations of the Skellam distribution, the binomial distribution, the negative binomial distribution, the Katz class of distributions, two flexible series-system life-length distributions, and generalizations of the negative hypergeometric distribution.
This chapter considers various models that focus largely on serially dependent variables and the respective methodologies developed with a COM–Poisson underpinning. It first introduces the reader to the various stochastic processes that have been established, including a homogeneous COM–Poisson process, a copula-based COM–Poisson Markov model, and a COM–Poisson hidden Markov model. There are then two approaches for conducting time series analysis on time-dependent count data. One approach assumes that the time dependence occurs with respect to the intensity vector; under this framework, the usual time series models that assume a continuous variable can be applied. Alternatively, the time series model can be applied directly to the outcomes themselves. Maintaining the discrete nature of the observations, however, requires a different approach, referred to as a thinning-based method, and different thinning-based operators can be considered for such models. The chapter then broadens the discussion of dependence to consider COM–Poisson-based spatio-temporal models, thus allowing for both serial and spatial dependence among variables.
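One widely used thinning operator in such models is binomial thinning. As a generic illustration only (not the chapter's COM–Poisson construction), the sketch below simulates an INAR(1)-type recursion, with Poisson innovations standing in for a COM–Poisson law; all function names are hypothetical:

```python
import random

def binomial_thinning(count, alpha, rng):
    """Binomial thinning: each of `count` units survives independently
    with probability alpha, keeping the result integer-valued."""
    return sum(1 for _ in range(count) if rng.random() < alpha)

def simulate_inar1(n, alpha, innovation_mean, seed=0):
    """Simulate an INAR(1)-type series X_t = alpha ∘ X_{t-1} + eps_t.
    Poisson innovations are used here as a stand-in for a COM-Poisson law."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplicative method for Poisson sampling
        threshold, k, p = pow(2.718281828459045, -lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    x = poisson(innovation_mean)
    series = [x]
    for _ in range(n - 1):
        x = binomial_thinning(x, alpha, rng) + poisson(innovation_mean)
        series.append(x)
    return series

print(simulate_inar1(10, 0.5, 2.0))
```

The thinning step is what preserves the discrete nature of the observations: it replaces the scalar multiplication of a continuous AR(1) model with an integer-valued random operator.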
This chapter examines wholesale credit risk models and their validation at US banking institutions. The most common practice in wholesale credit risk modeling for loss estimation among large US banking institutions today is to use expected loss models, typically at the loan level. The chapter discusses the quantification and validation of three key risk parameters in this modeling approach, namely, probability of default (PD), loss given default (LGD), and exposure at default (EAD).
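At the loan level these three parameters combine multiplicatively into expected loss, EL = PD × LGD × EAD. A minimal sketch with purely illustrative figures:

```python
def expected_loss(pd, lgd, ead):
    """Loan-level expected loss: EL = PD * LGD * EAD."""
    return pd * lgd * ead

# Hypothetical portfolio of (PD, LGD, EAD) triples, for illustration only
portfolio = [
    (0.02, 0.45, 1_000_000),  # 2% default probability, 45% loss severity
    (0.01, 0.40, 250_000),
    (0.05, 0.60, 500_000),
]
portfolio_el = sum(expected_loss(*loan) for loan in portfolio)
print(portfolio_el)  # 9000 + 1000 + 15000 = 25000.0
```

Validation of the loss estimate therefore reduces largely to validating each parameter model separately, which is the structure the chapter follows.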
This chapter provides an overview of the validation of models that are used in interest rate risk in the banking book (IRRBB). These include models used for Funds Transfer Pricing (FTP) as well as asset–liability management (ALM). FTP is a charge (for assets) or a credit (for liabilities) levied by the corporate treasury on the business unit in order to insulate the business unit from market interest rate fluctuations for the life of the asset (liability). ALM involves modeling of principal and interest cash flows, with positive cash flows for assets and negative cash flows for liabilities.
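The FTP mechanics can be sketched as a matched-maturity transfer rate applied to the balance; the rates, balances, and function names below are purely illustrative assumptions, not figures from the chapter:

```python
def ftp_charge(balance, matched_term_rate):
    """Funds transfer price charged to a lending unit: the treasury
    charges the unit the matched-maturity wholesale rate on the asset,
    insulating it from market-rate moves over the asset's life."""
    return balance * matched_term_rate

def ftp_credit(balance, matched_term_rate):
    """Credit paid to a deposit-gathering unit for the funding it supplies."""
    return balance * matched_term_rate

# Hypothetical: a 5-year loan at 6.5% funded at a 4% matched wholesale rate,
# and a deposit paying 1.5% credited at the same 4% transfer rate.
loan_margin = 0.065 * 1_000_000 - ftp_charge(1_000_000, 0.04)
deposit_margin = ftp_credit(500_000, 0.04) - 0.015 * 500_000
print(loan_margin, deposit_margin)
```

Because the transfer rate is locked at origination, each unit's margin is fixed for the life of the position, and rate risk is concentrated in the treasury where ALM models manage it.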
We study a stochastic differential equation with an unbounded drift and general Hölder continuous noise of order $\lambda \in (0,1)$. The corresponding equation turns out to have a unique solution that, depending on a particular shape of the drift, either stays above some continuous function or has continuous upper and lower bounds. Under some mild assumptions on the noise, we prove that the solution has moments of all orders. In addition, we provide its connection to the solution of some Skorokhod reflection problem. As an illustration of our results and motivation for applications, we also suggest two stochastic volatility models which we regard as generalizations of the CIR and CEV processes. We complete the study by providing a numerical scheme for the solution.
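The paper's scheme for general Hölder-continuous noise is not reproduced here; as a generic point of comparison only, a truncated Euler step for the classical CIR dynamics $dX_t = \kappa(\theta - X_t)\,dt + \sigma\sqrt{X_t}\,dW_t$ can be sketched as follows (all names and parameter values are illustrative assumptions):

```python
import math
import random

def simulate_cir_euler(x0, kappa, theta, sigma, horizon, n_steps, seed=0):
    """Truncated Euler scheme for a CIR-type process
    dX_t = kappa*(theta - X_t) dt + sigma*sqrt(X_t) dW_t.
    A generic sketch only; the paper's scheme for Hölder-continuous
    noise of order lambda in (0,1) is more involved."""
    rng = random.Random(seed)
    dt = horizon / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + kappa * (theta - x) * dt + sigma * math.sqrt(max(x, 0.0)) * dw
        x = max(x, 0.0)  # truncate so the square root stays well-defined
        path.append(x)
    return path

path = simulate_cir_euler(0.04, 1.5, 0.04, 0.3, 1.0, 252)
print(len(path), min(path) >= 0.0)
```

The truncation at zero here plays the role that the lower bound on the solution plays in the paper's setting, where the drift keeps the process above a continuous function.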
This paper deals with ergodic theorems for particular time-inhomogeneous Markov processes, whose time-inhomogeneity is asymptotically periodic. Under a Lyapunov/minorization condition, it is shown that, for any measurable bounded function f, the time average $\frac{1}{t} \int_0^t f(X_s)ds$ converges in $\mathbb{L}^2$ towards a limiting distribution, starting from any initial distribution for the process $(X_t)_{t \geq 0}$. This convergence can be improved to an almost sure convergence under an additional assumption on the initial measure. This result is then applied to show the existence of a quasi-ergodic distribution for processes absorbed by an asymptotically periodic moving boundary, satisfying a conditional Doeblin condition.
To describe the trend in cumulative incidence of coronavirus disease 2019 (COVID-19) and undiagnosed cases over the pandemic through the emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants among healthcare workers in Tokyo, we analysed data from repeated serological surveys and an in-house COVID-19 registry among the staff of the National Center for Global Health and Medicine. Participants were asked to donate venous blood and complete a questionnaire about COVID-19 diagnosis and vaccination. Positive serology was defined as being positive on the Roche or Abbott assay against the SARS-CoV-2 nucleocapsid protein, and cumulative infection was defined as either being seropositive or having a history of COVID-19. Cumulative infection increased from 2.0% in June 2021 (pre-Delta) to 5.3% in December 2021 (post-Delta). After the emergence of the Omicron variant, it increased substantially during 2022 (16.9% in June and 39.0% in December). As of December 2022, 30% of those who had been infected were not aware of their infection. These results indicate that SARS-CoV-2 infection expanded rapidly during the Omicron-variant epidemic among healthcare workers in Tokyo and that a sizable number of infections went undiagnosed.
The two-part framework and the Tweedie generalized linear model (GLM) have traditionally been used to model loss costs for short-term insurance contracts. For most portfolios of insurance claims, there is typically a large proportion of zero claims, an imbalance that lowers the prediction accuracy of these traditional approaches. In this article, we propose tree-based methods with a hybrid structure that involves a two-step algorithm as an alternative approach. In the first step, a classification tree is constructed to build the probability model for claim frequency. In the second step, elastic net regression models are fit at each terminal node of the classification tree to build the distribution models for claim severity. This hybrid structure captures the benefits of tuning hyperparameters at each step of the algorithm, which allows for improved prediction accuracy, and tuning can be performed to meet specific business objectives. Another major advantage of this hybrid structure is improved model interpretability. We examine and compare the predictive performance of this hybrid structure relative to the traditional Tweedie GLM using both simulated and real datasets. Our empirical results show that these hybrid tree-based methods produce more accurate and informative predictions.
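The two-step structure can be illustrated with a deliberately tiny stand-in: a decision stump in place of the classification tree, and a per-node mean in place of the elastic-net severity model. All names and numbers below are hypothetical, and this is a sketch of the idea rather than the article's actual pipeline:

```python
def fit_stump(xs, has_claim):
    """Pick the single-feature split that best separates claim from
    no-claim records (a stand-in for the classification tree)."""
    best = None
    for threshold in sorted(set(xs)):
        errs = sum((x >= threshold) != y for x, y in zip(xs, has_claim))
        errs = min(errs, len(xs) - errs)  # allow either polarity
        if best is None or errs < best[1]:
            best = (threshold, errs)
    return best[0]

def fit_two_step(xs, losses):
    """Step 1: frequency model (stump). Step 2: per-node claim
    probability and mean severity (stand-in for elastic net)."""
    has_claim = [loss > 0 for loss in losses]
    threshold = fit_stump(xs, has_claim)
    model = {}
    for node in (False, True):
        rows = [(y, loss) for x, y, loss in zip(xs, has_claim, losses)
                if (x >= threshold) == node]
        p = sum(y for y, _ in rows) / len(rows)
        severities = [loss for y, loss in rows if y]
        model[node] = (p, sum(severities) / len(severities) if severities else 0.0)
    return threshold, model

def predict(threshold, model, x):
    p, severity = model[x >= threshold]
    return p * severity  # pure premium = frequency * severity

xs = [1, 2, 3, 10, 11, 12]
losses = [0, 0, 0, 500, 0, 700]
threshold, model = fit_two_step(xs, losses)
print(threshold, predict(threshold, model, 11))
```

In the article's full version, each of the two steps has its own hyperparameters (tree depth, elastic-net penalties) that can be tuned separately, which is where the claimed accuracy and interpretability gains come from.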
We present an efficient algorithm to generate a discrete uniform distribution on a set of p elements, for p prime, using a biased random source. The algorithm generalizes Von Neumann's method and improves on the computational efficiency of Dijkstra's method. In addition, the algorithm is extended to generate a discrete uniform distribution on any finite set via the prime factorization of integers. The average running time of the proposed algorithm is overall sublinear: $\operatorname{O}\!(n/\log n)$.
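For p = 2, Von Neumann's classical method debiases a coin by sampling in pairs; a minimal sketch of that base case follows (the paper's generalization to prime p and its efficiency improvements are not reproduced here, and the names are illustrative):

```python
import random

def von_neumann_fair_bit(biased_bit, max_tries=10_000):
    """Von Neumann's method: draw bits in pairs; output the first bit
    on (0,1) or (1,0), discard (0,0) and (1,1). Accepted pairs are
    equally likely regardless of the source's bias."""
    for _ in range(max_tries):
        a, b = biased_bit(), biased_bit()
        if a != b:
            return a
    raise RuntimeError("source produced no unequal pair")

rng = random.Random(42)
biased = lambda: 1 if rng.random() < 0.9 else 0  # heavily biased source
bits = [von_neumann_fair_bit(biased) for _ in range(2000)]
print(abs(sum(bits) / len(bits) - 0.5))
```

The expected number of source bits consumed per output bit grows as the bias worsens, which is the inefficiency that extractor-style generalizations such as the one in this paper aim to reduce.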
Multilayer networks are a focus of current complex network research. In such networks, multiple types of links may exist, as well as many attributes for nodes. To make full use of multilayer and other types of complex networks in applications, merging various data with topological information yields a powerful analysis. First, we suggest a simple way of representing network data in a data matrix where rows correspond to nodes and columns correspond to data items. The number of columns is allowed to be arbitrary, so the data matrix can easily be expanded by adding columns. The data matrix can be chosen according to the targets of the analysis and may vary considerably from case to case. Next, we partition the rows of the data matrix into communities using a method that allows maximal compression of the data matrix. To compress a data matrix, we suggest extending the so-called regular decomposition method to non-square matrices. We illustrate our method for several types of data matrices, in particular distance matrices, and matrices obtained by augmenting a distance matrix with a column of node degrees or by concatenating several distance matrices corresponding to the layers of a multilayer network. We illustrate our method with synthetic power-law graphs and two real networks: an Internet autonomous systems graph and a world airline graph. We compare the outputs of different community recovery methods on these graphs and discuss how incorporating node degrees as a separate column in the data matrix leads our method to identify community structures well aligned with the tiered hierarchical structures commonly encountered in complex scale-free networks.
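The augmentation step described above, appending a node-degree column to a distance matrix, can be sketched as follows (the helper name is hypothetical and the matrices are toy examples):

```python
def build_data_matrix(dist, degrees):
    """Augment an n-by-n distance matrix with a node-degree column,
    giving an n-by-(n+1) data matrix: one row per node, columns
    holding distances plus the extra degree attribute."""
    return [row + [deg] for row, deg in zip(dist, degrees)]

# Toy 3-node example: pairwise distances plus node degrees
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]
degrees = [1, 2, 1]
print(build_data_matrix(dist, degrees))
# [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1]]
```

Further columns (e.g. distances from additional layers of a multilayer network) can be concatenated the same way, which is what makes the representation easy to expand.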