Edited by David Lynch, Federal Reserve Board of Governors; Iftekhar Hasan, Fordham University Graduate Schools of Business; Akhtar Siddique, Office of the Comptroller of the Currency
This chapter presents a general approach for evaluating the empirical performance of VaR models. The approach leverages the data used in standard VaR backtesting, filtered on selected conditioning variables, to test specific properties of the model. This simple but general approach can be used to test a wide range of model properties and to identify potential areas of improvement for a VaR model.
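As a concrete illustration of the filtering idea, the sketch below splits a standard backtesting sample on a single conditioning variable (realized volatility here) and runs a binomial coverage test within each bucket. The data, names, and choice of conditioning variable are illustrative assumptions, not the chapter's.

```python
# Minimal sketch of conditional VaR backtesting: split the usual
# exception series on a conditioning variable and run a binomial
# coverage test within each bucket. All names are illustrative.
import numpy as np
from scipy.stats import binomtest

def conditional_coverage_test(pnl, var_99, condition, alpha=0.01):
    """Test 99% VaR coverage separately for low/high values of a
    conditioning variable (e.g., realized volatility)."""
    exceptions = pnl < -var_99          # VaR breach indicator
    split = np.median(condition)
    results = {}
    for label, mask in [("low", condition <= split),
                        ("high", condition > split)]:
        n, k = mask.sum(), exceptions[mask].sum()
        # Under correct coverage, breaches are Binomial(n, alpha)
        results[label] = binomtest(int(k), int(n), alpha).pvalue
    return results

# Synthetic example: a model that ignores volatility dynamics should
# show a small p-value in the "high" bucket.
rng = np.random.default_rng(0)
vol = rng.uniform(1.0, 3.0, 1000)
pnl = rng.normal(0.0, vol)
var_99 = 2.326 * np.full(1000, vol.mean())   # flat VaR, ignores vol
print(conditional_coverage_test(pnl, var_99, vol))
```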
While the Poisson model motivated much of the classical control chart theory for count data, several works note its constraining equi-dispersion assumption. Dispersion must be addressed because over-dispersed data can produce false out-of-control detections under Poisson limits, while under-dispersed data produce Poisson limits that are too broad, resulting in potential false negatives and out-of-control states that require a longer study period to detect. Section 6.1 introduces the Shewhart COM–Poisson control chart, demonstrating its flexibility in assessing in- or out-of-control status, along with advancements made to this chart. These initial works led to a wellspring of flexible control chart development motivated by the COM–Poisson distribution. Section 6.2 describes a generalized exponentially weighted moving average control chart, and Section 6.3 describes cumulative sum charts for monitoring COM–Poisson processes. Section 6.4 introduces generally weighted moving average charts based on the COM–Poisson distribution, and Section 6.5 presents the Conway–Maxwell–Poisson chart via the progressive mean statistic. Finally, the chapter concludes with a discussion.
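For intuition on why dispersion changes the limits, here is a minimal sketch of Shewhart-style probability limits computed from COM–Poisson quantiles. The parameterization P(X = x) ∝ λ^x / (x!)^ν is the standard one, but the truncation level, α, and parameter values below are assumptions for illustration; in practice λ and ν would be estimated from an in-control phase-I sample.

```python
# Shewhart-style probability limits for a COM-Poisson process,
# computed from quantiles of the truncated pmf. Illustrative only.
import numpy as np

def com_poisson_pmf(x_max, lam, nu):
    """P(X=x) = lam^x / ((x!)^nu * Z) on a truncated support 0..x_max."""
    x = np.arange(x_max + 1)
    # log(x!) via cumulative sums of log k, with log(0!) = 0
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, x_max + 1)))))
    log_terms = x * np.log(lam) - nu * log_fact
    log_terms -= log_terms.max()         # stabilize before exponentiating
    p = np.exp(log_terms)
    return p / p.sum()

def shewhart_limits(lam, nu, alpha=0.0027, x_max=200):
    """Lower/upper control limits as COM-Poisson quantiles."""
    cdf = np.cumsum(com_poisson_pmf(x_max, lam, nu))
    lcl = int(np.searchsorted(cdf, alpha / 2))
    ucl = int(np.searchsorted(cdf, 1 - alpha / 2))
    return lcl, ucl

# nu < 1: over-dispersed relative to Poisson; nu > 1: under-dispersed.
print(shewhart_limits(lam=4.0, nu=0.5))   # wider limits
print(shewhart_limits(lam=4.0, nu=1.0))   # ordinary Poisson case
print(shewhart_limits(lam=4.0, nu=2.0))   # narrower limits
```

Running the three cases shows directly how Poisson limits (ν = 1) would be too narrow for the over-dispersed process and too wide for the under-dispersed one.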
This chapter provides an alternative to exceedance- or density-based evaluation of VaR models, based on empirical likelihood. We also outline a method to infer risk exposure by assessing whether some measure of interval forecast error (e.g., the distance between the empirical distribution of the PITs and the posited uniform distribution) is related to a measure of risk exposure, such as the volatility of given risk factors.
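A hedged sketch of the second idea: compute a rolling measure of PIT non-uniformity (a Kolmogorov–Smirnov distance here) and regress it on a volatility proxy. The window length, the data-generating process, and all names are invented for illustration, not taken from the chapter.

```python
# Relate a rolling measure of PIT non-uniformity to a risk-exposure
# proxy. Synthetic data; variable names are assumptions.
import numpy as np
from scipy.stats import kstest, linregress

rng = np.random.default_rng(1)
n, window = 2000, 250
vol = np.exp(rng.normal(0.0, 0.3, n))      # risk-factor volatility proxy
# Synthetic PITs that drift away from uniformity as volatility rises
pits = rng.beta(1.0 + vol, 1.0, n)

ks_dist, mean_vol = [], []
for start in range(0, n - window, window):
    sl = slice(start, start + window)
    ks_dist.append(kstest(pits[sl], "uniform").statistic)
    mean_vol.append(vol[sl].mean())

# A significant slope suggests forecast errors scale with exposure.
fit = linregress(mean_vol, ks_dist)
print(f"slope={fit.slope:.3f}, p-value={fit.pvalue:.3g}")
```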
This chapter assesses the accuracy and possible misspecification of VaR models and compares backtesting results using PITs against results using exceedances for the same sample of real portfolios. It investigates results from a set of tests used to assess the unconditional coverage, conditional coverage, and independence properties of realized VaR exceptions. It also presents a comprehensive overview of tests used to assess the uniformity and independence properties of a series of PIT estimates generated from real-world risk models. The analysis includes tests based on the empirical CDF (e.g., Kolmogorov–Smirnov, Cramér–von Mises, and Anderson–Darling) as well as tests of dependence based on regression analysis of observed PITs.
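Two of the empirical-CDF tests named above are available off the shelf; the sketch below applies the Kolmogorov–Smirnov and Cramér–von Mises tests to a PIT series with scipy and adds a simple lag regression as an independence check. An Anderson–Darling test against the uniform needs a custom statistic and is omitted here; the synthetic PITs and the lag-1 regression form are assumptions.

```python
# Empirical-CDF uniformity tests on a PIT series, plus a simple
# regression-based independence check. Synthetic data for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pits = rng.uniform(size=500)                 # stand-in for model PITs

# Uniformity tests against U(0,1)
ks = stats.kstest(pits, "uniform")
cvm = stats.cramervonmises(pits, "uniform")
print(f"KS:  stat={ks.statistic:.3f}  p={ks.pvalue:.3f}")
print(f"CvM: stat={cvm.statistic:.3f}  p={cvm.pvalue:.3f}")

# Independence check: regress each PIT on its lag; under independence
# the slope should be indistinguishable from zero.
fit = stats.linregress(pits[:-1], pits[1:])
print(f"lag-1 slope={fit.slope:.3f}  p={fit.pvalue:.3f}")
```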
As the Reynolds number increases, large-eddy simulation (LES) of complex flows becomes increasingly intractable because near-wall turbulent structures become increasingly small. Wall modeling reduces the computational requirements of LES by enabling the use of coarser cells at the walls. This paper presents a machine-learning methodology to develop data-driven wall-shear-stress models that can operate directly, a posteriori, on the unstructured grid of the simulation. The model architecture is based on graph neural networks. The model is trained on a database that includes fully developed boundary layers, adverse pressure gradients, separated boundary layers, and laminar–turbulent transition. The relevance of the trained model is verified a posteriori in simulations of a channel flow, a backward-facing step, and a linear blade cascade.
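To make the architecture concrete, here is a minimal NumPy sketch of the single message-passing step that graph neural networks build on, with near-wall grid points as nodes and cell adjacency as edges. The paper's actual features, layer count, and training setup are not reproduced; every name and dimension below is illustrative.

```python
# One message-passing step on an unstructured-grid graph: each node
# aggregates neighbor features, mixes them with its own, and maps the
# result to a per-node wall-shear-stress prediction. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_feat = 6, 4
x = rng.normal(size=(n_nodes, n_feat))       # node features (e.g., velocity,
                                             # wall distance, pressure gradient)
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]])  # cell adjacency

W_self = rng.normal(size=(n_feat, n_feat)) * 0.1   # untrained weights
W_nbr = rng.normal(size=(n_feat, n_feat)) * 0.1
w_out = rng.normal(size=n_feat) * 0.1

# Mean-aggregate neighbor features over incident edges (both directions)
agg = np.zeros_like(x)
deg = np.zeros(n_nodes)
for i, j in edges:
    agg[i] += x[j]; agg[j] += x[i]
    deg[i] += 1; deg[j] += 1
agg /= np.maximum(deg, 1)[:, None]

h = np.tanh(x @ W_self + agg @ W_nbr)        # updated node embeddings
tau_w = h @ w_out                            # per-node wall-shear-stress output
print(tau_w)
```

Because the update only touches a node and its graph neighbors, the same trained weights apply to any unstructured mesh, which is what lets such a model run directly on the simulation grid.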
This chapter introduces the Conway–Maxwell–Poisson regression model, along with adaptations of the model that account for zero-inflation, censoring, and data clustering. Section 5.1 motivates the consideration and development of the various COM–Poisson regressions. Section 5.2 introduces the regression model and discusses related issues, including parameter estimation, hypothesis testing, and statistical computing in R. Section 5.3 advances that work to address excess zeros, while Section 5.4 describes COM–Poisson models that incorporate repeated measures and longitudinal studies. Section 5.5 focuses on the R statistical packages and functionality associated with regression analysis that accommodates excess zeros and/or clustered data as described in the two previous sections. Section 5.6 considers a generalized additive model based on the COM–Poisson distribution. Finally, Section 5.7 informs readers of other statistical computing software that can also conduct COM–Poisson regression, discussing the associated functionality. The chapter concludes with a discussion.
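Although the chapter works in R, the core estimation problem translates directly: maximize the COM–Poisson log-likelihood with log λ_i = x_i′β and a common dispersion ν. The truncated normalizing constant, the optimizer, and the simulated data in the sketch below are assumptions for illustration, not the chapter's implementation.

```python
# COM-Poisson regression by direct maximum likelihood:
# log(lambda_i) = x_i' beta, common dispersion nu. Illustrative sketch.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, logsumexp

def com_poisson_loglik(params, X, y, x_max=100):
    *beta, log_nu = params
    nu = np.exp(log_nu)                       # keep dispersion positive
    lam = np.exp(X @ np.asarray(beta))
    support = np.arange(x_max + 1)
    # log Z(lam_i, nu) via logsumexp over a truncated support
    log_terms = (support[None, :] * np.log(lam)[:, None]
                 - nu * gammaln(support + 1)[None, :])
    log_z = logsumexp(log_terms, axis=1)
    return np.sum(y * np.log(lam) - nu * gammaln(y + 1) - log_z)

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1]))  # true nu = 1 (Poisson case)

res = minimize(lambda p: -com_poisson_loglik(p, X, y),
               x0=np.zeros(3), method="Nelder-Mead")
beta_hat, nu_hat = res.x[:2], np.exp(res.x[2])
print(beta_hat, nu_hat)   # expect roughly (0.5, 0.3) and nu near 1
```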
15 - Validation of Risk Aggregation in Economic Capital Models
This chapter implements a coherent statistical framework for the validation of economic capital models via copula methods, using a unique dataset to aggregate credit, market, operational, and interest rate risks. The framework includes benchmarking with alternative copula models and backtesting with alternative penalty functions, in addition to stability and stress tests of economic capital estimates. The analysis incorporates the latest supervisory guidance on model validation (SR 11-7) and Basel Accord changes (Basel III); it draws on proprietary, confidential loss data from major US banks for market risk and operational risk; and it includes both analytic and visual goodness-of-fit tests for copula models. For the data used in this study, the t copula with four degrees of freedom provides a good statistical fit, superior backtesting performance, reasonable model stability, and sufficient sensitivity to stress. In addition, the results provide some support for regulators' hesitation to recognize diversification benefits by demonstrating a wide range of diversification benefits across risk types under different dependence models.
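As a hedged sketch of the aggregation mechanics, the code below simulates joint losses from a Student-t copula with four degrees of freedom (the fit the chapter reports) and contrasts diversified versus undiversified capital at a high quantile. The correlation matrix, marginals, and confidence level are illustrative assumptions, not the chapter's calibration.

```python
# Copula-based risk aggregation: sample a t copula, map to loss
# marginals, and compare diversified vs. standalone capital.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
nu = 4                                     # copula degrees of freedom
corr = np.array([[1.0, 0.3, 0.2, 0.1],    # credit, market, op, IR risk
                 [0.3, 1.0, 0.25, 0.15],  # (illustrative dependence)
                 [0.2, 0.25, 1.0, 0.2],
                 [0.1, 0.15, 0.2, 1.0]])
n_sim = 100_000

# Sample the t copula: correlated normals scaled by a chi-square draw
L = np.linalg.cholesky(corr)
z = rng.normal(size=(n_sim, 4)) @ L.T
chi = rng.chisquare(nu, size=(n_sim, 1))
t_draws = z / np.sqrt(chi / nu)
u = stats.t.cdf(t_draws, df=nu)            # copula sample in [0,1]^4

# Map to illustrative lognormal loss marginals per risk type
sigmas = np.array([1.0, 0.8, 1.2, 0.6])
losses = stats.lognorm.ppf(u, s=sigmas)

q = 0.999                                  # economic-capital confidence level
diversified = np.quantile(losses.sum(axis=1), q)
standalone = np.quantile(losses, q, axis=0).sum()
print(f"diversification benefit: {1 - diversified / standalone:.1%}")
```

Re-running with a heavier-tailed or more strongly dependent copula shrinks the printed benefit, which is the mechanism behind the wide range of diversification benefits the chapter documents.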
Supervisory applications are rife with examples where models are used and consequently require appropriate validation. This chapter compares and contrasts the notions of model risk and model uncertainty as they relate to both model choice and validation strategy. The chapter adopts a decision-theoretic architecture (cf. ch. 7 of Optimal Statistical Decisions by M. DeGroot, 1970) and advances a thesis of how risk model validation can be carried out via utility optimization. An empirical exercise illustrating these themes, based on a Home Mortgage Disclosure Act (HMDA) data set, concludes the chapter.
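A toy sketch of the decision-theoretic framing: each validation decision (approve or reject a model) is scored by its expected utility under beliefs about model adequacy, and the action with the higher expected utility is taken. The probabilities and utilities below are invented purely for illustration.

```python
# Expected-utility comparison of validation decisions for two candidate
# risk models. All numbers are invented for illustration.

# Posterior belief that each model is adequate (vs. misspecified)
p_adequate = {"model_A": 0.85, "model_B": 0.70}
# Utility of each action in each state: approving a bad model is costly
u_approve = {"adequate": 1.0, "misspecified": -5.0}
u_reject = {"adequate": -0.5, "misspecified": 0.2}

for name, p in p_adequate.items():
    eu_approve = p * u_approve["adequate"] + (1 - p) * u_approve["misspecified"]
    eu_reject = p * u_reject["adequate"] + (1 - p) * u_reject["misspecified"]
    decision = "approve" if eu_approve > eu_reject else "reject"
    print(f"{name}: EU(approve)={eu_approve:.2f}, "
          f"EU(reject)={eu_reject:.2f} -> {decision}")
```

The asymmetric utilities encode the supervisory concern: a false approval (using a misspecified model) is penalized far more heavily than a false rejection.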