Operational Risk is one of the most difficult risks to model. It is a large and diverse category, covering anything from cyber losses to mis-selling fines, and from processing errors to HR issues. Data is usually lacking, particularly for low-frequency, high-impact losses, and consequently there can be a heavy reliance on expert judgement. This paper seeks to help actuaries and other risk professionals tasked with the challenge of validating models of operational risk. It covers the loss distribution and scenario-based approaches most commonly used to model operational risks, as well as Bayesian Networks. It aims to give a comprehensive yet practical guide to how one may validate each of these and to provide assurance that the model is appropriate for a firm’s operational risk profile.
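To make the loss distribution approach named in this abstract concrete, the sketch below simulates annual operational losses by compounding a Poisson frequency with a lognormal severity and reads a 1-in-200 figure off the simulated distribution. This is a minimal illustration, not the paper's method; the distributions, parameter values and simulation count are all assumptions.

```python
import numpy as np

# Minimal loss distribution approach (LDA) sketch: illustrative only.
# Frequency ~ Poisson(lambda), severity ~ Lognormal(mu, sigma); all
# parameter values below are assumptions, not calibrated figures.
rng = np.random.default_rng(seed=42)

n_sims = 100_000               # number of simulated years
freq_lambda = 3.0              # assumed mean number of loss events per year
sev_mu, sev_sigma = 10.0, 1.5  # assumed lognormal severity parameters

# Simulate the annual aggregate loss for each year.
event_counts = rng.poisson(freq_lambda, size=n_sims)
annual_losses = np.array([
    rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in event_counts
])

# A 99.5th-percentile (1-in-200) figure in the Solvency II style.
var_995 = np.quantile(annual_losses, 0.995)
print(f"Mean annual loss: {annual_losses.mean():,.0f}")
print(f"99.5% VaR:        {var_995:,.0f}")
```

In practice the frequency and severity parameters would be calibrated to internal and external loss data and scenario inputs, which is where the validation challenges discussed in the paper arise.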
Solvency II requires that firms with Internal Models derive the Solvency Capital Requirement directly from the probability distribution forecast generated by the Internal Model. A number of UK insurance undertakings do this via an aggregation model consisting of proxy models and a copula. Since 2016 there have been a number of industry surveys on the application of these models, with the 2019 Prudential Regulation Authority (“PRA”)-led industry-wide thematic review identifying a number of areas for enhancement. It concluded that there was, at the time, no uniform best practice. While there have been many competing priorities for insurers since 2019, the Working Party expects that firms will either have already made changes to their proxy modelling approach in light of the PRA survey, or will have plans to do so in the coming years. This paper takes the PRA feedback into account and explores potential approaches to calibration and validation, taking into consideration the different heavy models used within the industry and the relative materiality of business lines.
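For readers unfamiliar with the aggregation structure referred to here, the following is a minimal sketch, assuming a Gaussian copula over two risk drivers whose marginal loss distributions stand in for proxy-model outputs. The correlation, marginals and the simple capital measure at the end are illustrative assumptions, not anything prescribed by the paper or the PRA.

```python
import numpy as np
from scipy import stats

# Gaussian-copula aggregation sketch: two risk drivers whose marginal
# loss distributions (here lognormal and normal) stand in for proxy-model
# outputs. Correlation and parameters are illustrative assumptions.
rng = np.random.default_rng(seed=1)
n_sims = 100_000

corr = np.array([[1.00, 0.25],
                 [0.25, 1.00]])         # assumed dependency structure
z = rng.multivariate_normal(np.zeros(2), corr, size=n_sims)
u = stats.norm.cdf(z)                   # copula step: uniform marginals

# Map uniforms through assumed marginal loss distributions.
loss_market = stats.lognorm.ppf(u[:, 0], s=0.5, scale=100.0)
loss_insurance = stats.norm.ppf(u[:, 1], loc=50.0, scale=20.0)

# A simplified 1-in-200 capital measure (quantile less mean); real
# internal models define the SCR more carefully than this.
total = loss_market + loss_insurance
scr = np.quantile(total, 0.995) - total.mean()
print(f"Aggregate 99.5% capital: {scr:,.1f}")
```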
The market for green bonds, and environmentally aligned investment solutions, is growing. As of 2022, green bond issuance exceeded USD 2 trillion, with India, for example, having issued its first-ever sovereign green bonds, totalling R80bn (c. USD 1bn), in January 2023. This paper lays the foundation for future papers and summarises the initial stages of our analysis, in which we try to replicate the S&P Green Bond Index (i.e. this is a time series problem) over a period using non-traditional techniques. The models we use include neural networks such as CNNs, LSTMs and GRUs. We extend our analysis with an open-source gradient-boosted decision tree model called XGBoost. For the purposes of this paper, we use one day’s prior index information to predict today’s value and repeat this over a period of time. We ignore, for example, stationarity considerations and the extension of the input window/output horizon, as these will be discussed in future papers. The paper explains the methodology used in our analysis and provides general background on the model architectures (CNNs, LSTMs, GRUs and XGBoost), as well as on regularisation techniques, specifically L2 regularisation, loss curves and hyperparameter optimisation, in particular the open-source library Optuna.
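As a hypothetical sketch of the one-day-lag set-up described in this abstract, the code below fits an XGBoost regressor to predict today's level of a synthetic index from yesterday's, with Optuna tuning a few hyperparameters. The synthetic series, parameter ranges and trial count are assumptions standing in for the actual S&P Green Bond Index analysis.

```python
import numpy as np
import optuna
import xgboost as xgb
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for an index level series (the real analysis uses
# the S&P Green Bond Index, which is not reproduced here).
rng = np.random.default_rng(0)
index = 100 + np.cumsum(rng.normal(0.02, 0.4, size=1_000))

# Lag-1 supervised framing: predict today's level from yesterday's.
X, y = index[:-1].reshape(-1, 1), index[1:]
split = int(0.8 * len(y))                 # chronological train/test split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

def objective(trial):
    # Hyperparameter ranges are illustrative assumptions; in practice
    # one would tune against a separate validation set, not the test set.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "max_depth": trial.suggest_int("max_depth", 2, 6),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
    }
    model = xgb.XGBRegressor(**params)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_te, model.predict(X_te))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=25)
print("Best params:", study.best_params)
print("Best MSE:   ", study.best_value)
```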
Running off the £2 trillion of UK corporate sector defined benefit liabilities in an efficient and effective fashion is the biggest challenge facing the UK pensions industry. As more and more defined benefit pension schemes mature, the trustees running those schemes need to consider what their target end-state will be and the associated journey plan. However, too few trustee boards have well-articulated and robust plans. Determining the target end-state requires a grasp of various disciplines and an ability to work collaboratively with different professional advisers. This paper sets out issues that trustees, employers and their advisers can consider when deciding whether their target end-state should be low-dependency, buyout or transfer to a superfund. Member outcomes analysis is introduced as a central tool through which to differentiate alternative target end-states. A five-step methodology is set out for deriving an optimal target end-state for a scheme. The specific factors affecting stressed schemes are also considered, highlighting the importance of trustee boards having a Plan B should their employer or scheme ever become stressed. The paper ends with specific recommendations for the actuarial profession and The Pensions Regulator to take forward.
This paper demonstrates workflows to incorporate text data into actuarial classification and regression tasks. The main focus is on methods employing transformer-based models. A dataset of car accident descriptions with an average length of 400 words, available in English and German, and a dataset of short property insurance claims descriptions are used to demonstrate these techniques. The case studies tackle challenges related to a multilingual setting and long input sequences. They also show ways to interpret model output and to assess and improve model performance by fine-tuning the models to the domain of application or to a specific prediction task. Finally, the paper provides practical approaches to handling classification tasks in situations with few or no labelled data. The results achieved by using the language-understanding skills of off-the-shelf natural language processing (NLP) models with only minimal pre-processing and fine-tuning clearly demonstrate the power of transfer learning for practical applications.
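One concrete instance of the no-labelled-data setting mentioned in this abstract is zero-shot classification with an off-the-shelf model from the Hugging Face transformers library, which can assign a claims description to candidate peril labels without any task-specific training. The model choice, example claim and labels below are illustrative assumptions, not the paper's own case-study data.

```python
from transformers import pipeline

# Zero-shot classification sketch: no labelled training data required.
# Model choice and candidate labels are illustrative assumptions.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

claim = ("Water leaked from a burst pipe in the kitchen and damaged "
         "the wooden flooring and two cabinets.")
labels = ["water damage", "fire", "theft", "storm"]

result = classifier(claim, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:>14}: {score:.3f}")
```

Fine-tuning on even a small number of labelled examples, as the paper's case studies show, would typically improve on this zero-shot baseline.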