Journal of Risk Model Validation
ISSN: 1753-9579 (print); 1753-9587 (online)
Editor-in-chief: Steve Satchell
Volume 16, Number 2 (June 2022)
Editor's Letter
Steve Satchell
Trinity College, University of Cambridge
If you have been to, or will be attending, a conference in 2022, you will almost certainly hear a presentation based on environmental, social and governance (ESG) concerns or on applications of artificial intelligence (AI) or machine learning (ML). This issue of The Journal of Risk Model Validation reflects this trend, with two of our four papers being in these broad areas.
Our first paper, “Can we take the ‘stress’ out of stress testing? Applications of generalized structural equation modeling to consumer finance” by José J. Canals-Cerdá, advocates a global approach to risk model validation. The author proposes an empirical framework based on recent developments in the implementation of generalized structural equation modeling (GSEM), which brings to bear a modular and all-inclusive approach to statistical model building. For those unfamiliar with this approach, it is a “syntax” of modeling instructions that allows the user to build highly sophisticated models using simple instructions. Canals-Cerdá goes on to apply the approach to a credit risk example for a mortgage book, among numerous other applications. Indeed, he illustrates “how GSEM techniques can significantly enhance every step of the modeling management life cycle, from development and documentation to validation, production and redevelopment”. This paper should be of great interest to many of our readers.
The issue’s second paper, “An end-to-end deep learning approach to credit scoring using CNN + XGBoost on transaction data” by Lars Ole Hjelkrem, Petter Eilif de Lange and Erik Nesset, examines various approaches to credit scoring models developed on financial behavioral data about potential customers from a Norwegian bank. The data in question are sourced from open banking application programming interfaces (APIs), which could provide information about potential clients based on historical data on their balances and transactions. Hjelkrem et al find that traditional regression models perform poorly, while ML methods can provide models with satisfactory performance based on these data alone. Further, they find that the best-performing models are based on an end-to-end deep learning approach (a technique where the model learns all the steps between the initial input phase and the final output result, and where all of the different parts are trained simultaneously instead of sequentially) in which the ML algorithms create the explanatory variables based on nonaggregated data. For those wary of AI methods, this paper is a digestible application of what might otherwise seem mysterious techniques and dark arts.
Sanja Doncic, Nemanja Pantić, Marija Lakićević and Nikola Radivojević use neural nets for risk estimation in the third paper in this issue: “Expected shortfall model based on a neural network”. Their twist is to combine this approach with extreme value theory, which describes how the extrema of samples behave as the sample size becomes large. While this paper may sound rather technical, the application concerns 15 indexes of emerging European capital markets. Model validation in the context of Basel III standards was done using Berkowitz’s expected shortfall backtesting based on bootstrap simulation and Acerbi and Szekely’s first method. I am pleased by the paper’s focus on model validation and by its application to emerging markets, as the little we know about the statistical properties of emerging markets is that they are highly nonnormal and unlikely to have return processes that are describable by simple linear models. For these reasons, the use of such techniques seems warranted.
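For readers who would like a concrete anchor for the quantity being modeled, the sketch below shows a plain historical estimator of expected shortfall: the average loss beyond the tail quantile, at the 2.5% tail probability used in the Basel III ES requirement. This is only an illustrative baseline; the paper's model, which combines neural networks with extreme value theory, is considerably more elaborate.

```python
import numpy as np

def expected_shortfall(returns, alpha=0.025):
    """Historical expected shortfall: the mean loss in the worst
    alpha-fraction of returns (losses are negative return values).

    alpha=0.025 matches the 2.5% tail used in Basel III's ES measure.
    Reported as a positive loss number.
    """
    returns = np.sort(np.asarray(returns, dtype=float))  # worst first
    n_tail = int(np.ceil(alpha * len(returns)))          # tail observations
    tail = returns[:n_tail]                              # worst alpha-fraction
    return -tail.mean()                                  # ES as positive loss
```

For example, on 200 equally spaced returns from -10.0% to +9.9%, the worst 2.5% are the five returns from -10.0% to -9.6%, giving an ES of 9.8%.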
The issue’s last paper, “General bounds on the area under the receiver operating characteristic curve and other performance measures when only a single sensitivity and specificity point is known” by Roger M. Stein, is a more traditional piece of theory to do with model validation. Receiver operating characteristic (ROC) curves are often used to quantify the performance of predictive models used in diagnosis, risk stratification and rating systems. The area under the ROC curve summarizes the ROC in a single statistic, and Stein provides an interesting analysis, together with developments in the theory of this and other aspects of model validation along with applications. Although this paper might seem more academic than our standard fare, we are pleased to include it as its breadth of discussion and clarity make it worthy of being read by a wide audience.
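To give a flavor of the geometry involved, the sketch below computes the classical bounds on the area under a ROC curve passing through a single observed (sensitivity, specificity) point. These elementary bounds only illustrate the idea; Stein's paper develops more general bounds for the AUC and other performance measures.

```python
def auc_bounds(sensitivity, specificity):
    """Bounds on the area under any monotone ROC curve through the
    single observed point (FPR, TPR) = (1 - specificity, sensitivity).

    Lower bound: area of the trapezoid joining (0, 0), the observed
    point and (1, 1) with straight lines.
    Upper bound: area under the staircase rising to the observed TPR
    at FPR = 0, through the point, then jumping to TPR = 1.
    """
    lower = (sensitivity + specificity) / 2
    upper = 1 - (1 - sensitivity) * (1 - specificity)
    return lower, upper
```

For instance, a classifier observed at sensitivity 0.8 and specificity 0.7 has an AUC somewhere between 0.75 and 0.94, whatever the rest of its ROC curve looks like.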
Papers in this issue
Can we take the “stress” out of stress testing? Applications of generalized structural equation modeling to consumer finance
This paper provides a practical introduction to the GSEM statistical framework in risk management, and it illustrates the game-changing potential of this methodology with two empirical applications.
An end-to-end deep learning approach to credit scoring using CNN + XGBoost on transaction data
The authors find that machine learning methods can generate satisfactorily performing credit score models based on data from the 90 days prior to the score date, whereas traditional models can perform poorly.
Expected shortfall model based on a neural network
This paper presents a model that combines ES models based on EVT and neural networks and meets all criteria for the validity of the Basel III standard.
General bounds on the area under the receiver operating characteristic curve and other performance measures when only a single sensitivity and specificity point is known
Using a single sensitivity and specificity point, the author shows how to derive bounds on the area under a ROC curve.