Journal of Risk Model Validation

Steve Satchell
Trinity College, University of Cambridge

Machine learning has recently come to dominate the world of research, so it is no surprise to find that two of the four papers in this issue of The Journal of Risk Model Validation concern machine learning challenges in risk validation.

The issue’s first paper is “The impact of deterioration in rating-model discriminatory power on expected losses” by Siyi Zhou and Gary van Vuuren. The authors discuss methodologies for estimating the impact on a portfolio’s expected credit loss caused by potential model risks in underwriting, such as those arising from scoring models (for retail banking) and rating models (for wholesale banking). They focus on a particular metric: namely, the impact on expected credit losses when the Gini coefficient weakens. Their approach is analytical, and under fairly simple assumptions they are able to derive tractable and computable results, albeit at the cost of granularity.
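The Gini coefficient (or accuracy ratio) mentioned above measures a rating model's discriminatory power: how well its scores separate defaulters from non-defaulters. As a minimal illustration (not the authors' methodology), the sketch below computes it via its standard relationship to the area under the ROC curve, Gini = 2·AUC − 1, assuming higher scores indicate higher default risk:

```python
import numpy as np

def gini_coefficient(scores, defaults):
    """Accuracy ratio (Gini) of a rating model, via Gini = 2*AUC - 1.

    Assumes higher scores indicate higher default risk.
    """
    scores = np.asarray(scores, dtype=float)
    defaults = np.asarray(defaults, dtype=bool)
    pos = scores[defaults]    # scores of defaulters
    neg = scores[~defaults]   # scores of non-defaulters
    # AUC = P(defaulter scored above non-defaulter), ties counted half.
    diff = pos[:, None] - neg[None, :]
    auc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
    return 2.0 * auc - 1.0

# Perfect discrimination gives Gini = 1; a random model gives roughly 0.
print(gini_coefficient([0.9, 0.8, 0.3, 0.2], [True, True, False, False]))  # 1.0
```

A weakening Gini, in this framing, means the score distributions of defaulters and non-defaulters increasingly overlap, which is the deterioration whose expected-loss impact the paper quantifies.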

The second paper in the issue, “Forecasting India’s foreign trade dynamics: evaluation of alternative forecasting models in the post-pandemic period”, addresses a topic the journal has not previously covered. However, the evaluation of a set of competing models falls under the broad subject of model validation, and the models in question forecast foreign trade growth in India. A. Mansurali, Sarbjit Singh Oberoi, P. Mary Jeyanthi and Sayan Banerjee examine various forecasting models and identify the best model for forecasting India’s trade value. I would add that the notion of a best model depends on the utility function of the model evaluator, and two sensible people with different utilities could consequently come to different conclusions.

“Analyzing credit risk model problems through natural language processing-based clustering and machine learning: insights from validation reports” by Szymon Lis, Mariusz Kubkowski, Olimpia Borkowska, Dobromił Serwa and Jarosław Kurpanik, the third paper in the issue, is a most welcome collision between validation analysis and machine learning, and it may well be a precursor to a much larger future literature on this topic.

Our final paper is also about machine learning. In “Machine learning prediction of loss given default in government-sponsored enterprise residential mortgages”, Zilong Liu and Hongyan Liang evaluate the predictive capabilities of a set of machine learning models for loss given default. Interestingly, the authors found that loan-specific details and broader economic indicators were both relevant in loss given default estimation, and that different algorithms worked better for different sectors; fintech and banking showed different patterns of loss given default, requiring different approaches. This might not seem surprising to an economist, but it is nevertheless pleasing to find these features embedded in the data.
