Journal of Risk Model Validation

Steve Satchell
Trinity College, University of Cambridge

This issue of The Journal of Risk Model Validation is a little more “high-tech” than most, reflecting developments in computing technology research. Readers bewildered by data science should bear in mind that the issue still focuses mainly on statistics, with an added emphasis on the design of algorithms and models. This point will become clear when you look at the issue’s four papers.

The first paper in the issue, by Sergio Caprioli, Emanuele Cagliero and Riccardo Crupi, is titled “Quantifying credit portfolio sensitivity to asset correlations with interpretable generative neural networks”. The authors propose an approach for quantifying the sensitivity of credit portfolio value-at-risk to asset correlations using synthetic financial correlation matrixes generated with deep learning models. In previous work, Gautier Marti employed generative adversarial networks to generate plausible correlation matrixes that capture the essential characteristics observed in empirical correlation matrixes estimated on asset returns. Here, instead of generative adversarial networks, the authors employ variational autoencoders (VAEs) to obtain both a more interpretable latent space representation and a generator of plausible correlation matrixes, produced by sampling the VAE’s latent space. The latter, they claim, can be a useful tool for capturing the crucial factors affecting portfolio diversification, particularly the sensitivity of a credit portfolio to changes in asset correlations. They train a VAE on a historical time series of correlation matrixes to generate synthetic correlation matrixes that satisfy a set of expected financial properties. Their analysis indicates that the realistic data-augmentation capabilities of VAEs, combined with the interpretability of the model, could prove useful for risk management purposes, enhancing the resilience and accuracy of models in backtesting, since past data may exhibit biases and may not contain the high-stress events that are essential for evaluating diverse risk scenarios.
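For readers who want a concrete sense of the mechanics, the sketch below shows the general shape of such an approach: a small variational autoencoder over the flattened upper triangles of correlation matrixes, trained and then sampled in its latent space. It is purely illustrative; the PyTorch tooling, the dimensions and the stand-in training data are my own assumptions, not the authors’ implementation.

```python
import torch
import torch.nn as nn

N_ASSETS = 10
DIM = N_ASSETS * (N_ASSETS - 1) // 2   # number of upper-triangle correlations
LATENT = 2                             # small latent space, easier to interpret

class CorrVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT)
        self.logvar = nn.Linear(64, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                 nn.Linear(64, DIM), nn.Tanh())  # correlations lie in (-1, 1)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    recon_loss = ((recon - x) ** 2).sum()                         # reconstruction error
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to standard normal prior
    return recon_loss + kl

# Stand-in training data: in the paper this would be flattened upper triangles of
# historical correlation matrixes, not uniform noise.
x = torch.rand(256, DIM) * 2 - 1
model = CorrVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling the latent space yields synthetic correlation triangles; a real pipeline
# would still project each decoded matrix to the nearest valid (positive semidefinite,
# unit-diagonal) correlation matrix and check the desired financial properties.
with torch.no_grad():
    synthetic = model.dec(torch.randn(5, LATENT))
```

The low-dimensional latent space is what gives the method its interpretability: moving along a latent coordinate and decoding traces out a family of plausible correlation regimes.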

In the issue’s second paper, “Financial distress prediction with optimal decision trees based on the optimal sampling probability”, Guotai Chi, Cun Li, Ying Zhou and Taotao Li develop a tree-based ensemble model for financial distress prediction. They obtain multiple balanced samples using different sampling probabilities; the optimal sampling probability is the one that maximizes the geometric-mean value, and the optimal decision tree models are constructed from the balanced samples drawn with this probability. The model is validated on a sample of Chinese listed companies, and the authors also test its effectiveness over different time windows. Their empirical results show that, in financial distress prediction, the proposed model outperforms the comparison models across the time windows considered.
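To make the idea of an “optimal sampling probability” concrete, here is a hedged sketch: decision trees are fitted on balanced samples drawn with different undersampling probabilities for the majority class, and the probability with the best geometric mean of class-wise recalls on a validation split is retained. The scikit-learn tooling, the synthetic data and the definition of the G-mean as the square root of sensitivity times specificity are my assumptions for illustration, not the authors’ exact procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Synthetic stand-in for a distress data set: roughly 5% "distressed" firms (label 1).
X, y = make_classification(n_samples=3000, weights=[0.95], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

def balanced_sample(X, y, p, rng):
    """Keep every distressed firm; keep each healthy firm with probability p."""
    keep = (y == 1) | (rng.random(len(y)) < p)
    return X[keep], y[keep]

rng = np.random.default_rng(0)
best_p, best_gmean = None, -np.inf
for p in np.linspace(0.05, 1.0, 20):                   # candidate sampling probabilities
    Xs, ys = balanced_sample(X_tr, y_tr, p, rng)
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xs, ys)
    pred = tree.predict(X_val)
    sens = recall_score(y_val, pred, pos_label=1)       # recall on distressed firms
    spec = recall_score(y_val, pred, pos_label=0)       # recall on healthy firms
    gmean = np.sqrt(sens * spec)                        # geometric mean of the two recalls
    if gmean > best_gmean:
        best_p, best_gmean = p, gmean

print(f"selected sampling probability {best_p:.2f}, G-mean {best_gmean:.3f}")
```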

Our third paper, “Default prediction based on a locally weighted dynamic ensemble model for imbalanced data”, is by Jin Xing, Guotai Chi and Ancheng Pan. To avoid the bias in default predictions that arises from the imbalance between the numbers of defaulting and non-defaulting firms, the study proposes a locally weighted dynamic ensemble model. To construct more diverse base classifiers, ten imbalanced-data sampling methods and five heterogeneous classifiers are introduced into balanced bagging, and the base classifiers with the highest accuracy under different data distributions are selected. To reduce overfitting and information loss, the authors use a locally weighted dynamic ensemble method to obtain the final prediction. Experiments on three publicly available data sets and a data set of Chinese listed firms confirm that the proposed ensemble model outperforms the other 15 models considered. Moreover, Xing et al claim that the proposed model can predict financial institutions’ default status five years ahead.
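The core of dynamic, locally weighted combination can be illustrated as follows: each base classifier’s vote on a test point is weighted by its accuracy on that point’s nearest neighbours in a validation set. Again, this is a simplified sketch using scikit-learn and synthetic data, with only three heterogeneous base learners standing in for the paper’s much larger pool of resampled models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Imbalanced stand-in data: roughly 10% defaults (label 1).
X, y = make_classification(n_samples=4000, weights=[0.9], random_state=1)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, stratify=y, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            stratify=y_rest, random_state=1)

# A few heterogeneous base classifiers (the paper builds a much larger pool on
# differently resampled data).
bases = [DecisionTreeClassifier(max_depth=5, random_state=1).fit(X_tr, y_tr),
         LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
         GaussianNB().fit(X_tr, y_tr)]
val_preds = [clf.predict(X_val) for clf in bases]   # used to measure local competence
te_preds = [clf.predict(X_te) for clf in bases]

nn_index = NearestNeighbors(n_neighbors=25).fit(X_val)
_, neigh_idx = nn_index.kneighbors(X_te)            # local region around each test point

final = np.empty(len(X_te), dtype=int)
for i, neigh in enumerate(neigh_idx):
    # Weight each classifier by its accuracy on this test point's neighbourhood,
    # then take a weighted majority vote.
    w = np.array([(p[neigh] == y_val[neigh]).mean() for p in val_preds])
    votes = np.array([p[i] for p in te_preds])
    final[i] = int(np.round(np.average(votes, weights=w + 1e-9)))

print("test accuracy:", (final == y_te).mean())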

In the issue’s fourth and final paper, Guanghui Han, Panpan Liu, Yueqiang Zhang and Xiaobo Li carry out “A study of China’s financial market risks in the context of Covid-19, based on a rolling generalized autoregressive score model using the asymmetric Laplace distribution”. To fully capture the degree of risk in the financial market, they use the asymmetric Laplace distribution (ALD) to describe the distributional characteristics of financial returns; the ALD is a good choice because it captures many features of the data while remaining reasonably tractable analytically. They then establish a generalized autoregressive score (GAS) model with time-varying parameters estimated on a rolling basis, and they use it to compute the risk measures value-at-risk and expected shortfall, building a dynamic risk measurement model for financial market returns during the Covid-19 pandemic. The authors select a data set based on the Shanghai Stock Exchange for their empirical analysis, using parametric and nonparametric methods for comparison and backtesting. The findings show that value-at-risk and expected shortfall based on the GAS-ALD model do well in predicting volatility risk in the Chinese markets studied, and that expected shortfall predicts the loss risk of stock returns more accurately than value-at-risk in extreme cases. In addition, the authors find that the pandemic had a large impact on the raw materials and energy industries but a smaller impact on the financial industry. They also make some policy recommendations that may be of particular interest to government administrators, regulators and enterprises.
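As a rough illustration of ALD-based tail risk measures (though not of the GAS recursion itself, which lets the distribution’s parameters evolve with the score of the likelihood), the sketch below refits an asymmetric Laplace distribution on a rolling window of simulated returns and reads off value-at-risk and expected shortfall. The use of scipy’s laplace_asymmetric, the window length and the stand-in return series are my assumptions, not the authors’ specification.

```python
import numpy as np
from scipy.stats import laplace_asymmetric

rng = np.random.default_rng(2)
returns = rng.standard_t(df=4, size=750) * 0.01           # stand-in for daily index returns
window, alpha = 250, 0.05                                 # rolling window and tail level

var_path, es_path = [], []
for t in range(window, len(returns)):
    sample = returns[t - window:t]
    kappa, loc, scale = laplace_asymmetric.fit(sample)         # refit ALD on the window
    var_t = laplace_asymmetric.ppf(alpha, kappa, loc, scale)   # 5% value-at-risk (left tail)
    # Expected shortfall: mean return conditional on falling below VaR, by simulation.
    sims = laplace_asymmetric.rvs(kappa, loc, scale, size=20_000, random_state=rng)
    es_t = sims[sims <= var_t].mean()
    var_path.append(var_t)
    es_path.append(es_t)

print("latest VaR and ES:", var_path[-1], es_path[-1])
```

The rolling refit is the simplest way to let the parameters drift over time; the GAS approach replaces it with an explicit updating equation driven by the scaled score of the ALD likelihood.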
