Journal of Risk Model Validation

Steve Satchell
Trinity College, University of Cambridge


We live in difficult times, and these difficulties have extended to the publishing and editing of journals. To the authors who have experienced delays in our handling of their submissions, I ask for tolerance and patience; we have had a number of administrative issues that we are currently addressing, and we hope to get back to normal as soon as possible. I am, however, pleased to be able to deliver an editorial for the four fine papers that we include in this issue of The Journal of Risk Model Validation.

Our first paper, by Marcin Fałdziński and Magdalena Osińska, is “The use of range-based volatility estimators in testing for Granger causality in risk on international capital markets”. Here, the authors employ extreme value theory (EVT) to compare the performance of a wide variety of range-based volatility estimators in the analysis of causality in risk between emerging and developed markets. For those familiar with the mysteries of volatility modeling, the AR(1)–GARCH(1,1) model with t-distributed errors is used as a benchmark. Regulator and firm loss functions are used to select the best volatility model. Two tests of causality in risk are used in the authors’ empirical study. They identify the model that best captures large risks. They also identify the markets that are the most “risk-taking”: these include the Standard & Poor’s 500, the CAC 40, the Nikkei 225, the Nasdaq and the FTSE 100.
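
For readers who want to experiment with this kind of setup, a minimal sketch might look as follows. It assumes daily return and high/low price data; the `arch` Python package and the Parkinson range estimator are my illustrative choices, not necessarily the authors’ exact implementation.

```python
# A minimal sketch (not the authors' code) of the paper's benchmark:
# an AR(1)-GARCH(1,1) model with Student-t errors, fitted with the
# `arch` package, alongside a range-based variance estimator.
import numpy as np
from arch import arch_model

def parkinson_variance(high, low):
    """Parkinson (1980) range-based daily variance estimate."""
    return (np.log(high / low) ** 2) / (4.0 * np.log(2.0))

def fit_benchmark(returns):
    """Fit the AR(1)-GARCH(1,1)-t benchmark to daily returns."""
    am = arch_model(returns, mean="AR", lags=1,
                    vol="GARCH", p=1, q=1, dist="t")
    res = am.fit(disp="off")
    return res.conditional_volatility  # benchmark volatility path
```

The range-based estimates can then be compared against the benchmark’s conditional volatility under a chosen loss function.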


“A k-means++-improved radial basis function neural network model for corporate financial crisis early warning: an empirical model validation for Chinese listed companies”, the issue’s second paper, is by Danyang Lv, Chong Wu and Linxiao Dong. The authors look at early warning systems, a topic that most of us have worked on at some point in time, so we know the inherent challenges involved. The paper aims to simplify the early warning model for financial crises by collecting and analyzing the financial data of Chinese special treatment (ST) companies, normally listed companies and companies whose ST status has been revoked. To predict the financial risks of companies, the authors put forward a prediction model based on the k-means++ algorithm and an improved radial basis function neural network (RBF NN), and they compare the resulting performance statistics. Their experiments indicate that combining k-means++ with the improved RBF NN helps to better predict financial risks for companies, which supports risk control in financial management.
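
Without claiming to reproduce the authors’ exact architecture, the general idea can be sketched in a few lines: k-means++ chooses the RBF centres, a Gaussian layer maps inputs to features, and a linear readout is fitted by least squares.

```python
# A hedged sketch of the generic k-means++/RBF-NN combination (not the
# authors' model): k-means++ initialisation places the RBF centres.
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBFNet:
    def __init__(self, n_centers=10, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _features(self, X):
        # Gaussian RBF activation for each centre
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers,
                    init="k-means++", n_init=10).fit(X)
        self.centers_ = km.cluster_centers_
        # Linear readout fitted by least squares on the RBF features
        self.w_, *_ = np.linalg.lstsq(self._features(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._features(X) @ self.w_
```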


Our third paper is “Benchmarking loss given default discount rates”, in which Harald Scheule and Stephan Jortzik provide a theoretical and empirical analysis of alternative discount rate concepts. The paper benchmarks five such concepts for discounting workout recovery cashflows to derive observed losses given default (LGDs), assessing them in terms of economic robustness and empirical implications: the contract rate at origination, the loan-weighted average cost of capital, the return on equity (ROE), the market return on defaulted debt and the market equilibrium return. The paper develops guiding principles for LGD discount rates and argues that the weighted average cost of capital and the market equilibrium return dominate the popular contract rate method. The empirical analysis of data provided by Global Credit Data shows that declining risk-free rates are in part offset by increasing market risk premiums. Common empirical discount rates lie between the risk-free rate and the ROE. The variation in empirical LGDs is moderate across the various discount rate approaches.
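
A toy calculation (my numbers, not the paper’s data) shows how the choice of discount rate moves an observed workout LGD: discount the recovery cashflows and compare their present value against the exposure at default (EAD).

```python
# Illustrative only: observed workout LGD under different discount rates.
def workout_lgd(cashflows, times, rate, ead):
    pv = sum(cf / (1.0 + rate) ** t for cf, t in zip(cashflows, times))
    return 1.0 - pv / ead

# Recoveries of 30 and 40 at years 1 and 2 on an EAD of 100
cfs, ts, ead = [30.0, 40.0], [1.0, 2.0], 100.0
for label, r in [("risk-free-like", 0.02),
                 ("contract-rate-like", 0.06),
                 ("ROE-like", 0.12)]:
    print(f"{label}: LGD = {workout_lgd(cfs, ts, r, ead):.3f}")
```

Higher discount rates shrink the present value of recoveries and so raise the observed LGD, which is why the choice among the five concepts matters.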


The final paper in this issue, “A FAVAR modeling approach to credit risk stress testing and its application to the Hong Kong banking industry”, is by Zhifeng Wang and Fangying Wei. The authors note that the Basel Committee on Banking Supervision (BCBS) published its stress testing principles in October 2018, and that one of the key principles concerns stress testing model validation aided by business interpretation, benchmark comparison and backtesting. They go on to present a credit risk stress testing model based on the factor-augmented vector autoregressive (FAVAR) approach, which projects credit risk loss under stressed scenarios. I have to say that this was a new acronym, at least for me. It stems from both factor analysis (FA) and vector autoregressive (VAR) modeling.


The FAVAR approach ensures that the proposed model has many appealing features. First, a large number of model input variables can be reduced to a handful of latent common factors to avoid the curse of dimensionality. Second, the dynamic interrelationship among macroeconomic variables and credit risk loss measures can be studied without exogeneity assumptions. Moreover, the application of impulse response function techniques facilitates the multiperiod projection of credit risk loss in response to macroeconomic shocks. All of these features make the proposed modeling framework a potentially handy solution to fulfilling the BCBS requirement of quantitative adequacy assessment of banks’ internal stress testing results with a benchmark model. The scope of its application can also extend to impairment modeling for International Financial Reporting Standard 9, which requires the projection of credit risk losses over consecutive periods under different macroeconomic scenarios.
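
To make the recipe concrete, here is a rough sketch of a generic FAVAR workflow, assuming a macroeconomic panel and a credit loss series; the principal-components factor extraction and the specific libraries are my assumptions, not the authors’ specification.

```python
# A rough sketch of a generic FAVAR workflow (assumed structure, not the
# authors' specification).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.tsa.api import VAR

def favar_irf(macro_panel: pd.DataFrame, loss: pd.Series,
              n_factors: int = 3, lags: int = 2, horizon: int = 8):
    # Step 1: a handful of principal components stands in for a large
    # number of macroeconomic input variables (curse of dimensionality).
    factors = PCA(n_components=n_factors).fit_transform(macro_panel)
    # Step 2: joint VAR on the factors and the credit risk loss measure,
    # so no exogeneity assumption is imposed on the loss variable.
    data = np.column_stack([factors, loss.to_numpy()])
    res = VAR(data).fit(lags)
    # Step 3: impulse responses trace the multiperiod reaction of the
    # loss measure (last column) to shocks in the latent factors.
    return res.irf(horizon)
```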
