Journal of Risk Model Validation
ISSN:
1753-9579 (print)
1753-9587 (online)
Editor-in-chief: Steve Satchell
Volume 13, Number 3 (September 2019)
Editor's Letter
Welcome to the third issue of the thirteenth volume of The Journal of Risk Model Validation.
I have to express special appreciation for this issue’s first paper, “Model risk management: from epistemology to corporate governance”, as, in a sense, it engages with the philosophy of risk and aspires to inject some rigor into areas of finance that should be rigorous but remain persistently lax. The tension here is between intellectual precision and implementation, with many mathematical structures only partially capturing what practitioners do. In the paper, Bertrand K. Hassani argues that model risk is one of the most significant risks institutions face. Further, he makes the case that the term has been used rather loosely and is in need of clarification and precise definition. Hassani proposes analyzing model risk both to understand the main issues that lead to failures and to find the best way to address them.
Our second paper is “Nonparametric tests for jump detection via false discovery rate control: a Monte Carlo study” by Kaiqiao Li, Kan He, Lizhou Nie, Wei Zhu and Pei Fen Kuan. In it, the authors observe that nonparametric tests are popular and efficient methods for detecting jumps in high-frequency financial data. Each method has its own advantages and disadvantages, and the performance of each may be affected by underlying noise and dynamic structures. To address this, the authors propose a robust p-value pooling method that aims to combine the advantages of each method. They focus on model validation within a Monte Carlo framework to assess reproducibility and the false discovery rate (FDR). Extensive simulation studies of high-frequency trading data at a minute level are carried out, and the operating characteristics of these methods are compared via the FDR framework. The authors show that their proposed method is robust across all scenarios under both reproducibility and FDR analysis. Finally, they apply their method to minute-level data from the Limit Order Book System – The Efficient Reconstructor, which boasts the most wonderful acronym: LOBSTER. Readers can test these claims for themselves, as the R package JumpTest for implementing these methods has been made available on the Comprehensive R Archive Network.
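For readers unfamiliar with FDR control, the idea underlying such validation frameworks can be illustrated with the classical Benjamini–Hochberg step-up procedure. The sketch below is a generic illustration in Python, not the JumpTest package’s implementation, and the example p-values are invented.

```python
# Illustrative Benjamini-Hochberg procedure for false discovery rate (FDR)
# control -- the multiple-testing idea behind FDR-based model validation.
# Generic sketch only; not the JumpTest package's implementation.

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return a list of booleans: True where the hypothesis is rejected
    while controlling the FDR at level alpha."""
    m = len(pvalues)
    # Sort p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    # Find the largest k with p_(k) <= (k/m) * alpha ...
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k_max = rank
    # ... and reject the hypotheses with the k smallest p-values.
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject

# Hypothetical mix of small (signal) and large (noise) p-values.
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.46, 0.74, 0.91]
print(benjamini_hochberg(pvals))
```

With eight tests at level 0.05, only the two smallest p-values clear their step-up thresholds, so only those two hypotheses are rejected; a naive per-test cutoff of 0.05 would have rejected four.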
The third paper in the issue, “Risk data validation under BCBS 239” by Lukasz Prorokowski, looks at problems surrounding risk model validation and data aggregation. Those of us with economics training know that nothing aggregates except under the strongest assumptions of homogeneity, so there is a real issue here that needs to be addressed. With banks facing increased pressure from regulators to improve their risk data aggregation processes, BCBS 239 is a relevant statement on this topic from the Basel Committee. To quote the author: “At this point, BCBS 239 – the Basel Committee on Banking Supervision’s ‘Principles for effective risk data aggregation and risk reporting’ – should not be treated as a set of regulations for data aggregation, but as a collection of ongoing practices for achieving the best risk management experience.” Prorokowski conducts a survey of twenty-nine global banks, looking into how risk data is provided and validated.
Finally, the issue’s last paper is “An advanced hybrid classification technique for credit risk evaluation” by Chong Wu, Dekun Gao, Qianqun Ma, Qi Wang and Yu Lu. Given the important role that credit risk evaluation (CRE) plays in banks, the goal of this paper is to improve the accuracy of the CRE model. The authors employ a hybrid approach to design a practical and effective CRE model based on what is called a deep belief network and the K-means method. This procedure is then applied to German and Australian credit data sets. The results show that the hybrid classifier the authors propose is effective for CRE. In fact, they argue that it performs significantly better than classical CRE models.
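To give a flavor of the clustering half of such a hybrid, the sketch below runs a plain one-dimensional K-means pass in Python. It is purely illustrative: the paper’s deep belief network stage is omitted, and the “scores” and cluster count are hypothetical, not taken from the German or Australian data sets.

```python
# Illustrative K-means clustering of scalar features, of the kind that can
# serve as the unsupervised stage of a hybrid credit-risk classifier.
# Generic sketch; the paper's deep belief network component is omitted.
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Cluster scalar values into k groups; returns sorted centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        # Update step: move each centroid to its cluster's mean
        # (keep the old centroid if a cluster ends up empty).
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated groups of synthetic "scores".
scores = [0.1, 0.15, 0.2, 0.9, 0.95, 1.0]
print(kmeans_1d(scores, k=2))
```

On this toy input the two centroids converge to the means of the low and high groups (roughly 0.15 and 0.95), the kind of separation a downstream classifier can exploit.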
These four papers, taken together, represent a good body of new scholarship on risk model validation with many potential applications for practitioners. I am very pleased with the quality and relevance of this content.
Steve Satchell
Trinity College, University of Cambridge
Papers in this issue
Model risk management: from epistemology to corporate governance
In this paper, the author conducts an analysis of model risk in an attempt to understand the main issues that lead to failures and the best way to address such issues.
Nonparametric tests for jump detection via false discovery rate control: a Monte Carlo study
The main goal of this paper is to perform a comprehensive nonparametric jump detection model comparison and validation. To this end, the authors design an extensive Monte Carlo study to compare and validate these tests.
Risk data validation under BCBS 239
Based on a survey of twenty-nine major financial institutions, this paper aims to advise banks and other financial services firms on what is needed to get ready for and become compliant with BCBS 239, especially in the area of risk data validation.
An advanced hybrid classification technique for credit risk evaluation
In this paper, the authors employ a hybrid approach to design a practical and effective CRE model based on a deep belief network (DBN) and the K-means method.