Journal of Operational Risk
ISSN: 1744-6740 (print), 1755-2710 (online)
Editor-in-chief: Marcelo Cruz
Volume 9, Number 3 (September 2014)
Editor's Letter
Welcome to the third issue of the ninth volume of The Journal of Operational Risk.
It is encouraging to see operational risk data programs starting to show some progress and maturity. In conversations I have had with people in the industry, I have learned that many global firms now claim to have hundreds of thousands of operational loss events in their databases. These large databases are not hard to imagine when we consider that large banks process millions of transactions every day: even at a very low failure rate, tens of thousands of small losses would flow into a loss database on a daily basis. Further evidence of the progress banks have made in operational loss collection can be seen in the primary loss data consortia, which already hold hundreds of thousands of loss events reported by their members. In addition to the international consortia, banking associations in a number of countries have created their own data consortia, on the assumption that loss data from firms in the same region is likely to have a better cultural fit and can therefore be processed better by operational risk models. Several of these national databases have recently grown into very robust resources, which is another reflection of the improvement in banks' loss collection.
As evidence of this progress and the benefit it brings to the industry as a whole, this issue contains two papers that use national databases: one from China and one from Austria. It is difficult, if not impossible, to do science without data or evidence. The emergence of these large databases is the basis for better operational risk analysis, and the industry is certainly ready to consume such analyses. I encourage authors who would like to write about the state of operational risk research, particularly in light of the amount of data now available, to continue to submit to the journal. Again, I would like to emphasize that the journal is not solely for academic authors. Please note that we do publish papers that do not have a quantitative focus; indeed, there are examples of this in the current issue. We at The Journal of Operational Risk would be happy to see more submissions containing practical, current views on relevant matters as well as papers focusing on the technical aspects of operational risk.
RESEARCH PAPERS
There are two research papers in this issue. In the first, "The mutual-information-based variance–covariance approach: an application to operational risk aggregation in Chinese banking", Jianping Li, Xiaoqian Zhu, Yongjia Xie, Jianming Chen, Lijun Gao, Jichuang Feng and Wujiang Shi argue that it is difficult to assess the dependency between operational risk event types using traditional methods such as copulas. Using an operational loss database from Chinese banks, the authors propose a mutual-information-based variance–covariance approach that they claim is capable of assessing the overall correlation while remaining highly tractable. The authors replace the traditional linear correlation coefficient with a global correlation coefficient within the framework of a variance–covariance approach. Starting from a discussion of the theory of mutual information, they argue that the global correlation coefficient is able to capture both linear and nonlinear correlation relationships. The values-at-risk (VaRs) of the individual risk components are calculated and then aggregated using the global correlation coefficient. In their empirical analysis, the proposed approach is used to aggregate Chinese banking operational risk across business lines based on an operational risk loss data set from Chinese banks. After an overall comparison with results from other correlation assumptions and with the actual capital allocation of Chinese banking in 2013, they conclude that operational risk capital allocation in China is not effective under traditional methodologies and that the aggregate VaR calculated from their approach would give better results.
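For readers unfamiliar with the device, the underlying formulas can be sketched as follows; this is the standard information-theoretic form of the global correlation coefficient and the usual variance–covariance aggregation rule, not necessarily the authors' exact formulation. Given the mutual information $I(X_i, X_j)$ between the losses of risk components $i$ and $j$, the global correlation coefficient and the aggregate VaR take the form
$$\lambda_{ij} = \sqrt{1 - e^{-2 I(X_i, X_j)}}, \qquad \mathrm{VaR}_{\mathrm{total}} = \sqrt{\sum_i \sum_j \lambda_{ij}\, \mathrm{VaR}_i\, \mathrm{VaR}_j}.$$
In the bivariate Gaussian case $\lambda_{ij}$ reduces to the absolute value of the linear correlation coefficient, but, unlike the linear coefficient, it also registers nonlinear dependence, which is the property the authors exploit.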
In our second research paper, "Goodness-of-fit tests and selection methods for operational risk", Sophie Lavaud and Vincent Lehérissé address the perennially popular subject of loss distribution selection. The authors suggest that a modeler should consider selection methods that take into account the relative performance of candidate distributions at different confidence levels. Their work investigates selection methods such as the Bayesian information criterion and the violation ratio as alternatives to traditional goodness-of-fit tests.
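As a point of reference, and using the standard definitions rather than anything specific to the paper: for a candidate severity distribution with $k$ fitted parameters and maximized likelihood $\hat{L}$ on $n$ losses,
$$\mathrm{BIC} = k \ln n - 2 \ln \hat{L},$$
with lower values preferred, while a violation ratio at confidence level $\alpha$ compares the observed number of exceedances of the estimated quantile with the expected number, $\mathrm{VR}_\alpha = N_{\mathrm{exceed}} / ((1-\alpha)\, n)$. Values of $\mathrm{VR}_\alpha$ far from 1 signal a poor fit precisely in the tail region that drives capital.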
FORUM PAPERS
In this section we publish papers that report readers' experiences in their day-to-day work in operational risk management. We have two forum papers in the current issue. In the first, "Factor reduction and clustering for operational risk in software development", Faizul Azli Mohd-Rahim, Chen Wang, Halim Boussabaine, Hamzah Abdul-Rahman and Lincoln C. Wood focus on the operational risks surrounding software development. In their view, software development failures frequently emerge because developers fail to identify and manage the risks surrounding production. The paper's aim is to identify and map the key risk factors across a software development project life cycle, in terms of both likelihood of occurrence and impact on cost overrun. The authors submitted a questionnaire to 2000 software development companies, IT consultancy and management companies, and web development companies in the United Kingdom, the United States, Europe, India, China, Japan, Canada, Australia and other Asian countries, asking respondents to assess a number of risk factors. Bearing in mind that many risk factors in the production life cycle are closely related to one another, the authors then applied a factor reduction and clustering process to distill a smaller number of key risk factors. The three main clusters identified in the study were "feasibility study", "project team management" and "technology requirements". The study's results should allow software developers to understand the probability of occurrence of errors and their overall impact, thereby improving software development project risk management.
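To make the pipeline concrete, here is a minimal illustrative sketch in Python of one plausible factor-reduction-and-clustering workflow (principal component analysis followed by k-means); the simulated ratings and the specific methods are assumptions made for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Hypothetical survey data: respondents x risk factors, scored 1-5,
    # standing in for the authors' questionnaire results.
    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(200, 30)).astype(float)

    # Factor reduction: retain enough principal components to explain
    # 80% of the variance in the ratings.
    pca = PCA(n_components=0.8)
    loadings = pca.fit(ratings).components_.T  # one row of loadings per risk factor

    # Cluster the risk factors by their loading profiles into three groups,
    # mirroring the three clusters of key risk factors reported in the paper.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(loadings)
    print(labels)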
In the issue's second forum paper, "Evidence, estimates and extreme values from Austria", Stefan Kerbl analyzes an Austrian database of operational loss events containing more than 42 000 observations. The author provides statistical analyses by event type and business line using the historical time series. He then uses this extensive database to address a central question of operational risk research: which loss distribution is best suited to modeling the severity of a single loss? The author analyzes three candidates:
(1) the generalized Pareto distribution,
(2) the g-and-h distribution and
(3) the recently proposed semiparametric approach based on the modified Champernowne transformation.
Cross-validation is used to evaluate the performance of each approach. Kerbl's study finds that the generalized Pareto distribution provides a very good fit in the cross-validation exercise.
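For readers who want to experiment with this kind of comparison, the fragment below is a minimal sketch, written in Python with SciPy, of scoring a generalized Pareto fit by k-fold cross-validated out-of-sample log-likelihood. It illustrates the evaluation idea only; it is not Kerbl's code, and the simulated losses are hypothetical.

    import numpy as np
    from scipy import stats

    def cv_loglik(losses, k=5, seed=0):
        # Average out-of-sample log-likelihood of a generalized Pareto fit,
        # estimated by k-fold cross-validation.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(losses))
        scores = []
        for test in np.array_split(idx, k):
            train = np.setdiff1d(idx, test)
            # Fit the GPD to the training losses, with the location pinned
            # at zero since operational losses are positive.
            c, loc, scale = stats.genpareto.fit(losses[train], floc=0.0)
            # Score the held-out losses under the fitted distribution.
            scores.append(stats.genpareto.logpdf(losses[test], c, loc, scale).mean())
        return float(np.mean(scores))

    # Illustrative run on simulated heavy-tailed losses (hypothetical data).
    losses = stats.genpareto.rvs(0.4, scale=1000.0, size=5000, random_state=1)
    print(cv_loglik(losses))

The same score computed for competing severity models, such as the g-and-h distribution or the Champernowne-based estimator, would give the kind of cross-model comparison the paper performs.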
Marcelo Cruz
Papers in this issue
Evidence, estimates and extreme values from Austria
Goodness-of-fit tests and selection methods for operational risk
The mutual-information-based variance–covariance approach: an application to operational risk aggregation in Chinese banking
Factor reduction and clustering for operational risk in software development