Using transaction data to measure op risk

Since 1999, Peter Hughes has led a team of op risk specialists that has studied transaction processing environments in global banking organisations. The focus was on examining and understanding the causes of operational failure. The information and benchmark data gathered were used to develop a method that provides consistent and comparable measures of exposure and probability of failure in transaction processing environments. Hughes outlines the method that resulted from these studies.

After more than five years of research and consultation, the Basel Committee on Banking Supervision has formally issued the new capital accord, Basel II. Its aim is twofold: first, to have the financial services industry determine its capital requirement through the application of more sophisticated internal credit risk models; second, to achieve a similar degree of sophistication with operational risk.

The challenge on the operational risk side has turned out to be vastly more complex and elusive than perhaps originally envisaged, and it is not difficult to see why. Credit risk models have basic elements available to them that cannot easily be replicated for operational risk. The most obvious are a concept of exposure and industry-wide standards for assessing and rating probability of default.

In credit risk, a bank creates exposure by approving and booking credits to obligors. A bank knows how much it has lent and to whom. Credit analysts apply highly developed assessment techniques to determine each obligor's probability of default and a standardised credit risk rating is assigned. For less complex credits, there are simpler assessment techniques such as credit scoring. Credit failures, when they occur, can be referenced to an obligor whose total exposure and risk rating are known.

Operational risk is not so straightforward. For example, if a bank constructs a payments processing environment costing $10 million and processes payments totalling, on average, $1 trillion a day, what is its exposure if it has never made a wrong payment in the past five years? It cannot be zero, as there is evidently some operational risk. It is probably more than the cost of constructing the operating environment, as it is the value of unprocessed or wrongly processed payments that will drive exposure and, in all probability, these will be greater than $10 million. It is unlikely to be as much as $1 trillion, as losing the full value of one day's payments is all but unthinkable. So the operational risk exposure must be somewhere between zero and $1 trillion, but cannot be greater than the bank's equity. This is not much for the risk modeller to go on.

Unlike credit risk, it is not currently industry practice to reference operational risk loss events to the operational processes that created them and attach a relevant measure of exposure and a standardised rating of probability of failure. Indeed, the capturing of operational risk loss events in today's environment is largely random.

This gives rise to the question that has bedevilled operational risk modellers since June 1999: can a future capital requirement be calculated where there is no concept of exposure and no standardised rating of probability of operational failure? If the risk modeller cannot analyse the relationship between historic loss events and consistent measures of exposure and probability of failure, it could be concluded that there is no basis on which to create a meaningful risk model. This leaves a bank with two options: abandon sophisticated risk models in favour of a basic or standardised approach, or create the missing elements. The latter requires more research and investment, but may not be such a complicated or costly exercise.

In 1999, the operations chief of a major global bank sponsored the development of an approach to measuring exposure in operations that incorporated standardised ratings of the probability of operational failure. This was the forerunner of the method described below, which, since its first prototype in 1999, has been successfully applied to financial operating environments throughout the globe. It is both simple to apply and highly effective.

The basic premise that underpins the method is that transactions drive operational risk. Banks construct operating environments comprising people, technology, premises, processes and controls, and the probability of operational failure increases as the volume and relative complexity of transaction throughput increase. This holds true for all the operational risk event types set out in the Basel Committee's July 2002 Sound Practices paper, that is: internal fraud; external fraud; employment practices and workplace safety; clients, products and business practices; damage to physical assets; business disruption and system failures; and execution, delivery and process management. The exposure to any of these risk event types increases as transaction volume and complexity increase.

A financial operating environment supports transaction processing cycles encompassing transaction origination and data capture through to settlement and reporting. Transaction cycles are composed of discrete operational processes. Consequently, the first step in the method is to prepare a hierarchical mapping of all operational processes. Table A illustrates a typical process hierarchy.
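
The process hierarchy in table A is not reproduced here, but a minimal sketch of how such a mapping might be represented is shown below. The level names are illustrative assumptions; only the fact that the hierarchy has five levels, with operational processes at the lowest, comes from the method as described.

```python
from dataclasses import dataclass, field

# A minimal sketch of a five-level process hierarchy. The level names used in
# the example are illustrative assumptions -- the actual levels are defined in
# table A.
@dataclass
class ProcessNode:
    name: str
    level: int                      # 1 = top of hierarchy, 5 = operational process
    children: list["ProcessNode"] = field(default_factory=list)

    def add(self, child: "ProcessNode") -> "ProcessNode":
        self.children.append(child)
        return child

    def leaves(self):
        """Yield the lowest-level (process) nodes, where scoring takes place."""
        if not self.children:
            yield self
        for child in self.children:
            yield from child.leaves()

# Example: a fragment of a hypothetical hierarchy
bank = ProcessNode("Bank", 1)
ops = bank.add(ProcessNode("Wholesale operations", 2))
payments = ops.add(ProcessNode("Payments cycle", 3))
clearing = payments.add(ProcessNode("High-value clearing", 4))
clearing.add(ProcessNode("Outgoing payment release", 5))

print([p.name for p in bank.leaves()])   # ['Outgoing payment release']
```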

The prime element in credit risk is a credit. The prime element in operational risk is an operational process. An operational process has two constant characteristics:

• A primary processing objective or operational activity, for example payment or settlement, reconciliation, trade confirmation, physical custody, cash management, reporting and so on; and

• Volumes, that is, the values that pass through the process.

These two constants can be tabulated to provide a uniform basis of quantitative measurement.

In the five years of applying the method, it has been proven that any process can be mapped to one of 34 pre-defined financial processing objectives or activities. Table B illustrates three of these (see payments and settlements). A risk weighting on a scale of one to 10 is assigned to each of the 34 processing objectives, consistent with an assessment of its relative risk. For example, if a process's primary activity is to make payments exclusively to guaranteed counterparties, it is low risk (score two), as there is only a compensation risk. If unsecured payments are made to other parties, it is high risk (score 10), as there is a risk of loss of principal.

Financial processes carry values. If a process fails, it is the transaction volumes and values that drive the amount of loss that can potentially be incurred. The method's creators examined how risk profiles are affected by changes in transaction volumes. The conclusion was that, as volumes increase, the rate of relative change in risk decreases. In other words, the risk of financial loss undergoes a greater rate of change in a transaction value band from $100,000 to $1 million than in a band from $10 billion to $100 billion. With this information, a table was constructed that assigns transaction volumes to predetermined value bands and a standardised risk weighting to each band. The method uses nine volume bands, as set out in table C.

The model uses process activity and volume data to quantify absolute operational risk concentration, expressed in a currency unique to the method and referred to as operational risk units (ORUs).
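
Tables B and C are not reproduced here, and the article does not state exactly how the activity and volume weightings combine into ORUs. The Python sketch below illustrates one plausible reading: the two payments weightings come from the example above, while the remaining activity weights, the band boundaries and the multiplicative combination are assumptions.

```python
import bisect

# Illustrative activity risk weights (scale 1-10). The two payments entries
# reflect the examples given in the text; the others are hypothetical
# stand-ins for entries in table B.
ACTIVITY_WEIGHTS = {
    "payments - guaranteed counterparties": 2,
    "payments - unsecured": 10,
    "reconciliation": 4,        # hypothetical
    "trade confirmation": 5,    # hypothetical
}

# Nine value bands (upper bounds in $) with weightings that grow more slowly
# than value, reflecting the observation that relative risk changes less at
# higher volumes. Both the boundaries and the weights are assumptions -- the
# real bands are defined in table C.
BAND_UPPER_BOUNDS = [1e5, 1e6, 1e7, 1e8, 1e9, 1e10, 1e11, 1e12, float("inf")]
BAND_WEIGHTS      = [1,   2,   3,   4,   5,   6,    7,    8,    9]

def volume_band_weight(daily_value: float) -> int:
    """Map a process's transaction value throughput to its band weighting."""
    return BAND_WEIGHTS[bisect.bisect_left(BAND_UPPER_BOUNDS, daily_value)]

def risk_concentration_orus(activity: str, daily_value: float) -> int:
    """One plausible combination: activity weight x volume band weight."""
    return ACTIVITY_WEIGHTS[activity] * volume_band_weight(daily_value)

# A high-value unsecured payments process
print(risk_concentration_orus("payments - unsecured", 2.5e11))  # 10 * 8 = 80
```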

The method then evaluates the effectiveness of the risk countermeasures incorporated into each process. Since the method was first piloted in 1999, its creators have studied causes of operational failure and tabulated them into nine risk categories and 30 sub-categories, as illustrated in table D (see overleaf).

Benchmark data was also gathered for each of the 30 sub-categories, from which standardised scoring templates were constructed. Table E (overleaf) provides a sample template covering three sub-categories within the 'people' category: overtime, temporary and new staff, and cover. The scoring templates are structured such that excellent risk countermeasures score 100 and wholly ineffective risk countermeasures score zero. The process's staff and managers use the nine scoring templates to score the effectiveness of their own risk countermeasures. This is achieved either through self-assessment or in scoring sessions facilitated by experienced operational risk assessors.

Scores are attributed by identifying the risk countermeasure description and benchmarks that best match the current status or conditions of the process relative to each of the 30 sub-categories. Scoring is performed in two passes, 'current' and 'optimal', whereby the optimal score represents the condition that process staff and managers believe can be achieved within an agreed time frame.
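
Table E is not reproduced here, so the sketch below uses a hypothetical template for the 'people/overtime' sub-category, with thresholds chosen so that 60 hours of monthly overtime scores 25, consistent with the worked example later in the article. It also illustrates the two scoring passes.

```python
# A hypothetical scoring template for the 'people / overtime' sub-category.
# Each entry maps a benchmark (average monthly overtime hours per person over
# the past three months) to a score between 0 and 100. The thresholds are
# assumptions; the real benchmarks sit in table E.
OVERTIME_TEMPLATE = [
    (10, 100),           # up to 10 hours: excellent countermeasures
    (25, 75),
    (40, 50),
    (60, 25),
    (float("inf"), 0),   # beyond that: wholly ineffective
]

def score_overtime(avg_monthly_hours: float) -> int:
    for threshold, score in OVERTIME_TEMPLATE:
        if avg_monthly_hours <= threshold:
            return score
    return 0

# Two-pass scoring: 'current' reflects today's conditions, 'optimal' the
# condition staff and managers believe is achievable in an agreed time frame.
current_score = score_overtime(60)   # 25
optimal_score = score_overtime(20)   # 75, assuming overtime can be cut to 20 hours
print(current_score, optimal_score)
```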

The method assumes that, in stress conditions, the degree of reliance placed on each of the nine risk categories in preventing operational failure differs. For example, excellent people and execution are of greater value when managing through stress conditions than excellent policies and procedures or risk culture. The method recognises this differentiation by assigning reliance factors and category weightings to each of the nine risk categories.

As can be seen in table D, each category contains multiple sub-categories. For example, 'people' conditions are matched against six sub-categories: overtime, head count, temporary and new staff, experience, cover and recruitment. The templates are structured in such a way that, if the sub-categories within a single category attract different scores, only the lowest score is used. Assume that the following statements both apply to a process:

• The average level of experience is at 100% (score is 100); and

• The average level of overtime per person, per month, over the past three months was 60 hours (score is 25).

The lower score (overtime) displaces the higher score (experience), following the principle that, even though staff are fully experienced, risk is not mitigated because of excessive overtime.
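
A sketch of how sub-category scores might roll up into a process-level RMI is shown below. The lowest-score rule comes from the description above; the treatment of the category weightings as a simple weighted average, and the weightings themselves, are assumptions, as the method's reliance factors are not published here.

```python
# Sub-category scores for a hypothetical process, grouped by category. Only
# three of the nine categories are shown to keep the example short.
SUBCATEGORY_SCORES = {
    "people": {"experience": 100, "overtime": 25, "cover": 80},
    "execution": {"error rates": 70, "rework": 60},
    "policies and procedures": {"documentation": 90},
}

CATEGORY_WEIGHTS = {          # hypothetical reliance weightings
    "people": 0.45,
    "execution": 0.35,
    "policies and procedures": 0.20,
}

def category_score(scores: dict) -> int:
    """The lowest sub-category score displaces the others."""
    return min(scores.values())

def process_rmi(sub_scores: dict, weights: dict) -> float:
    total_weight = sum(weights.values())
    return sum(category_score(sub_scores[c]) * w for c, w in weights.items()) / total_weight

print(round(process_rmi(SUBCATEGORY_SCORES, CATEGORY_WEIGHTS), 2))
# people contributes 25 (overtime displaces experience and cover), execution 60,
# policies and procedures 90 -> RMI = 0.45*25 + 0.35*60 + 0.20*90 = 50.25
```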

From the above data, the method calculates four primary measures at the process (lowest) level:

• Risk concentration (ORUs)

Absolute ORUs derived from process values and relative risk.

• Risk mitigation indicator (RMI) – current and optimal

The effectiveness of risk countermeasures on a scale of zero to 100, where 100 represents best practice.

• Value at risk (ORUs) – current and optimal

The risk concentration blended with the current and optimal RMI.

• Exposure (ORUs and RMI)

The gap between current and optimal measures.
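
The article does not spell out how risk concentration is 'blended' with the RMI. The sketch below assumes the simplest possible reading, in which value at risk scales the ORU concentration by the shortfall from a perfect RMI of 100 and exposure is the gap between the current and optimal figures; the method's actual algorithm may differ.

```python
from dataclasses import dataclass

# A sketch of the four process-level measures. The blending rule is an
# assumption: value at risk is taken as the ORU concentration scaled by the
# shortfall from a perfect RMI of 100.
@dataclass
class ProcessMeasures:
    risk_concentration: float   # ORUs
    rmi_current: float          # 0-100
    rmi_optimal: float          # 0-100

    def var_current(self) -> float:
        return self.risk_concentration * (100 - self.rmi_current) / 100

    def var_optimal(self) -> float:
        return self.risk_concentration * (100 - self.rmi_optimal) / 100

    def exposure_orus(self) -> float:
        """Gap between current and optimal value at risk, in ORUs."""
        return self.var_current() - self.var_optimal()

    def exposure_rmi(self) -> float:
        """Gap between current and optimal RMI."""
        return self.rmi_optimal - self.rmi_current

p = ProcessMeasures(risk_concentration=80, rmi_current=50.25, rmi_optimal=75)
print(round(p.var_current(), 2), round(p.var_optimal(), 2),
      round(p.exposure_orus(), 2), round(p.exposure_rmi(), 2))
# 39.8 ORUs at risk today, 20.0 at the optimal score; exposure is 19.8 ORUs / 24.75 RMI points
```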

The above measures can be aggregated at each of the five levels in the process hierarchy set out in table A. At each level, ORUs are simply aggregated, whereas the RMI is recalculated using an algorithm that factors in the quantitative (ORU) values and is, consequently, a risk-weighted measure.
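
The aggregation algorithm is likewise not published. One plausible reading of 'risk-weighted', sketched below, is that ORUs are summed and the higher-level RMI is an ORU-weighted average of the RMIs beneath it.

```python
# A sketch of aggregation up the process hierarchy, assuming the higher-level
# RMI is an ORU-weighted average of the RMIs below it (one plausible reading
# of "risk-weighted"); ORUs themselves are simply summed.
processes = [
    # (risk concentration in ORUs, current RMI)
    (80, 50.25),
    (20, 90.0),
    (50, 70.0),
]

total_orus = sum(oru for oru, _ in processes)
aggregate_rmi = sum(oru * rmi for oru, rmi in processes) / total_orus

print(total_orus, round(aggregate_rmi, 1))
# 150 ORUs; the low-scoring, high-concentration process pulls the RMI down to about 62.1
```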

As discussed above, scoring is performed in two passes, current and optimal. During the scoring sessions, the actions that are required to close the gap between the two scores are recorded. The method calculates the ORUs that are attributable to each action. This information is used to construct a risk reduction plan that starts from the current score for each process, lists the actions in order of priority with the applicable RMI and ORU impact, and maps these to the optimal scores.
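
A risk reduction plan might then be assembled as in the sketch below. The figures are hypothetical, and ordering actions by their attributable ORUs is an assumption: the article says actions are listed in order of priority without defining the ordering rule.

```python
# A sketch of a risk reduction plan: each action recorded during scoring has an
# RMI uplift and an attributable ORU reduction. The figures are hypothetical.
actions = [
    {"action": "Reduce average overtime below 25 hours", "rmi_uplift": 12.0, "oru_reduction": 9.6},
    {"action": "Automate reconciliation breaks report",  "rmi_uplift": 5.0,  "oru_reduction": 4.0},
    {"action": "Document payment release procedure",     "rmi_uplift": 3.0,  "oru_reduction": 2.4},
]

# Prioritise by attributable ORU reduction (largest impact first).
plan = sorted(actions, key=lambda a: a["oru_reduction"], reverse=True)
for step, a in enumerate(plan, start=1):
    print(f"{step}. {a['action']} (+{a['rmi_uplift']} RMI, -{a['oru_reduction']} ORUs)")
```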

Based on the experience of applying the method, a table of RMI ranges has been developed that provides guidance on the interpretations that can be placed on the results and how management should respond. This is shown in table F.

In its five years of global application, the method's tables and weightings have been continually refined and honed to produce highly representative and consistent measures of operational exposure (ORUs) and probability of failure (RMIs). As they are derived from the application of standardised tables, these measures are directly comparable between production teams in the same department, between departments in the same firm and between firms.

On their own, these measures enhance the effectiveness of operational risk management in financial transaction processing environments, as they measure, in quantitative and qualitative terms, the gaps between actual processing environments and best practice. The method does this in a way that operations staff and managers can fully relate to, as it measures risks in the same way that they are managed.

Does this method help the risk modeller? By linking operational loss events to the transaction processes that created them and attaching the applicable standardised measures of exposure and probability of failure, the method provides elements that will lead to the creation of more meaningful risk models. It connects the real and statistical worlds of operational risk management in a rational way and allows loss events to be registered across multiple firms within a common framework of operational activity, exposure and probability of failure. This, in turn, creates greater opportunity for smaller and specialised firms to adopt advanced measurement approaches, as ORUs and RMIs provide a common framework through which they can connect with third-party loss event databases and models.

ORUs can also serve as the basis for internal allocations of economic capital. The same allocation basis could be extended to determine the regulatory capital requirement of internationally active subsidiaries of banking groups that have adopted an advanced measurement approach under Basel II.

Peter Hughes has occupied senior banking positions in audit, operations, finance and risk management. For more than three years, he represented Chase Manhattan on the British Bankers' Association's Operational Risk Advisory Panel. He has now formed his own company, OpRAss, which specialises in operational risk management solutions.
