Is there hope in the advanced measurement approaches?

Basle II is mistaken in assuming a stable relationship between expected and unexpected losses, argues Jacques Pézier in his second article on the Basle Committee’s recent operational risk working paper

The carrot of lower operational risk capital charges under the proposed Basle II bank capital accord is promised only to those banks using an advanced measurement approach for calculating these charges.

Major banks may be allowed to reduce their overall op risk charge to a floor of 9% of the current minimum regulatory capital for credit and market risks. That compares with the floor of 12% of regulatory capital for banks using the simpler basic and standardised approaches to calculating the op risk charge.

In a previous article (see Operational Risk, November 2001, page 13), we said the basic and standardised approaches, both of which rely on gross income as an indicator of op risk within a bank, are unlikely to be risk-sensitive, properly calibrated or able to provide incentives for better op risk management.

But is there any hope in the advanced measurement approaches? These were outlined in ‘CP2½’, the working paper on op risk issued in September by the Basle Committee on Banking Supervision, the architect of the Basle II capital adequacy accord and the body that in effect regulates international banking (see box).

A laissez-faire approach

What an advanced approach should consist of is largely left to the banks in CP2½, so dubbed because it comes between the Committee’s second consultative paper (CP2) on Basle II -- issued in January this year -- and CP3, the third consultative paper expected in late February next year.

In CP2½, the Basle regulators have stepped back from imposing a single advanced approach, the internal measurement approach, as originally proposed in CP2. Rather, they now encourage the banking industry to "let 100 flowers bloom" -- in other words, they invite the industry to come up with their own ideas.

An essential feature of the advanced approaches is that they are rooted in the internal loss experience of each firm, with each company having to justify its models and parameters. This ‘bottom-up approach’ is necessary to calculate risk-sensitive, consistently calibrated op risk capital requirements.

But, as suggested in our previous article, some of the regulator’s suggestions and quantitative standards for the advanced approaches are unhelpful.

First, we believe there is no reason to define ‘unexpected losses’ as a function of ‘expected losses’, nor should we have to assume a stable relationship between these two measures. This only creates confusion between the two measures and over the role of capital charges. The precise mathematical meaning of the terms ‘expected’ and ‘unexpected’ is explained below.

Second, we argue that there is even less cause to assume a linear relationship between the two. Third, it is falsely safe, dangerous even, to add capital charges pertaining to each of the numerous and varied categories of operational losses.

Desirable features of an op risk measurement model

As shown in the previous article, the basic and standardised approaches are not truly risk-sensitive; they are calibrated ‘top-down’ to achieve no change, on average, to overall capital requirements following the introduction of Basle II. They are not based on a direct assessment of unexpected losses.

The hope is that, with the introduction of advanced measurement approaches, op risk capital charges would be risk-sensitive, the level of confidence required to cover unexpected losses would be similar to the levels required for other risks, and that the charges would be applied only to losses for which a capital buffer offers effective protection.

To these basic criteria one might add three other desirable qualities. Advanced measurement approaches should yield trade-offs between risks and the costs of mitigating actions, so as to improve op risk management; they should not be too costly to implement compared with the expected benefits; and they should be well integrated with the measurement and management of other risks.

How do the proposed advanced approaches measure up to these criteria? We think that, despite showing much greater flexibility, CP2½ still makes several unhelpful suggestions. The main three are outlined below.

First unhelpful suggestion: To relate unexpected losses to expected losses

The main thrust of the Basle II accord is to make capital charges more risk-sensitive than under Basle I, but the implications are not always well understood.

The first implication is that capital charges should reflect unexpected losses only. In all logic, expected losses should already be taken into account elsewhere, either in a mark-to-market valuation or, under accrual accounting, as a provision.

Alas, that may not be the case. A simple example, taken from the much better known field of credit risk, illustrates the difficulties that remain after 13 years of experience and successive refinements in the calculation of credit risk capital charges.

Suppose a year ago a bank made a two-year loan to a new corporate client at Libor plus 50 basis points. Recently, the client issued a profit warning; its credit rating was downgraded from ‘A’ to ‘BBB’; its senior short-term debt now trades at Libor plus 200bp. The bank has received its first interest payment but the principal of the loan and the second interest payment due in one year are still at risk. What results are reported and what is the credit risk capital charge?

The loan will have been placed in the banking book where there is no requirement (and indeed strong obstacles) to marking to market. If the bank funds itself at Libor flat, the first interest payment will have been reported as a net profit contribution of 50 basis points, there will be no accounting entry for the next interest payment until received and, in all likelihood, no provision for bad debt as the corporate is still of investment grade. The credit risk capital charge will be (with a BBB corporate credit weight of 100% and a bank capital ratio of 8%) 8% of principal. Moreover, the reporting would be the same if the outstanding loan had more than one year to maturity.

What is the reality? The bank has indeed received a net profit contribution of 50 basis points on the principal amount but now faces two possibilities for the forthcoming year: receiving another 50 basis points net profit contribution if the client does not default, or receiving back only a fraction (the recovery rate) of principal plus interest if the client defaults.

The ‘expected’ result is the average value of these two possibilities weighted by their respective probabilities. It is a mathematical entity; it is not what is most likely to happen, or even what may be anticipated to happen; indeed, in this case, it will never happen. The ‘unexpected’ result is a measure of the variation between what may happen and the ‘expected’ result. Several measures have been used. A common measure among regulators is the unexpected loss at a certain confidence level, such as 99.9%. That would be the difference between the expected result and the worst result whose probability of occurrence is at least 0.1%.

Unfortunately, the probability of default and the recovery rate are not known with certainty, making the calculation of both expected and unexpected losses difficult. They are in the realm of subjective judgments aided by experience and market information. For example, experience may indicate that recovery rates on such loans are of the order of 50%, and from the 200bp market price for senior debt with a similar recovery rate one could infer a risk-neutral probability of default over the next year of the order of 4%. We say ‘risk neutral’ because the market price of senior debt is likely to contain some risk premium and some profit element. From historical data on credit spreads and default frequencies one could infer a lower probability of default, perhaps only 3%.

Accordingly, an expected credit loss of about 100bp should be reported immediately for the coming year (a 100bp loss is approximately the probability-weighted average of a 50bp profit with a 97% probability of no default and a 50% loss with a 3% probability of default). The unexpected loss at a 99.9% confidence level -- default, in this case, having a probability of occurrence well above 0.1% -- is simply the additional loss in case of default, that is, 50% - 1% = 49%.
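The arithmetic above can be checked with a short calculation (a sketch using the illustrative figures from the example: a 3% default probability and a 50% recovery rate):

```python
# Two outcomes for the coming year on the downgraded loan, per unit of principal:
# no default: +50bp net profit (Libor + 50bp, funded at Libor flat);
# default: recover only 50% of principal, i.e. a 50% loss.
p_default = 0.03           # probability of default inferred from historical data
profit_no_default = 0.005  # 50bp net profit contribution
loss_on_default = -0.50    # 50% recovery rate

# 'Expected' result: probability-weighted average of the two outcomes (about -1%)
expected_result = (1 - p_default) * profit_no_default + p_default * loss_on_default

# 'Unexpected' loss at 99.9% confidence: default (3% > 0.1%) lies within the
# confidence level, so the worst relevant outcome is the default outcome.
unexpected_loss = expected_result - loss_on_default  # about 49%

print(f"expected result: {expected_result:+.2%}")
print(f"unexpected loss: {unexpected_loss:.2%}")
```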

Note that expected credit losses should be summed over all exposures but that, in reality, small expected credit losses, like 1%, are usually not reported under accrual accounting. Typically, provisions are made only for distressed loans. On the other hand, the 49% unexpected loss is much higher than the 8% credit capital requirement. However, unexpected losses are not additive for independent risks: if one were to consider a large portfolio of similar but largely unconnected exposures, as most banks have, the unexpected loss at a 99.9% confidence level over one year would soon drop below 8%.

Note also that the expected credit loss varies proportionally with the time-to-maturity of the exposure as the default probability per unit of time remains stable. It would be easy to show, however, that the unexpected loss (on a sizable portfolio) would increase like the square root of time to maturity.
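The claim that diversification pulls the 99.9% unexpected loss below the 8% charge can be checked with an exact binomial calculation for a hypothetical portfolio of n independent, identical loans (a sketch; the 3% default probability and 50% loss given default are the figures from the example, the portfolio sizes are arbitrary):

```python
import math

def unexpected_loss_999(n, p=0.03, lgd=0.5):
    """99.9% unexpected loss, as a fraction of total principal, for a portfolio
    of n independent loans, each defaulting with probability p and losing a
    fraction lgd of principal on default (exact binomial quantile)."""
    cum = 0.0
    for k in range(n + 1):  # smallest k with P(defaults <= k) >= 99.9%
        cum += math.comb(n, k) * p**k * (1 - p) ** (n - k)
        if cum >= 0.999:
            break
    expected_loss = p * lgd      # 1.5% of principal
    quantile_loss = k * lgd / n  # loss at the 99.9% quantile
    return quantile_loss - expected_loss

for n in (1, 10, 100, 1000):
    print(f"n = {n:4d}: unexpected loss at 99.9% = {unexpected_loss_999(n):.1%}")
```

For a single loan the unexpected loss is the 49% of the example, but it falls well below the 8% charge once the portfolio holds a hundred or so unconnected exposures.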

The 8% credit capital charge is therefore a fudge; it is beyond the 99.9% confidence level over a year for large portfolios but it may compensate for expected credit losses which are not taken into account under accrual accounting.

The situation is very similar for op risks. Perhaps we should not have hoped for the role of the op risk charges to be more clearly defined. One suspects the Basle regulators are aiming to introduce a capital charge that will cover both expected and unexpected losses, as they are aware that few banks make adequate provisions for expected operational losses.

Indeed, the making of such provisions may not be common practice under current accounting standards, and the tax authorities would probably object to specific provisions for low-probability loss events.

Defining op risk capital "based on measures of expected operational risk losses" (CP2½, page 33) lends support to the view that the capital charge is meant to act as a buffer against both expected and unexpected losses. If that is regrettably the case, it should be clearly stated and justified.

Second unhelpful suggestion: linearity between expected and unexpected losses

CP2½ says the internal measurement approach "assumes a stable relationship between expected losses ... and unexpected losses. This relationship may be linear ... or non-linear".

This is a step back from imposing proportionality constants -- the so-called gammas -- between expected and unexpected losses. But the working paper still seems to favour a linear relationship, as in the basic indicator and standardised approaches.

Representatives of the Committee’s risk management group say they have no empirical evidence that unexpected losses are not linearly related to expected losses. But neither do they have evidence that they are!

In fact, it is beyond belief that a model based on the occurrence of a large variety of loss events would have unexpected losses proportional to expected losses.

If the size of a business were to double in a year, or if two similar businesses of equal size were to merge, what is more likely for the new business: that the number of loss events remains stable and the severity of each loss event doubles on average, or that the number of loss events doubles on average and the severity of each loss event remains relatively stable? Most bankers would say the second case is the more likely.

In that case, a square root relationship between expected and unexpected losses would be closer to the mark. We hope unexpected losses will not have to be defined as a function of expected losses, but if they had to be, freedom to select a non-linear model should certainly be left to the industry.
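The square-root relationship can be illustrated with a simple frequency model: if loss events arrive with a Poisson-distributed count of mean lambda and roughly constant severity, expected losses scale with lambda while the unexpected loss at 99.9% grows only like its square root (a sketch; the event rates below are invented for illustration):

```python
import math

def poisson_ul_999(lam):
    """99.9% quantile of a Poisson(lam) event count, minus its mean:
    the 'unexpected' number of loss events at the 99.9% confidence level."""
    cum, p, k = 0.0, math.exp(-lam), 0
    while True:
        cum += p
        if cum >= 0.999:
            return k - lam
        k += 1
        p *= lam / k

# Doubling the size of the business doubles the expected number of loss events
# (lam -> 2 * lam), so expected losses double, but the unexpected loss grows
# roughly like the square root of lam, not linearly.
for lam in (25, 100, 400):
    print(f"lam = {lam:4d}: EL ~ {lam} events, UL(99.9%) ~ {poisson_ul_999(lam):.0f} events")
```

Quadrupling lambda roughly doubles the unexpected loss, as the square-root rule predicts, rather than quadrupling it as a linear rule would.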

Third unhelpful suggestion: adding capital charges

The Basle regulators would like to sum up the capital charges of all 56 combinations of business lines/event types. They argue that in the absence of empirical evidence to the contrary, it is prudent to assume that all op risks are perfectly dependent -- that is, they will happen all at once or not at all.

The regulators propose this rule for both the loss distribution approach and the internal measurement approach. But this path is not as safe as they think.

By not recognising diversification, CP2½ provides no incentive for firms to seek diversification. Rather, there is an incentive for firms to specialise in the activities in which they are most successful, even if those activities are narrowly defined. Doing so may lead to short-term efficiency gains, but financial institutions will be less secure as a result. Increased specialisation is one explanation for the increasing volatility of equity markets at the firm level relative to market indexes that has been observed over the last two decades.

Proposed quantitative standards, says CP2½, would allow a bank to "... recognise empirical correlations ... provided it can demonstrate that its systems for measuring correlations are sound and implemented with integrity."

One could be forgiven for inferring that, if there is no empirical evidence of correlation, one should assume zero correlation. In fact, the committee intends the opposite! It adds: "In the absence of specific, valid correlation estimates, risk measures for different business lines and/or event types must be added for purposes of calculating the regulatory minimum capital requirement."

There is nothing wrong with empirical evidence, but with rare loss events, empirical evidence will always be weak or non-existent, even in industry-wide databases. Alternatively, there may be strong logical arguments for or against connecting loss events, and we believe that most of the time they will point towards no connection.

Adding capital charges for the eight business lines recognised under the standardised approach was already a dubious exercise. It is even more questionable in advanced approaches with 56 business line/event-type cells.

Take a couple of examples. Should unexpected losses from, say, internal fraud in corporate finance, be related to loss of physical assets in asset management? Should the catastrophic loss of physical and human assets due to terrorism be related to similar losses due to earthquakes? Reason says ‘no’ in both cases, but by pure coincidence, such loss events may indeed happen almost simultaneously.

Perhaps confusion between expected and unexpected losses is to be blamed again. It would be true to say that at a company where operational risks are tightly controlled, one would expect losses to be smaller in all op risk categories than at a firm with poor controls. But that does not imply any dependency between unexpected losses of different types within the same firm, be it well or poorly managed.

If loss events across categories were largely independent, the appropriate aggregation rule would be to calculate the total op risk capital charge as the square root of the sum of the squares of the charge per business-line/event-type category.

If capital charges are calculated for each category at the 99.9% confidence level and then added together when reason indicates the losses are independent, the implied global confidence level could correspond to a loss rarer than once in the life of our universe, and of billions of parallel universes besides.
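Under a normal approximation with independent cells of equal size, the two aggregation rules, and the confidence level implied by the adding rule, can be compared directly (a sketch; the normal approximation and the equal-sized cells are our simplifying assumptions):

```python
import math

n_cells = 56   # business line / event type combinations
z_999 = 3.09   # standard normal deviate at the 99.9% confidence level

# Per-cell capital charge at 99.9%, in units of one cell's loss standard deviation
cell_charge = z_999

additive_charge = n_cells * cell_charge                # Basle's adding rule
diversified_charge = math.sqrt(n_cells) * cell_charge  # independent risks:
# square root of the sum of squares of 56 equal charges = sqrt(56) * cell_charge

# If the 56 cells really are independent, total losses have standard deviation
# sqrt(56), so the additive charge sits z_999 * sqrt(56), about 23, standard
# deviations out in the tail of the total loss distribution.
z_implied = additive_charge / math.sqrt(n_cells)
tail_prob = 0.5 * math.erfc(z_implied / math.sqrt(2))  # annual exceedance probability

print(f"additive charge:    {additive_charge:.1f} (per-cell std units)")
print(f"diversified charge: {diversified_charge:.1f}")
print(f"implied annual exceedance probability: {tail_prob:.1e}")
```

The implied exceedance probability is smaller than one in 10^100 per year; for comparison, the universe is of the order of 10^10 years old.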

Regrettably, the capital charges adding rule is pervasive. It extends to the summation of capital charges for credit, market and op risks. Replacing this falsely safe rule with a more realistic aggregation rule would do more for the development of truly risk-sensitive capital charges than many of the refinements and extensions of Basle II.

Unless some of the current suggestions and standards for advanced measurement methods are removed or loosened, few banks will find sufficient incentives to design and implement an advanced measurement approach.

In a follow-up article, we will put forward our own suggestions for op risk measurement models that could meet the criteria of risk sensitivity, consistent calibration and relevance, as well as being cost-effective, conducive to better op risk management and well integrated with other capital requirements.

Working Paper on the Regulatory Treatment of Operational Risk, from the Basle Committee on Banking Supervision, is available on the Bank for International Settlements’ website, www.bis.org.

Jacques Pézier, a career banker, is honorary professor of Warwick University, UK.
e-mail: jacques@pezier.homechoice.co.uk

Basle’s understanding of advanced measurement methods

In CP2½, the Basle Committee’s risk management group, which drafted the working paper, sets out its current understanding of three types of advanced approaches that might be used by qualifying banks: the internal measurement approach of CP2, the loss distribution approach and scorecards.

The internal measurement approach is a simple extension of the standardised approach, with each of the standardised approach’s eight business lines broken down into seven types of loss event.

CP2½ says loss experience should be collected for the relevant combinations of business line and event type. An exposure indicator should be defined, and a formula linking that indicator to an op risk charge designed and calibrated against loss experience.

The paper suggests gross income -- but stops short of imposing it -- as the sole exposure indicator and "also proposes" a linear model relating the charge to the loss indicator with proportionality constants, or ‘gammas’. Gammas would be calibrated by individual banks rather than derived by the regulator from industry-wide experience.
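As a hypothetical illustration of the internal measurement approach, expected loss per cell can be written as the product of an exposure indicator (EI), a probability of loss event (PE) and a loss given event (LGE), with the capital charge set to gamma times that expected loss (a sketch; the business lines, event types and all figures below are invented):

```python
# Hypothetical illustration of the internal measurement approach: per cell,
# expected loss EL = EI * PE * LGE and the charge is gamma * EL, the charges
# then being added across cells. Every figure below is invented.
cells = [
    # (business line, event type, EI, PE, LGE, gamma)
    ("retail banking", "external fraud",    5e9, 1e-4, 0.40, 8.0),
    ("trading & sales", "execution errors", 2e9, 5e-4, 0.10, 5.0),
]

total_charge = 0.0
for line, event, ei, pe, lge, gamma in cells:
    el = ei * pe * lge   # expected loss for the cell
    charge = gamma * el  # linear in expected loss: the assumption at issue
    total_charge += charge
    print(f"{line}/{event}: EL = {el:,.0f}, charge = {charge:,.0f}")
print(f"total op risk charge: {total_charge:,.0f}")
```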

The loss distribution approach simply states a goal without the regulator imposing any particular method for reaching it. The approach takes flesh when a particular technique is used to reach the goal such as assessing separately a frequency distribution and a severity distribution for loss events and combining the two to generate the total loss distribution.

Proponents of the scorecard approach claim that it is the only forward-looking approach, but it is essentially a qualitative amendment to an internal measurement approach. It may be admissible as a complement to an internal measurement approach, subject to verification.

Banks are encouraged to develop their own op risk assessments, but the regulators impose very strict definitions and qualifying criteria for all three advanced approaches, and indeed for any potential advanced approach that might be developed.

Most importantly, the resulting risk measure for determining regulatory capital should reflect an exposure horizon of one year and a confidence level of 99.9% -- that is, it should cover any loss level that has more than one chance in a thousand of occurring over a year.

The risk measure must be supported by loss data and appropriate analytics. There must be regular ‘validation’ of parameter estimates and results based on subsequent loss experience or other techniques.
