
Outsourced model validation: is it viable?

Consortium promises cost savings in outsourcing model validation, but for some model risk pooling doesn’t float

  • Facing increased regulatory oversight and squeezed by costs, banks are considering outsourcing some model validation tasks.
  • Crisil, a unit of S&P, is working with four large banks to create standardised components for repetitive processes such as data cleansing, workflow automation, and backtesting.
  • Some outsiders question the value of outsourcing, saying it raises issues of control and governance. While automation can reduce costs, banks can do this on their own, they say.
  • Regulators, particularly in the US, look askance at outsourcing of model validation because it could run against the principle of having an independent challenge and review.
  • In Europe, where regulators are focused on the Basel capital models, banks have a greater incentive to reduce overhead via standardisation than US banks, which are more concerned with models used for decision making.

Cost pressures, a multiplicity of models and heightened regulatory requirements are leading some banks to consider outsourcing the more mundane and repetitive aspects of model validation to third parties. A group of European banks is working with Standard & Poor’s to create a risk model utility.

But there are real questions around whether such a thing can work in practice – especially for US banks.

Crisil, a unit of S&P, is working with HSBC and three other large European banks on a risk model utility (RMU) they believe can trim model development and validation costs by as much as 20%.

“The idea behind this utility is to uniformly identify standardised processes and be centralised through an industry resource to do it effectively, so the bank doesn’t have to replicate it internally,” says a model risk executive at one of the Crisil banks.

Crisil’s RMU is intended to help banks develop and validate models by creating and sharing basic building blocks, such as model development workflow, common data and supporting documentation. The project has been underway for about a year and is planned to go public in early 2020. The idea is to alleviate banks’ need to build sprawling infrastructures to support their model libraries, potentially unlocking large cost savings.

But the attractions of model risk pooling are not unalloyed. Sceptics point to questionable savings in time and other resources, as well as issues of complexity in automating model validation, such as the handling of portfolios. Also, US regulators have not been as welcoming of the development as their European counterparts, possibly because they are not as concerned with models for calculating risk-weighted assets.

There is increasing focus on model validation from both regulatory and internal risk management demands. In the US, the Federal Reserve Board’s supervisory letter SR 11-7 lays out expectations for key aspects of model validation, including model development, ongoing monitoring and backtesting. The European Central Bank’s targeted review of internal models (Trim) has similar goals.

Running in place

Validation is running just to stand still, because the models are sophisticated and regulators’ requirements are rigorous. While it has resulted in massive investments in analytical tools, banks are still unable to keep pace with demands. The days when a quant could create a spreadsheet and the bank would trade billions of dollars of notional in derivatives without validating the source code are long gone.

Model validation teams have grown from statisticians who ensure the conceptual soundness of the model, to encompass other highly specialised roles. Teams now require data quality experts to ensure the integrity of the inputs to models. “We are far beyond the point where model validation meant looking at a couple of formulas. There are now explicit requirements to not only look at the mathematics of the model, but to consider data uncertainty,” says Thomas Obitz, director of RiskTransform, a risk management consulting company. “Regardless of whether you are sourcing this internally or externally, model validation needs to draw on a broad set of skills.”

Many banks are looking to optimise model development and validation. This has increased the appeal for third-party providers such as ModVal, which offers a free library of model validation software tools. The platform, introduced in 2014, has thousands of users representing several hundred banks. 

“For FRTB [Fundamental Review of the Trading Book] as well as [Basel’s] SA-CCR and other regulatory frameworks, you have to be a member of the priesthood to understand it,” says Alexander Sokol, founder of CompatibL, a risk analytics company that created ModVal. “A lot of things are unwritten or unclear. We make it easier to set up desks.”

Meanwhile, Crisil is attempting to create a consortium-based approach to standardising some elements of model development and validation. The lifecycle of a model begins with an initial development phase of gathering and analysing data, then building the model itself. Banks use components for everything from scrubbing data to creating algorithms, some of which could be standard routines invoking code residing in shared libraries. Documentation also needs to be created, and parts of it can be standardised. During the testing and validation phase, challenger models are built, and the models are backtested and monitored for performance. Each of these components has at least some element of standardisation.
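The reusable building blocks described above can be pictured as a small shared library that each consortium bank invokes against its own portfolio data. The sketch below is purely illustrative – the function names and cleansing rules are invented for this article, not Crisil’s actual components:

```python
import pandas as pd

def scrub(data: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical standardised data-cleansing step: drop duplicate
    records and rows with missing values before model development."""
    return data.drop_duplicates().dropna()

def summarise(data: pd.DataFrame) -> dict:
    """Hypothetical standardised documentation stub: record what the
    cleansed dataset looks like, for the model's supporting file."""
    return {"rows": len(data), "columns": list(data.columns)}

# Each bank runs the shared routines on its own (here invented) data.
raw = pd.DataFrame({"obligor": ["A", "A", "B", "C"],
                    "exposure": [1.0, 1.0, 2.0, None]})
clean = scrub(raw)      # the duplicate "A" row and the incomplete "C" row are removed
doc = summarise(clean)
```

The point of the sketch is that the routines are written once and shared, while the data they run on never leaves the individual bank.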

Banks plan to use Crisil as a third-party resource to outsource some of the components. These include preparing and analysing data and the iterative process of running algorithms and analytics on the data, which creates the models. Once the bank has decided it has a model it wants to put into production, the next stage is deployment, followed by monitoring of the outputs by independent validation teams.

The value of such an arrangement rests on the fact that many of the models banks use are similar, if not identical. By creating reusable components, the consortium allows model development to be performed only once instead of by each individual bank. What’s more, ensuring the model is conceptually sound – one of the core aspects of validation – can be reused as well.

“The test of whether the mathematics of the model makes sense only has to be done once if you use the model for several banks in such a utility,” says Obitz.

We are far beyond the point where model validation meant looking at a couple of formulas
Thomas Obitz, RiskTransform

Beyond ensuring the model’s integrity, a consortium could provide a centralised pool of resources, such as data quality experts to test how the model will perform against each bank’s actual portfolio. Here, the business case is less about automation than classic outsourcing – that is, relieving banks of the need to expand their validation teams. “The value is: you have specialists who can do the job, which before was done by one team in every bank. These specialists do certain types of tests on the data in a standardised way and it is more effective than if each bank does that on their own,” says Obitz.

Worth the effort?

But how effective can validation outsourcing be? Do the necessary rigours of testing and parameter maintenance – almost, if not just, as demanding when outsourced – cancel out the benefits?

While the effort is motivated chiefly by a desire to reduce costs, outside experts are questioning how the savings can be realised. “My experience with outsourcing is I spend more time reviewing the reports from the outsource team. I don’t save much time by outsourcing validation,” says Lourenco Miranda, head of model risk management, Americas, at Societe Generale.

Due to the number and complexity of models – which are used to make credit decisions, determine regulatory capital requirements, evaluate portfolio risk and assess operational risk, among many other purposes – validation necessarily entails a high degree of human intervention. Even automating repetitive tasks requires a great deal of judgement.

“Automating validation is not an easy task. We have been doing this the last three years for our own models,” says Agus Sudjianto, head of corporate model risk at Wells Fargo. “We are building our own platform to do some kind of standardised automated testing. It’s very hard even though we know the model and have control over that.”

While supportive of the idea of sharing modules for model validation, outside experts say there are potential limitations as well. In pooling data, for example, regulators insist data should be representative of the portfolio and trading activity of the individual bank. For example, if a model is used to calculate probability of default for some portfolios in Asia, Asian regulators may insist data from Asian counterparties be used for calibration of that model. “Global organisations operating across different regulatory regimes may have to apply a targeted calibration of models to different markets. Use of a global pool of data may not be allowed,” says Slava Obraztsov, global head of model risk at Nomura.

A question of control

Outsourcing models, however compelling the business case, raises the question of who controls the model – the developer or the bank itself. “It might indicate a more general issue. As long as models are developed outside the full house control, what the governance and controls are is not clear,” says Obraztsov. “This is a crucial point that the house should be fully in control of model application, especially for material or important models.”

All models need to apply specifically to internal portfolios. Outsourcing model validation might be appropriate for models that are very narrow in scope, but whenever a model has a material impact, it needs to be vetted internally by the bank, say outside experts.

For portfolio models, it makes no sense to outsource validation. You need internal expertise
Peter Quell, DZ Bank

A distinction can be drawn, for example, between credit risk rating models, which apply to a single issuer, and portfolio models – statistical simulation techniques that apply to the complete portfolio.

“For portfolio models, it makes no sense to outsource validation. You need internal expertise. For individual ratings models, it might be easier to outsource,” says Peter Quell, head of portfolio model development at DZ Bank.

A model risk executive at a large US bank that is not part of the Crisil consortium notes: “My number one concern with outsourcing validation to a third party is: will this third party be able to adapt to the standards of HSBC or Santander or whatever else bank is using it? I don’t see the benefit of having a standardised validation. Once it becomes specific it becomes a consulting project.”

Rather than pass responsibility to a third party, banks could achieve the same benefits by automating processes themselves. Take market risk: in validating a market risk model, a bank will typically backtest by comparing its value-at-risk forecast with actual results for each trading day of the past year, across different portfolios. There are two datasets – one for the VAR forecasts and one for the realised results – which are then joined. That join can be automated very rapidly, freeing the validator to interpret the results and to think more creatively about how the model can go wrong. The time saved could be invested in critical examination of the risk model.
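The backtesting join described above is simple enough to sketch. The Python example below is illustrative only – the VAR figures and P&L series are made up – but it shows the mechanical part that can be automated, leaving interpretation of the exception count to the validator:

```python
import pandas as pd

# Made-up one-year samples: daily 99% 1-day VAR forecasts (reported as a
# positive loss amount) and the realised P&L for the same trading days.
var_forecasts = pd.DataFrame({
    "date": pd.bdate_range("2019-01-01", periods=250),
    "var_99": [1_000_000.0] * 250,
})
realised_pnl = pd.DataFrame({
    "date": pd.bdate_range("2019-01-01", periods=250),
    "pnl": [50_000.0] * 248 + [-1_200_000.0, -1_500_000.0],  # two large losses
})

# Join the two datasets on date -- the step that is trivially automated.
backtest = var_forecasts.merge(realised_pnl, on="date", how="inner")

# A backtesting exception occurs when the realised loss exceeds the VAR forecast.
backtest["exception"] = backtest["pnl"] < -backtest["var_99"]
exceptions = int(backtest["exception"].sum())

# Basel traffic-light zones for 250 observations at 99% confidence:
# green for up to 4 exceptions, amber for 5-9, red for 10 or more.
zone = "green" if exceptions <= 4 else ("amber" if exceptions <= 9 else "red")
```

Producing the exception count and zone is mechanical; deciding what two breaches in a year say about the model is where the validator’s judgement comes in.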

Many banks have automated parts of model validation using machine learning technology. An example is workflow automation, where reports are generated from the databases that are populated each day. However, it’s difficult to fully automate because at some point human intervention is needed. “There are standard tools to automate. Banks can do that by themselves. Interpreting the results can’t be automated, but someone with expertise has to come to a conclusion as to what the validation results mean,” says Quell.

Europe vs US

The banks involved with Crisil are all based in Europe, and some question whether the utility would ever fly with US regulators. This can partly be attributed to a difference in regulatory mindset: European regulators are much more focused on the Basel models, whereas US regulators, having imposed standardised floors, are not primarily concerned with models for calculating risk-weighted assets.

Banks, taking their cue from regulators, have set their priorities accordingly. European banks, more constrained by the Basel models, are understandably looking for ways to reduce costs. “European banks don’t buy into model risk management. They look at it as a burden. If they can get away from the pain of regulation, they will do,” says a model risk executive at a major US bank.

The models US banks care about are not the Basel models but those used for decision-making, such as anti-money-laundering or online banking models. Many banks use machine learning-based models that offer products to customers when they log in to their online banking accounts. Such models can create reputational risk because they raise potential discrimination and fair-lending issues. The focus of US and European banks therefore differs.

“European regulators care about making the Basel model more consistent. Our experience with US regulators on their expectations for model validation [is that] having a generic validation will not fly because they are putting the emphasis on how this applies to your own portfolio and exposure. So, it’s a very different view,” says the model risk executive at the US bank.

Regulators in general are wary of outsourcing model development and testing, out of concern that it could compromise the need for independence. Although the European Central Bank endorses the concept of banks using pooled data and models for credit risk, it imposes strict requirements on such pooling. For example, banks must be able to demonstrate that the pooled data is representative of their loan portfolios, and that they are capable of independently managing and maintaining the model.

The UK’s Prudential Regulation Authority, in guidance for risk management of stress-testing models, requires banks to perform “appropriate model validation and independent review activities to ensure sound model performance and greater understanding of model uncertainties”.

Adds the model risk executive at the US bank: “European regulators are way more lenient. I see more resistance here locally. It’s going to be more difficult to overcome 11-7 expectations than Trim.”

From the standard to the particular

When tools are available, it’s positive that they can be shared. However, it’s also important for banks to maintain flexibility to adapt those tools for their own needs. An example is the Standardised Initial Margin Model (Simm) for non-cleared OTC derivatives, introduced by the International Swaps and Derivatives Association in 2016 to satisfy regulatory requirements on initial margin. It was simple and robust yet transferable, lending itself to standardisation. Unlike regulatory capital requirements, which are determined individually by each bank, initial margin is determined bilaterally between counterparties. It therefore made sense to have a global standard for initial margin.
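To see why a bilateral standard lends itself to this treatment, consider the shape of a Simm-style calculation: the aggregation formula is fixed industry-wide, while only the sensitivities fed into it are bank-specific. The sketch below is a toy within-bucket aggregation with invented risk weights and correlation – it is not Isda’s calibrated methodology:

```python
import math

# A bank's own delta sensitivities by tenor (invented figures).
sensitivities = {"2y": 120.0, "5y": -80.0, "10y": 40.0}
# Hypothetical per-tenor risk weights and intra-bucket correlation --
# in the real Simm these are calibrated and published by Isda.
risk_weight = {"2y": 50.0, "5y": 45.0, "10y": 40.0}
rho = 0.5

# Weighted sensitivities: WS_k = RW_k * s_k
ws = {k: risk_weight[k] * s for k, s in sensitivities.items()}

# Margin = sqrt( sum_k WS_k^2 + sum_{k != l} rho * WS_k * WS_l )
total = sum(ws[k] * ws[l] * (1.0 if k == l else rho)
            for k in ws for l in ws)
margin = math.sqrt(total)
```

Because the formula and parameters are common to all participants, two counterparties running it on the same trades should agree on the margin – which is exactly what a bilateral requirement needs, and what bank-specific capital models do not.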

While the objective of Simm was clear, as with any general approach, it proved more suitable for large, well-diversified portfolios than for portfolios with specific concentration risk. “Even with standard models like Simm, when you implement it, it becomes a very particular model. So to standardise it is very complex,” says Societe Generale’s Miranda.

Nomura encountered resistance from Isda when it sought changes to the model. When the bank initially validated Simm in 2016, it raised a number of recommendations on how the model should be improved, but could not get them approved by the Isda governance committee: the recommendations might not have been relevant to other banks, and did not satisfy the specific criteria Isda had put in place for model changes to be incorporated. With the latest release of Simm in December, one of Nomura’s recommendations was addressed. “They maintain strong industry-wide requirements on whether models should be changed. Some changes might suit only specific houses, so model governance is affected,” says Obraztsov.

Editing by Louise Marshall 
