Sponsor’s article
Pan-European Credit Data Consortium Case Study: Credit Data Pooling by Banks for Banks
Institutions seeking to apply the internal ratings-based (IRB) approach under Basel II are required to provide accurate estimates of their credit risks. However, most banks do not have enough internal data on default and recovery to calculate reliable statistics, and external sources, where they exist, have not always been appropriate, especially in Europe. To overcome these problems, a group of European banks has decided to pool its credit data. The banks have formed the Pan-European Credit Data Consortium (PECDC) and have hired Algorithmics to provide a platform and services for the collection, processing and delivery of the data.
In its first full pooling exercise in April 2006, 14 members of the consortium contributed data, with a further 16 expressing their intent to join the PECDC and participate in later rounds. For many banks the data will not only be useful for Basel II compliance, but will also help meet other business objectives. In fact, it was the desire to meet business rather than regulatory needs that prompted Dutch independent private merchant bank NIBC (formerly NIBCapital) to initiate the data consortium project.
Substantial growth of its investment activities is one of the key pillars of NIBC’s strategy. In 2002, NIBC completed the first securitization of a shipping loan portfolio. But despite the portfolio being one of the best in the bank’s loan book, NIBC discovered that its value was not properly recognized in the market price of the collateralized loan obligation (CLO) it issued.
“Banks like to keep their cards close to their chests when it comes to their loan statistics, and the statistics of one bank are not enough to convince the rating agencies of the value of a portfolio,” says Jeroen Batema, Head of Portfolio Management at NIBC. Meanwhile, trying to educate institutions about the behaviour of shipping banks: how well they know and understand their market, how they take advantage of the investment cycle, and how they use loan covenant structures to maximize returns, was a daunting task. The alternative was to describe the risk in the form of credit statistics, the type of information with which banks and other investors are more familiar. The problem was that NIBC’s data alone would not offer sufficient credibility. “Investors want a broader base than that provided by a single institution, so we needed to combine our data with the loss statistics of a large group of banks,” says Batema.
An alternative approach based on using loss statistics of shipping company bonds was rejected by NIBC. “Many securitizations are parameterized using bond statistics, but it is our observation that bank loans behave differently from bonds, not only in relation to loss given default, but also for probability of default,” says Batema.
So NIBC tried to interest other banks in pooling their data to produce combined loss statistics. The first step was to participate in an initiative of the Dutch Bankers Association, but apart from a project with one large bank to collect shipping statistics, which gave NIBC a useful idea of what such a collaborative pooling of data would entail, the idea did not take off locally. “By the end of 2003, we thought we should look abroad, as many of the asset classes in which we are active are in fact international. If we could find interest from banks in other countries, then we thought the large Dutch banks would probably follow,” says Batema.
Initially, Batema spoke to other smaller specialized lending banks in a number of European countries, only to discover that loss given default (LGD), probability of default (PD), and associated risk statistics were not relevant to them at that time. However, through contacts in the UK, France, and elsewhere, expressions of interest in the project came from a number of larger banks, including Barclays, Calyon, JPMorgan, and Royal Bank of Scotland. A meeting was set for June 2004, to which some providers of credit data, including Algorithmics, and some observers, including the European Banking Federation and the European Investment Bank, were also invited.
The meeting took place two days after the central bank governors of the G10 countries endorsed the Basel II framework. “So it was a good time to start these discussions,” says Batema. “But looking at a longer time horizon, we believed that the data would also help with our business by enabling us to get better insight into our risk, and to better prioritize different credit proposals, as well as acting as a bridge to the institutional market.”
The meeting evaluated the European credit data initiatives that were underway at the time, but most banks expressed reservations about the success of these projects. There was a strong feeling amongst the banks that they wanted to control any new project in which they participated. The best solution appeared to be a partnership between a consortium of banks that would contribute data, and a third-party data management specialist.
“The banks were interested in driving the project, but understood that it was not their core competence to pool data, but to be active in risk, so we decided to combine with a third-party that had this [data-pooling] expertise,” says Batema. This approach allowed the banks to apply their credit expertise, and to be active in designing the data resources that they required for their business. “It enabled the banks to contribute their expert knowledge of the value of collateral, how to work with covenants and contracts, and how to value them,” says Batema.
With a clear objective of creating an inter-bank pool of credit loss data, and agreement on the best way forward, the meeting formally established the Pan-European Credit Data Consortium, with Barclays, BNP Paribas, Dresdner Bank, NIBC and Royal Bank of Scotland forming the management committee and Batema serving as chairman. A date in September was set for the next meeting, and a number of potential data management partners were invited to present proposals on the pooling of LGD data, which the banks agreed should be their first priority.
Following the presentations, the PECDC selected Algorithmics as its partner. There were several reasons for this choice. Algorithmics already had over seven years’ experience of collecting loss data in the US through its North American Loan Loss Database, which included both loss given default and probability of default data (PD data being the consortium’s next priority). Algorithmics also had a good reputation, as well as a profit incentive. “We thought that this was very important,” says Batema. “We wanted a professional organization that would invest in a good quality data service, and we believed that the organization would only do this if it could make a profit out of it.”
One of the key requirements of a partner was that it should understand the banks’ intention to control the project. “Algorithmics understood best of all the potential partners that an industry-led initiative had the best chance of success,” says Batema. Algorithmics agreed to abandon the initiative it already had underway to collect credit data in Europe, and adopted the PECDC business model for a bank-controlled data pool.
A contract between the banks was signed in December 2004, and between then and June 2005 the partners worked on a template for the data, defined the pooling structure and process, and drew up the contract with Algorithmics.
The banks decided to collect LGD data from 1998 onwards for eight asset classes: three regional classes, covering small- and medium-sized enterprises (SMEs), large corporates and real estate; and five global classes, covering project finance, commodities, shipping, aircraft and banks. The observation data would be collected at four points in the lifecycle of each loan: at the date of origination, one year before default, at default, and at resolution. Other information gathered includes the rating of the counterparty, the nature of the collateral and guarantees, the exposure at default (EAD), the value of the collateral, and the details of each recovery cash flow following default.
“Information on the cash flows between the moment of default and the moment of [resolution] is very important so that you can get a complete view of what kind of payments were made between the obligor and the bank concerning a loan, and the source of the payments, for example from the sale of collateral,” says Batema. This enables a bank to look at different collateral types, and see what kind of cash flows were received given a certain value at one year prior to default, and so on.
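As a purely illustrative sketch, a loan-level record capturing the kind of fields described above might look as follows; the class and field names are hypothetical and do not reproduce the actual PECDC template.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical sketch of a loan-level record along the lines described in the
# article; the real PECDC template is not reproduced here.

@dataclass
class RecoveryCashFlow:
    payment_date: date
    amount: float
    source: str                     # e.g. "sale of collateral", "obligor payment"

@dataclass
class Observation:
    as_of: date
    counterparty_rating: str
    collateral_value: float
    exposure: float

@dataclass
class DefaultedLoanRecord:
    asset_class: str                # e.g. "shipping", "project finance", "SME"
    country: Optional[str]          # may be withheld under the anonymity rules
    # The four observation points described in the article:
    at_origination: Observation
    one_year_before_default: Observation
    at_default: Observation         # exposure here is the exposure at default (EAD)
    at_resolution: Observation
    guarantees: List[str] = field(default_factory=list)
    recovery_cash_flows: List[RecoveryCashFlow] = field(default_factory=list)
```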
Together with Algorithmics, the banks created a single template for all asset classes, greatly increasing the efficiency of data extraction and delivery to the pool, says Batema. Mechanisms were also established to render the data anonymous. Borrower identity is protected by not including borrower names, and bank identities are protected by aggregating the data and by not assigning a country code to data unless at least three banks have contributed to a particular asset class in a particular country.
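A minimal sketch of that second anonymity rule, assuming each pooled record carries a bank identifier, an asset class and a country; the function and field names are illustrative rather than the consortium's actual implementation.

```python
from collections import defaultdict

def assign_country_codes(records, min_banks=3):
    """Blank out the country code unless at least `min_banks` distinct banks
    contributed data for that (asset class, country) combination."""
    contributors = defaultdict(set)
    for r in records:
        contributors[(r["asset_class"], r["country"])].add(r["bank_id"])

    anonymised = []
    for r in records:
        out = dict(r)
        out.pop("bank_id", None)    # bank identity is never published
        if len(contributors[(r["asset_class"], r["country"])]) < min_banks:
            out["country"] = None   # too few contributing banks: no country code
        anonymised.append(out)
    return anonymised
```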
Once the data has been collected, individual exposure, loss, and recovery values are calculated for each loan. Aggregate statistics are then published to participating PECDC member banks by industry sector, borrower type, and country, in accordance with the consortium’s rules.
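The article does not detail the consortium's loss methodology, but a common way to derive a realised ("workout") LGD from the fields above is to discount the recovery cash flows back to the default date and compare them with the exposure at default. The sketch below, using assumed figures and an assumed discount rate, illustrates the idea.

```python
from datetime import date

def realised_lgd(ead, cash_flows, default_date, discount_rate=0.05):
    """Workout-style loss given default: one minus the discounted recoveries
    as a fraction of the exposure at default (EAD).

    `cash_flows` is a list of (payment_date, amount) pairs received after default.
    """
    pv_recoveries = 0.0
    for payment_date, amount in cash_flows:
        years = (payment_date - default_date).days / 365.25
        pv_recoveries += amount / (1.0 + discount_rate) ** years
    return max(0.0, 1.0 - pv_recoveries / ead)

# Example with assumed numbers: EAD of 10m and two recoveries during the workout
lgd = realised_lgd(
    ead=10_000_000,
    cash_flows=[(date(2006, 6, 30), 4_000_000), (date(2007, 6, 30), 3_000_000)],
    default_date=date(2005, 6, 30),
)
print(f"Realised LGD: {lgd:.1%}")   # roughly 35% with a 5% discount rate
```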
The PECDC established the rules of participation. A bank has to contribute data from its European or global portfolios. It can choose which asset classes it sends data for, and from what period, although the latest acceptable start date was set at 2000. A bank will only receive back statistics for the classes and time periods to which it contributes data.
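As a rough sketch of this give-to-get principle (the consortium's actual entitlement rules are more detailed than this), the return of statistics could be gated along the following lines; the names are illustrative.

```python
def entitled_statistics(contributions, pooled_statistics):
    """Give-to-get: return only those aggregate statistics whose (asset class,
    year) combination the bank has itself contributed data for.

    `contributions` is a set of (asset_class, year) pairs the bank submitted;
    `pooled_statistics` maps (asset_class, year) to the published aggregates.
    """
    return {key: stats for key, stats in pooled_statistics.items()
            if key in contributions}
```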
“It is very important for us to avoid free-riders because it will kill motivation,” says Batema. A bank does not automatically qualify for membership when it applies. First, it has to prove that it can meet the consortium’s standards in terms of quantity and quality of data. “A bank has to make a significant statistical contribution to the whole pool of data. It is a burden for large banks to deliver their data, so they want to make sure that those who receive statistics from their data make a similar contribution,” he says. The consortium relies on Algorithmics to interpret and implement the checks on quality and quantity, and to monitor and report on member banks’ adherence to consortium standards and protocols.
The first pilot pooling took place in November 2005, with two UK, two Dutch, and two German banks participating. Although a large amount of data was collected, the PECDC was not able to publish very granular statistics, because fewer than three banks contributed from each country.
Nevertheless, the exercise was an important achievement, proving the concept and the process, says Batema. The pilot was followed by the first production pooling of data in April 2006. This was a great success, with 18 banks delivering data representing, on an annual basis, more than 50 times as many default observations as are available from the bond markets.
So what will the data be used for? Initially, most banks will use it for benchmarking, checking that their internal credit statistics are in line with the market. “As the quality of data improves, and as we get more data on defaulted loans, banks will probably use the statistics for calibrating their loss given default and probability of default models,” says Batema. Banks are starting to collect more data on their loans than they used to, so future statistics will be more appropriate for setting the parameters of such models. For now, the PECDC classes each field of historical data as either optional or required; for data from 2005 onwards, however, all fields will be compulsory.
Algorithmics has played a critical role in getting the PECDC data-pooling project off the ground. The company clearly understood how the consortium wanted to operate and was proactive in responding to the group’s initiative, committing resources and applying its expertise from the earliest stages, says Batema. Algorithmics visited the participating banks, providing advice on how to improve data quality, as well as facilities for banks to test their data before submission. It created the data template, the database, and the data collection, processing and publication infrastructure, helping to define the standards and methodologies to be used. All of this was accomplished within the very tight deadlines imposed on the project.
As the credit data-pooling project takes off, regulators have begun to take an interest. The PECDC has given a presentation to the Validation Subgroup of the Basel Committee’s Accord Implementation Group. The response was positive, says Batema. If the PECDC data template and statistics evolve into a de facto standard for the industry, it will be helpful both for banks and for those who supervise them.
“Inter-bank data pooling is an accurate, reliable, and cost-effective way of creating empirical data sets required to help banks estimate Basel II risk components,” says Batema. Ultimately, the PECDC’s data pooling could help commoditize credit risk—which was NIBC’s objective in the first place, says Batema.
“The more a bank can decrease the information asymmetry between itself and investors by giving them access to reliable credit statistics, so that they are able to assess the risk they are taking, the more confident they will be about investing and the more willing they will be to pay a higher price,” he says.
The PECDC Case Study is one in a series of Algorithmics’ case studies. For information about Algorithmics and to view other client case studies, visit www.algorithmics.com.