Machines can read, but do they understand?
How a novel NLP application built on Google’s Bert transformer model can help predict ratings transitions
In the quest to give human abilities to machines, reading text and assimilating its context is a crucial step. Understanding spoken or written language allows machines to process large amounts of unstructured data such as news or documents – freeing humans from the billions of hours of drudgery otherwise needed before that data can inform decision-making.
The theoretical foundations for natural language processing (NLP), a branch of artificial intelligence that studies how to process textual data, were laid decades ago. Many real-life applications are already taken for granted: spam filters, voice assistants, translation apps and chatbots.
But NLP’s applications in finance have emerged only much more recently. Banks have mostly used it for commoditised applications such as robo-advisory, or to credit-score applicants for credit cards and mortgages. The potential, however, is far greater.
Emanuel Eckrich, director and head of corporate rating methodologies at Deutsche Bank in Frankfurt, has used the technique to predict credit events such as rating downgrades or defaults. The initial motivation was to support the rating team during the large wave of ratings transitions heralded by the Covid-19 pandemic.
Eckrich and his co-authors, Phillip Escott, Rainer Glaser and Christoph Zeiner – all of Oliver Wyman – present a model trained to spot, in a vast sea of noise, the financial information and relevant news that could trigger ratings changes.
The approach has the potential to anticipate rating transitions that analysts might otherwise have spent considerable time determining.
“By supporting the analysts with an early credit warning, it frees up capacity and makes the process more efficient,” says Eckrich. “In particular, with regards to the Covid-19 application, I found it surprising how robust the approach proved to be. Even without making it very specific, the model performed.”
An NLP model’s job is to take unstructured data – typically text, or a mixture of text and numerical information – and turn it into structured data, a format that machines can process and rapidly run the rule over. To do this, it needs to use a so-called transformer model.
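As a rough illustration, and not the authors' own pipeline, the sketch below uses the open-source Hugging Face transformers library with the standard 'bert-base-uncased' checkpoint of the Bert model discussed below; both choices are assumptions made here. It shows the first step: a raw headline becomes a fixed sequence of integer token ids that the transformer can then map into vectors.

```python
# Minimal sketch (assumed setup, not the authors' pipeline): the first step of a
# transformer-based NLP model is turning raw text into structured data, here a
# sequence of integer token ids.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
headline = "Company X breaches debt covenants after profit warning"  # made-up example

encoded = tokenizer(headline)
print(encoded["input_ids"])                                    # integer ids, e.g. [101, ..., 102]
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))   # the corresponding word pieces
```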
Eckrich used Google’s bidirectional encoder representations from transformers – Bert, for short – an open-source neural network-based technique released in 2018 that has already become the industry-standard off-the-shelf tool for NLP applications. Google’s own search function uses it.
Bert has been trained on a humongous amount of textual data – including a large corpus of books and the entirety of English-language Wikipedia – using a novel ‘bidirectional’ technique that reads each word together with the words on both sides of it, learning the connections between every word and its surrounding context and thus building a genuine understanding of how language is used in real life.
This creates a representation of language with inbuilt contextualisation – a significant advantage over traditional frequency-based, or ‘bag of words’, approaches, which rely on counting words or short phrases in isolation in an attempt to highlight key ones. These can be used, for example, to flag emails containing phrases such as ‘you have been selected’ or ‘earn extra cash’ as spam.
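The difference is easy to see in a toy comparison. In the hedged sketch below, which is our own construction rather than anything from the paper, a bag-of-words count treats the word 'bank' identically in two unrelated sentences, while Bert assigns it two clearly different, context-dependent vectors.

```python
# Toy contrast between a frequency-based view and Bert's contextual view of the
# same word. The checkpoint and sentences are illustrative assumptions.
import torch
from sklearn.feature_extraction.text import CountVectorizer
from transformers import AutoModel, AutoTokenizer

sentences = ["The bank raised fresh capital.", "They picnicked on the river bank."]

# Bag of words: 'bank' is just a count in a column, identical in both sentences.
print(CountVectorizer().fit_transform(sentences).toarray())

# Contextual embeddings: extract Bert's vector for 'bank' in each sentence.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
vectors = []
for s in sentences:
    enc = tok(s, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]           # one vector per token
    position = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids("bank"))
    vectors.append(hidden[position])

# Cosine similarity well below 1.0: the two occurrences of 'bank' are not the same vector.
print(float(torch.nn.functional.cosine_similarity(vectors[0], vectors[1], dim=0)))
```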
Eckrich takes Bert’s outputs as inputs for his model. To make it work for the purpose of rating predictions, two key elements needed to be added. “The first is an approach to reduce noise in the data – and news flow data obviously is particularly noisy,” he explains.
The second element was applying machine learning classifiers to Bert’s output; these select the relevant information that could affect credit ratings.
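A hedged sketch of that second element is shown below. It is a toy construction of our own, not the classifiers described in the paper: documents are mean-pooled into Bert vectors, and a simple logistic regression, trained on purely hypothetical 'credit-relevant' labels, flags which items merit an analyst's attention.

```python
# Hedged sketch of the general idea, not the paper's classifiers: a simple
# supervised model sits on top of Bert's document vectors and flags which news
# items look credit-relevant. Texts and labels below are purely hypothetical.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumed checkpoint
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool Bert's last hidden layer into one vector per document."""
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state
    mask = enc.attention_mask.unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

news = [
    "Issuer misses coupon payment, lenders call emergency meeting",     # relevant
    "Issuer announces new sponsorship deal with local football club",   # noise
    "Auditor flags going-concern doubt in annual report",               # relevant
    "CEO to speak at industry conference next month",                   # noise
]
labels = [1, 0, 1, 0]  # hypothetical 'credit-relevant' flags

clf = LogisticRegression(max_iter=1000).fit(embed(news), labels)
print(clf.predict(embed(["Rating agency places issuer on negative watch"])))
```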
Developing all these components of a methodology in-house is unusual, quants note. “Building a pipeline of this kind is a very complex project and parts of it are often outsourced. ‘De-noising’ and feed selection are normally performed by different companies,” says Alexander Denev, former head of AI for risk advisory and financial services at Deloitte.
The limits of NLP
Adopting NLP does come with caveats.
“Besides obvious benefits like speed and performance of using pre-trained NLP transformer models in finance, one should be aware of the potential drawbacks, like [lack of] explainability, data biases and robustness to adversarial examples,” says Nino Antulov-Fantulin, co-founder and head of research at Aisot, an ETH-Zurich spin-off specialised in AI for finance.
Eckrich and his team acknowledge this; the group worked to demonstrate the interpretability of their model by studying and identifying the purpose of different sections of their network. But Eckrich is wary of the limits of NLP – the model cannot be used as a standalone rating engine, he says, and it may struggle to pass the scrutiny threshold needed for modelling regulatory capital.
“For this kind of approach, at the moment, there is probably significant regulatory hesitation regarding applications like determining capital requirements. A neural network which transforms text into a 768-dimensional vector space – and that was trained on an unspecific dataset – might be difficult to accept for a regulator,” admits Eckrich.
“The time when machines will analyse data autonomously and come to the conclusion without human input has not come yet in credit risk; for now they can only support humans,” he concludes.
Machines can indeed understand what they read – now, they just need to be instructed on what to do with it.