Modelling cyber risk: FAIR’s fair?
Proponents say factor analysis can be applied to cyber risk; detractors retort that its results are still guesswork
Of all the potential loss events banks’ operational risk managers are tasked with trying to model, cyber attacks are among the most challenging.
For a start, say op risk managers, there's the sheer range of cyber threats banks are exposed to, and the wide variability in the frequency with which they occur. Distributed denial-of-service attacks, viruses and email phishing threats are everyday occurrences for a bank; successful ransomware attacks and data breaches that result in large data thefts are – for now – relatively uncommon.
Modelling the frequency of any potential op risk loss event is difficult, but practitioners argue this is especially true for cyber, for three reasons.
“Firstly, I think there is a non-linear relationship between controls and losses, as the controls are only as good as the weakest link,” says one senior op risk manager, citing the examples of staff turning off anti-virus software to download an attachment, or responding to a convincing-looking phishing email.
Secondly, the relationship could hold the other way, too: a large, sophisticated bank could have inadequate cyber defences – but provided it is perceived to be strong, there is evidence to suggest it is less likely to be a target for cyber attack. The op risk manager cites recent payment network frauds being concentrated on emerging market banks as an example.
Finally, a bank cannot model its exposure to a so-called zero-day attack – one that exploits a previously unknown vulnerability in its cyber safeguards, for which, by definition, it has no defence.
The loss impact of any of these events is also highly variable. For example, regulatory fines for poor systems and controls processes in the event of a data breach will be set at the discretion of supervisors; banks subject to the European Union’s forthcoming General Data Protection Regulation could be whacked with fines of up to 4% of their annual turnover in the event of a serious breach, or 2% if they simply fail to notify their regulator within 72 hours.
Other losses – ransom payments to cyber thieves, compensation to affected customers, loss of future business due to reputational damage – are also difficult if not impossible to quantify with any accuracy. Small wonder, then, that Rohan Amin, chief information security officer at JP Morgan, describes trying to model the loss a bank can expect from a particular cyber event as “at best, a guess”.
More than a decade after it was first applied to modelling cyber risk, the most commonly used approach to quantifying cyber threats among banks remains the Factor Analysis of Information Risk (Fair) model. The approach provides a straightforward map of risk factors and their interrelationships, with its outputs then used to inform a quantitative analysis, such as Monte Carlo simulations or a sensitivities-based analysis.
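To make the link between Fair's factor map and the quantitative step concrete, below is a minimal sketch, in Python, of the kind of Monte Carlo loss simulation such outputs might inform. Everything here is an illustrative assumption rather than part of the Fair standard or any bank's model: event frequency is taken to be Poisson, single-event loss magnitude lognormal, and the parameter values are invented for the example.

```python
import numpy as np

# Minimal Fair-style Monte Carlo sketch. All parameter values below are
# hypothetical, chosen purely for illustration.
rng = np.random.default_rng(seed=42)
n_years = 100_000  # number of simulated years

annual_event_rate = 2.5        # assumed mean loss events per year (Poisson)
magnitude_median = 250_000.0   # assumed median single-event loss, USD
magnitude_sigma = 1.4          # assumed lognormal shape: heavy right tail

# Draw the number of loss events in each simulated year...
event_counts = rng.poisson(annual_event_rate, size=n_years)

# ...then draw a lognormal loss for each event and sum within the year.
annual_losses = np.zeros(n_years)
for year, k in enumerate(event_counts):
    if k > 0:
        losses = rng.lognormal(mean=np.log(magnitude_median),
                               sigma=magnitude_sigma, size=k)
        annual_losses[year] = losses.sum()

print(f"mean annual loss:   {annual_losses.mean():>14,.0f}")
print(f"95th percentile:    {np.percentile(annual_losses, 95):>14,.0f}")
print(f"99.9th percentile:  {np.percentile(annual_losses, 99.9):>14,.0f}")
```

The fragility critics point to shows up immediately in such a model: nudging the assumed event rate or the lognormal shape parameter, both of which must effectively be guessed for rare events such as major breaches, can swing the 99.9th percentile by multiples while leaving the mean looking respectable.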
Proponents say the approach helps banks order and prioritise their defences against the myriad threats they face; detractors say its outputs are only as reliable as the inputs, which, due to the nature of the threats in the case of cyber risk, are inherently based on guesswork.
Shorn of a way of predicting losses accurately, banks may look to the traditional risk-transfer medium of insurance – though underwriters have long struggled to model the potential impact of cyber threats too. Modelling techniques have evolved rapidly in the past couple of years, firms say; it is now common for underwriters to tap the services of catastrophe modelling firms – companies more used to assessing potential losses from natural disasters – as well as niche cyber security firms, which can use a range of covert techniques, such as ethical hacking, to assess a potential client's defences.
However, amid swelling demand from banks for cyber cover, some fear underwriting standards have gone backwards: "Many underwriters are not doing any underwriting at all. They're simply saying, 'this is my price for the [policy] limit'. It's rather scary," John Elbl, cyber risk expert at Air, a catastrophe modelling firm, tells Risk.net.
Banks are all too cognisant that insurance can only ever be a loss mitigant, not a defence against a potentially existential threat. As Gilles Mawas, senior expert in cyber, IT and third-party risk at BNP Paribas, recently put it: "Being reimbursed after you're dead is irrelevant. If you lose €3 billion–5 billion ($3.4 billion–5.6 billion) and two years later you get back 50%, what's the point?"