Why machine learning quants need ‘golden’ datasets
An absence of shared datasets is holding back the development of ML models in finance
Today’s computers are able to tell the difference between all manner of everyday things – cats and dogs, fire hydrants and traffic lights – because individuals have painstakingly catalogued 14 million such images, by hand, for the computers to learn from. Quants think finance needs something similar.
The labelled pictures used to train and test image recognition algorithms sit in a publicly available database called ImageNet. It’s been critical in making those algos better. Developers can benchmark their progress by how accurately their models categorise ImageNet pictures.
Without ImageNet, it would be far tougher to tell whether one model was beating another.
Finance is no different. Like all machine learning models, those used in investing or hedging reflect the data they have learnt from. So comparing models trained on different data tells quants a lot about the data, but far less about the models themselves.
Measuring a firm’s machine learning model against other known models in the industry, or even against different models from the same organisation, becomes all but impossible.
The idea, then, is to create shared datasets that quants could use to weigh models one against another. In finance, it’s a more complex task than just collecting and labelling pictures, though.
For one, banks and investing firms are reluctant to share proprietary data – sometimes due to privacy concerns, often because the data has too much commercial value. Such reticence can make collecting raw information for benchmark datasets a challenge from the start.
For another, the new “golden” datasets would need masses of data covering all market scenarios – including scenarios that have never actually occurred in history.
This is a well-known problem affecting machine learning models that are trained on historical data. In financial markets the future seldom looks like the past.
“If the dataset you train your model on resembles the data or scenarios it encounters in real life, you’re in business,” says Blanka Horvath, professor of mathematical finance at the Technical University of Munich. “If it’s significantly different, you don’t know what the model is going to do.”
The solution to both problems, quants think, could be to create some of the benchmark data themselves.
Horvath, with a team at TUM’s Data Science Institute, has launched a project called SyBenDaFin – synthetic benchmark datasets for finance – to do just that.
The plan is to formulate gold standard datasets that reflect what happened in markets in the past but also what could have happened, even if it didn’t.
Synthesising data in this way is increasingly common in finance. Horvath, in another project, carried out tests on machine learning deep hedging engines, for example, by training a model on synthetic data and comparing its output against a conventional hedging approach.
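To give a loose sense of what a synthetic scenario generator involves – this is a minimal, hypothetical sketch, not the SyBenDaFin methodology, and the function name and parameters below are illustrative only – one might start from simulated price paths under a textbook model such as geometric Brownian motion and build the scenario set outwards from there.

```python
import numpy as np

def simulate_gbm_paths(s0=100.0, mu=0.05, sigma=0.2,
                       n_paths=10_000, n_steps=252, dt=1 / 252, seed=42):
    """Generate synthetic price paths under geometric Brownian motion.

    A benchmark-quality generator would need far richer dynamics
    (jumps, stochastic volatility, regime shifts) to cover market
    scenarios that have never occurred historically.
    """
    rng = np.random.default_rng(seed)
    # i.i.d. normal shocks driving the log-price process
    z = rng.standard_normal((n_paths, n_steps))
    log_returns = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    # cumulate log-returns and prepend the starting value
    log_paths = np.cumsum(log_returns, axis=1)
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

# Two candidate hedging models trained and evaluated on the same
# synthetic paths become directly comparable.
paths = simulate_gbm_paths()
print(paths.shape)  # (10000, 253)
```

The value of a shared benchmark is that every model runs against the same paths; the hard part, and the focus of projects like SyBenDaFin, is choosing dynamics rich enough for the comparison to mean something.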
Quants say it would be too complex to formulate a universal dataset comparable to ImageNet for all types of finance models.
The market patterns that would test a model that rebalances every few seconds, for example, would be different from events that would challenge a model trading on a monthly horizon.
Instead, the idea would be to create multiple sets of data, each designed to test models created for a specific use.
Benchmarks could help practitioners grasp the strengths and weaknesses of models, and whether changes to a model actually deliver improvements.
Regulators, too, stand to benefit. Potentially, they could train a model on the gold standard data and see how well it performs versus the same model trained on a firm’s in-house data.
In a paper last year, authors from the Alan Turing Institute and the Universities of Edinburgh and Oxford said the industry today had little understanding of how appropriate or optimal different machine learning methods were in different cases. A “clear opportunity” exists for finance to use synthetic data generators in benchmarking, they wrote.
“Firms are increasingly relying on black-box algorithms and methods,” says Sam Cohen, one of the authors and an associate professor with the Mathematical Institute at the University of Oxford and the Alan Turing Institute. “This is one way of verifying our understanding of what they are actually going to do.”