Why machine learning quants need ‘golden’ datasets
An absence of shared datasets is holding back the development of ML models in finance
Today’s computers can tell the difference between all manner of everyday things – cats and dogs, fire hydrants and traffic lights – because people have painstakingly catalogued 14 million such images by hand for the computers to learn from. Quants think finance needs something similar.
The labelled pictures used to train and test image recognition algorithms sit in a publicly available database called ImageNet. It’s been critical in making those algos better. Developers are able to benchmark their progress by their success rate in categorising ImageNet pictures correctly.
Without ImageNet, it would be far tougher to tell whether one model was beating another.
Finance is no different. Like all machine learning models, those used in investing or hedging reflect the data they have learnt from. So comparing models that have been trained on different data can tell quants a lot about the data, but far less about the models themselves.
Measuring a firm’s machine learning model against other known models in the industry, or even against different models from the same organisation, becomes all but impossible.
The idea, then, is to create shared datasets that quants could use to weigh models against one another. In finance, though, this is a more complex task than collecting and labelling pictures.
For one, banks and investing firms are reluctant to share proprietary data – sometimes due to privacy concerns, often because the data has too much commercial value. Such reticence can make collecting raw information for benchmark datasets a challenge from the start.
Secondly, the new “golden” datasets would need masses of data covering all market scenarios – including scenarios that have never actually occurred in history.
This is a well-known problem affecting machine learning models that are trained on historical data. In financial markets the future seldom looks like the past.
“If the dataset you train your model on resembles the data or scenarios it encounters in real life, you’re in business,” says Blanka Horvath, professor of mathematical finance at the Technical University of Munich. “If it’s significantly different, you don’t know what the model is going to do.”
The solution to both problems, quants think, could be to create some of the benchmark data themselves.
Horvath, with a team at TUM’s Data Science Institute, has launched a project called SyBenDaFin – synthetic benchmark datasets for finance – to do just that.
The plan is to formulate gold standard datasets that reflect what happened in markets in the past but also what could have happened, even if it didn’t.
Synthesising data in this way is increasingly common in finance. Horvath, in another project, carried out tests on machine learning deep hedging engines, for example, by training a model on synthetic data and comparing its output against a conventional hedging approach.
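The article does not describe the generators SyBenDaFin or the deep hedging tests actually use, but the basic pattern – simulate synthetic market scenarios, then measure how a hedging strategy performs on them – can be illustrated with a deliberately simple sketch. Below, geometric Brownian motion stands in for a realistic market generator, and a Black-Scholes delta hedge stands in for the model under test; every parameter value and function here is an assumption for illustration only.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, vectorised via math.erf."""
    return 0.5 * (1.0 + np.vectorize(erf)(x / sqrt(2.0)))

def simulate_gbm(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Generate synthetic price paths under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(increments, axis=1)
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

def bs_delta(s, k, sigma, tau):
    """Black-Scholes call delta with zero rates, vectorised over paths."""
    tau = max(tau, 1e-12)
    d1 = (np.log(s / k) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return norm_cdf(d1)

def delta_hedge_pnl(paths, k, sigma, T):
    """Terminal P&L of a short call, delta-hedged along each synthetic path.

    Premium received is omitted: it shifts every path's P&L by the same
    constant, so the spread of outcomes is unaffected.
    """
    n_paths, n = paths.shape
    n_steps = n - 1
    dt = T / n_steps
    cash = np.zeros(n_paths)
    delta_prev = np.zeros(n_paths)
    for i in range(n_steps):
        delta = bs_delta(paths[:, i], k, sigma, T - i * dt)
        cash -= (delta - delta_prev) * paths[:, i]  # cost of rebalancing stock
        delta_prev = delta
    payoff = np.maximum(paths[:, -1] - k, 0.0)
    return cash + delta_prev * paths[:, -1] - payoff

paths = simulate_gbm(s0=100, mu=0.05, sigma=0.2, T=0.25, n_steps=60, n_paths=5000)
pnl = delta_hedge_pnl(paths, k=100, sigma=0.2, T=0.25)
unhedged = -np.maximum(paths[:, -1] - 100, 0.0)
print(f"hedged P&L std:   {pnl.std():.3f}")
print(f"unhedged P&L std: {unhedged.std():.3f}")
```

The point of running the comparison on simulated rather than historical paths is that the experimenter controls the scenarios – including ones that never occurred – and can regenerate as many as needed.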
Quants say it would be too complex to formulate a universal dataset comparable to ImageNet for all types of finance models.
The market patterns that would test a model that rebalances every few seconds, for example, would be different from events that would challenge a model trading on a monthly horizon.
Instead, the idea would be to create multiple sets of data, each designed to test models created for a specific use.
Benchmarks could help practitioners grasp the strengths and weaknesses of models, and tell them whether changes to a model bring genuine improvement.
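What a shared benchmark buys you can be shown in miniature: score two models on the same fixed, seeded synthetic series, so that any difference in error reflects the models rather than the data. The AR(1) generator and both toy models below are illustrative assumptions, not anything described in the article.

```python
import numpy as np

# A fixed, seeded synthetic series stands in for a shared benchmark:
# everyone evaluating against it sees identical data.
rng = np.random.default_rng(42)
n = 2000
returns = np.zeros(n)
for t in range(1, n):
    returns[t] = 0.5 * returns[t - 1] + rng.normal(scale=0.01)

train, test = returns[:1500], returns[1500:]
actual = test[1:]

# Model A: one-lag autoregression, coefficient fitted on the training split.
beta = np.sum(train[1:] * train[:-1]) / np.sum(train[:-1] ** 2)
pred_a = beta * test[:-1]

# Model B: naive constant forecast of the training-sample mean.
pred_b = np.full(len(actual), train.mean())

# Because both models face identical test data, the scores are
# directly comparable.
mse_a = np.mean((pred_a - actual) ** 2)
mse_b = np.mean((pred_b - actual) ** 2)
print(f"model A MSE on shared benchmark: {mse_a:.2e}")
print(f"model B MSE on shared benchmark: {mse_b:.2e}")
```

Had each model been scored on its own private dataset, the lower error could as easily reflect easier data as a better model – which is the comparison problem the quants describe.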
Regulators, too, stand to benefit. Potentially, they could train models using the gold standard data and see how well they perform versus the same model trained on a firm’s in-house data.
In a paper last year, authors from the Alan Turing Institute and the Universities of Edinburgh and Oxford said the industry today had little understanding of how appropriate or optimal different machine learning methods were in different cases. A “clear opportunity” exists for finance to use synthetic data generators in benchmarking, they wrote.
“Firms are increasingly relying on black-box algorithms and methods,” says Sam Cohen, one of the authors and an associate professor with the Mathematical Institute at the University of Oxford and the Alan Turing Institute. “This is one way of verifying our understanding of what they are actually going to do.”