Artificial intelligence and the future of financial regulation

Software has taken over from humans in trading – and will spread further in the years to come. Regulators will need to prepare for a faster, darker industry

At some point in the last few years, the human stock trader became an endangered species. Most trades on the world's equity markets are now conducted by machines – algorithmic trading systems. And research in the areas of machine learning, big data and artificial intelligence promises to change the financial world still more fundamentally in the near future – bringing both new benefits and new challenges for operational risk teams and compliance managers.

Regulatory attention so far in this area has focused mainly on the risks surrounding high-frequency trading (HFT). These fall into three categories: first, the danger that HFT could increase market volatility, exacerbate liquidity crises, or have other undesirable systemic effects. Second, the unfairness created by allowing investors with the resources to afford HFT to outperform smaller rivals. Third, the potential for its misuse in market manipulation. Proposed regulatory responses have included enforced delays on trades to negate the advantage of using HFT software, requirements for 'kill switches' to shut off trading once prices move more than a certain amount, and financial transaction taxes to reduce the volume of trades which HFT algorithms produce.
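As a rough illustration, the kind of price-band kill switch described above can be sketched in a few lines. The 5% band and the reference-price logic here are assumptions for the example, not any exchange's actual rules.

```python
# Illustrative sketch only: a price-band 'kill switch' of the kind described
# above. The 5% band and reference-price logic are assumptions for the
# example, not any exchange's actual rules.

class KillSwitch:
    def __init__(self, reference_price: float, max_move: float = 0.05):
        self.reference_price = reference_price
        self.max_move = max_move          # maximum allowed fractional move
        self.trading_halted = False

    def on_trade(self, price: float) -> bool:
        """Return True if trading may continue, False if it has been halted."""
        move = abs(price - self.reference_price) / self.reference_price
        if move > self.max_move:
            self.trading_halted = True    # stop the algorithm sending orders
        return not self.trading_halted


switch = KillSwitch(reference_price=100.0)
for px in (100.2, 99.8, 94.0, 101.0):     # 94.0 breaches the 5% band
    print(px, "allowed" if switch.on_trade(px) else "halted")
```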

The US Financial Industry Regulatory Authority's chief information officer, Steve Randich, blamed HFT software for the 2010 'flash crash', a rapid drop and recovery in the US Dow Jones Industrial Average, and for the $440 million loss at Knight Capital in 2012 – though other regulators, such as the Australian Securities and Investments Commission, are less worried about the possible risks, at least in the area of market manipulation.

In the blink of an eye

Volatility and price behaviour have certainly changed. Looking at price and trading data from various exchanges, University of Miami researcher Neil Johnson described "an abrupt transition to a new all-machine phase", in which the growth of HFT software had produced a rising number of extremely fast spikes and crashes happening on a scale of milliseconds, far faster than any human trader – or overseer – could react. "Our findings are consistent with an emerging ecology of competitive machines featuring 'crowds' of predatory algorithms," Johnson concluded, though he added that the link between ultra-fast price movements and systemic instability was still far from proven.
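To make the scale concrete, the sketch below shows one crude way to flag such sub-second moves in tick data. The 100-millisecond window and 0.5% threshold are illustrative assumptions, not the definitions used in Johnson's research.

```python
# Crude illustration of flagging ultra-fast price moves in tick data.
# The 0.5% threshold and 100 ms window are assumptions for the example,
# not the definitions used in Johnson's research.

def find_fast_moves(ticks, window_ms=100, threshold=0.005):
    """ticks: list of (timestamp_ms, price); returns (start, end) index pairs."""
    events = []
    start = 0
    for end in range(len(ticks)):
        # slide the window so it spans at most window_ms
        while ticks[end][0] - ticks[start][0] > window_ms:
            start += 1
        lo = min(p for _, p in ticks[start:end + 1])
        hi = max(p for _, p in ticks[start:end + 1])
        if (hi - lo) / lo > threshold:
            events.append((start, end))
    return events


ticks = [(0, 100.0), (20, 100.1), (40, 99.2), (60, 98.9), (200, 100.0)]
print(find_fast_moves(ticks))   # flags the fast slide inside the first 60 ms
```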

In a speech in June this year, Martin Wheatley, chief executive of the UK Financial Conduct Authority, listed the key risks of HFT as "market fairness, market cleanliness and market resilience" – singling out the European Markets in Financial Instruments Directive (Mifid) for including "significant strengthening of algo testing prior to deployment between firms and venues". He added: "There'll also be more focus on systems and controls in firms, with the objective of making sure they understand and take responsibility for the risk they import to the market. On top of this, venues and firms will be required to have circuit breakers, or 'kill switches', to stop runaway algos. And, to mitigate the risk that HFT's high messaging rate overloads the system, European regulators will impose order-to-trade ratios and minimum tick sizes, helping to control noise created by ephemeral orders."

More importantly, though, Wheatley warned that "perhaps closer than we think, learning algorithms and self-improving artificial intelligence [will be] the prime decision-makers in electronic markets." The advantages of machine learning in trading systems – algorithms that can modify their own weightings and strategies as they go, in response to changing market conditions – are obvious, but the approach also raises new challenges. At present, observers point out, regulators are intent on enforcing a rigorous model approval process, requiring sign-off in some cases up to board level, which will no longer be practical with an algorithm that changes on a scale of seconds or milliseconds.
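A toy sketch of what such self-adjustment might look like is shown below: a multiplicative-weights update that shifts capital between candidate strategies according to recent profit and loss. The strategies, learning rate and returns are invented for the example; no particular firm's system is implied.

```python
# Toy illustration of an algorithm adjusting its own strategy weights as it
# trades (a multiplicative-weights update). The strategies, learning rate and
# returns are invented for the example.

import random

strategies = {"momentum": 1.0, "mean_reversion": 1.0, "market_making": 1.0}
eta = 0.1   # learning rate: how aggressively weights respond to recent P&L

def rebalance(recent_pnl):
    """recent_pnl: dict strategy -> P&L over the last interval."""
    for name, pnl in recent_pnl.items():
        strategies[name] *= (1 + eta * pnl)      # reward what worked
    total = sum(strategies.values())
    for name in strategies:
        strategies[name] /= total                # renormalise to allocations

for _ in range(100):   # each loop could be milliseconds in a live system
    rebalance({name: random.uniform(-1, 1) for name in strategies})

print(strategies)      # allocations drift with no human sign-off in the loop
```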

"The problem is quality control weaknesses," says Erozan Kurtas, assistant director of the quantitative analytics unit within the US Securities and Exchange Commission's Office of Compliance Inspections and Examinations (OCIE). "Some banks have good quality control and try to validate all models independently. Some companies don't even have source control or version control systems, or the people who write the algos test them and sign them off – which is a no-no because someone else should test and validate the algorithm."

Another regulator comments: "We wouldn't necessarily rule out [machine learning algorithms] as a class. The controls would likely have to be more robust than for other algorithms because of the risk that the automated adjustments could be made without human intervention and quickly spin out of control in a worst-case scenario. This would include supervision over the testing, implementation, modification, and continuing assessment of their performance to ensure they meet our supervisory requirements and don't result in trading activity that is prohibited."

Blame the machine

This raises the question of what in military terms is called the 'man in the loop' – how does legal responsibility shift as an automatic system becomes gradually more autonomous? One of the first discussions of this issue was published over half a century ago, by the UK lawyer and humourist AP Herbert, who in 1963 described the (fictional) case of a man whose bank, due to a computer error caused by a power failure, falsely reported that he was massively overdrawn. The man sued the bank, the computer manufacturer, the electricity supplier and the computer itself for libel.

After a closely argued trial in which the computer appeared in its own defence, the judge ruled that computers, under English law, are equivalent to tigers or other dangerous animals, and if you have one on your premises then you are liable for any damage it does if it breaks loose. The human owner or host, in other words, retains final legal responsibility.

Herbert was joking, but he managed to predict fairly accurately the state of the law regarding autonomous artificial intelligence – and his article is still cited by many other authors discussing the issue of responsibility as it relates to military and civilian unmanned aircraft, self-driving cars and similar systems.

Glenn Peters, the practice manager for risk analytics software solutions at IBM, points out that the same attitude is likely to apply in financial compliance: "If you are a group of legal advisers and a machine is making decisions about the applicability of regulatory changes against your policies, how do you tell the regulators that the machine decided the impact – where does the human factor come in? The regulators aren't going to hold a machine accountable, they are going to hold a person accountable – and the question of how much review and oversight you provide is still uncharted territory to some degree."

Regulators agree. Andrew Bowden, director of the OCIE, says: "We ask our teams to keep it simple and ask, what is this algo built to do? What edge is it designed to obtain, and is it legal or not? And the human designers will be culpable for that. The next question is, if you didn't have an illegal purpose, did you behave recklessly by unleashing something without proper testing or controls?"

Planning for algorithms going wrong will become as important as testing them in advance; Knight Capital, which suffered a $440 million loss from faulty trading software, was unprepared for the crisis. As with any other business continuity event, observers argue, companies should rehearse and prepare for a trading algorithm running amok, and make every effort to speed up their reactions in an emergency.

This will be especially the case as algorithm behaviour becomes less predictable. Widespread use of machine learning algorithms exacerbates the problem of a 'machine ecology' producing unexpected emergent behaviour. Emergent behaviour describes the properties of a system that cannot be predicted from studying its parts in isolation – the micro-crashes observed by Johnson could be one example; the co-operation of termites to construct a nest would be another.

Emergent intelligence: swarms of simple robots (or algorithms) can exhibit unexpectedly complex behaviour

With unpredictable emergent behaviour set to appear in rapidly evolving machine ecologies on a scale of milliseconds, regulators may soon have to automate their own market oversight responsibilities to a much greater degree. 'Circuit breakers' which shut down trading after prices move beyond a preset level are a very simple form of automated market stability oversight; more advanced types could see trading activity restricted by automated systems which can detect suspicious or destabilising activity and react in the same millisecond span as the trading systems, rather than the minutes or hours that a human would need.
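The sketch below gives a flavour of what such automated oversight might involve: a streaming monitor that restricts a participant whose order-cancellation rate spikes within a one-second window. The 90% cancel threshold and the window length are illustrative assumptions, not any regulator's actual parameters.

```python
# Sketch of the more advanced, automated oversight described above: a
# streaming monitor that restricts a participant whose cancel rate spikes.
# The 90% cancel threshold and 1-second window are illustrative assumptions.

from collections import deque
import time

class ParticipantMonitor:
    def __init__(self, window_s=1.0, max_cancel_ratio=0.9, min_messages=100):
        self.events = deque()                 # (timestamp, kind) pairs
        self.window_s = window_s
        self.max_cancel_ratio = max_cancel_ratio
        self.min_messages = min_messages
        self.restricted = False

    def on_message(self, kind, now=None):
        """kind is 'order', 'cancel' or 'trade'; returns True if restricted."""
        now = now if now is not None else time.time()
        self.events.append((now, kind))
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()             # drop events outside the window
        cancels = sum(1 for _, k in self.events if k == "cancel")
        if len(self.events) >= self.min_messages and \
                cancels / len(self.events) > self.max_cancel_ratio:
            self.restricted = True            # automated, millisecond response
        return self.restricted


monitor = ParticipantMonitor(min_messages=10)
t = 0.0
for i in range(50):
    t += 0.001                                # messages arriving every millisecond
    kind = "cancel" if i % 20 else "order"    # almost everything is cancelled
    if monitor.on_message(kind, now=t):
        print(f"restricted after message {i}")
        break
```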

Computers that sin

Other forms of conduct risk could continue to be an issue even in businesses dominated by software rather than humans. While the collusion between traders at different banks involved in the rate-rigging scandal stemmed from very human motives, a machine trader with the goal of optimising profitability for its desk could collude just as successfully with its peers, given a non-zero-sum game such as a round of rate-setting.

Laboratory studies have already shown that even very simple machine-learning systems can develop ways to communicate and collaborate on their own. In one, reported by Brussels-based academic Christos Ampatzis in 2006, robots given the task of learning to navigate a simple maze rapidly began to communicate by sound in order to help each other find the exit, despite not being programmed to do so. All the robots required was the physical hardware – microphones and loudspeakers – to allow communication, and they would rapidly develop their own ways of using it to co-operate for mutual benefit.

Any system with the ability to read others' bids and offers and place its own would, by definition, be able to communicate with other market players, human or machine. Evolved algorithms have spontaneously developed signalling strategies as a way to work together for mutual benefit even in the very simple environment of a prisoner's dilemma game, where the communication is limited to a single element – "co-operate" or "defect" – per turn.
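The dynamic is easy to reproduce. The sketch below plays an iterated prisoner's dilemma in which the only 'signal' available to each side is its previous move; a strategy that simply reciprocates sustains co-operation, while co-operation collapses against an unconditional defector. The payoffs are the standard textbook values and the strategies are illustrative, not those used in the research cited above.

```python
# Minimal iterated prisoner's dilemma, where the only 'signal' each side can
# send is its previous move. Payoffs are the standard textbook values; the
# strategies are illustrative, not drawn from the research cited above.

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return "C" if not history else history[-1]   # echo opponent's last move

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        # each strategy reads only the other side's public history of moves
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # (300, 300): sustained co-operation
print(play(tit_for_tat, always_defect))     # co-operation collapses after round one
```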

Could collusion in a market be organised between traders with no other way of communicating than reading each other's public bids? Not only could it be, it has been. Studies (for example, by US academics Peter Cramton of the University of Maryland and Jesse Schwartz of Vanderbilt University) of possible collusion in electromagnetic spectrum auctions by the US Federal Communications Commission in 1994-1998 found that competing bidders, without needing to communicate outside the auction arena, were still able to use several techniques to indicate their preferences for various parts of the spectrum on offer, including tactical withdrawals from bidding in order to punish rivals for competing for favoured offers, and even lodging hidden messages encoded in the last three digits of a bid amount.
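The digit trick, at least, is trivial to implement. The sketch below shows how a bidder could embed a three-digit code in the tail of a bid and how a rival could read it back off the public record; the mapping of codes to lots is invented for the example and is not the scheme alleged in the FCC auctions.

```python
# Illustration of the signalling trick described above: embedding a code in
# the last three digits of a bid. The mapping of codes to lot numbers is
# invented for the example; it is not the scheme alleged in the FCC auctions.

def encode_bid(base_amount: int, lot_code: int) -> int:
    """Round the bid to thousands, then write a 3-digit code into the tail."""
    assert 0 <= lot_code < 1000
    return (base_amount // 1000) * 1000 + lot_code

def decode_bid(bid: int) -> int:
    return bid % 1000        # rivals read the code straight off the public bid

bid = encode_bid(4_720_000, 378)   # 'I want lot 378'
print(bid, decode_bid(bid))        # 4720378 378
```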

The bidders in this case were human rather than machine, of course – but in their decisions to co-operate, defect and punish defection, they acted very similarly to computer algorithms playing prisoner's dilemma, and there are no obvious barriers to a sophisticated algorithm developing the same techniques in the noisier environment of an electronic market. And regulators would have an even harder time tracking it down; unlike humans rigging rates such as Libor, colluding machines are not going to send each other chatty emails asking for the day's fix to be raised or lowered. The only communication would consist of bids and offers, and determining whether a set of algorithms "intended" to rig a market would be not only technically tricky but philosophically uncertain.

Other forms of conduct risk, however, could be reduced as artificial intelligence becomes more capable and more widespread in the industry. Craig Spielmann, head of operational risk for RBS Americas, points out that rogue traders in particular tend to commit their crimes for very human reasons. "Think about the stupid things that I could get involved in – they will be because of personal reasons, because I am worried about losing my bonus so I am doubling down on a trade. A computer doesn't have personal concerns, it won't make emotional decisions. Unauthorised trading is based on emotion – it's guys trying to cover stuff up."

Craig Spielmann, RBS Americas

This kind of loss aversion is only one of many cognitive biases affecting risk management, particularly in the area of operational risk modelling and management, where estimates are dominated both mathematically and psychologically by small numbers of low-frequency, high-impact events – exactly the area of the probability/cost chart where human brains are least able to plan effectively.
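A toy simulation makes the point: when a rare, severe loss is possible, it can dominate the total even though it almost never happens. The frequencies and loss sizes below are invented purely for illustration.

```python
# Toy simulation of why low-frequency, high-impact events dominate operational
# loss estimates. The frequencies and loss sizes are invented for illustration.

import random

random.seed(1)

def one_year_loss():
    # many routine small losses...
    small = sum(random.expovariate(1 / 10_000) for _ in range(200))
    # ...plus a roughly 1-in-50 chance of a single catastrophic event
    big = random.expovariate(1 / 500_000_000) if random.random() < 0.02 else 0.0
    return small, big

years = [one_year_loss() for _ in range(10_000)]
small_total = sum(s for s, _ in years)
big_total = sum(b for _, b in years)
print(f"share of total loss from rare events: {big_total / (small_total + big_total):.0%}")
```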

Another issue revolves around regulations such as the Volcker rule, which aims to permit hedging activity but prohibit proprietary trading. Working out how to follow it has puzzled bank compliance managers since its introduction; even now that guidance has been issued, the detailed requirements are still unclear. "You tell me what the definition of proprietary trading is, and then I'll know how to comply with it," says Spielmann. The move towards electronic trading of swaps has opened the way for automatic hedging using similar algorithms, potentially raising the same concerns with regard to market-making.

20 seconds to comply

But developments in artificial intelligence might make compliance managers' jobs easier in two important ways. Expert systems such as IBM's Watson gained prominence by beating human contestants in the television quiz show Jeopardy, but their ability to assimilate and correlate vast sets of natural-language information was developed for more serious purposes – principally 'decision support' for human doctors, lawyers and so on. The growth in power and ability of these systems promises to improve compliance management as well.

Further off, compliance managers could be not only supplemented but replaced (in some areas) by software. Pushes for common semantic standards – such as the use of a common set of legal entity identifiers (LEIs) – are justified as making oversight easier for regulators and reporting easier for the companies they supervise. But similar efforts in other areas have been explicitly aimed at automating the task of compliance. Hewlett-Packard's Cloud and Security Lab has been working on automating compliance with privacy rules and regulations for several years. A 2011 survey by HP scientists covered four areas: processing legal and regulatory documents written in natural language, extracting knowledge from them, representing the result in semantic form, and automating compliance using the result.
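The last two of those steps are the most mechanical, and a simplified sketch is easy to give: once a rule has been extracted and represented in structured form, checking an inventory of policies against it can be automated. The rule, fields and policies below are invented for illustration; they are not HP's actual representation.

```python
# Sketch of the final two steps described above: once a rule has been
# extracted from natural language and represented in structured form,
# checking policies against it can be automated. The rule, fields and
# policies are invented for illustration.

# Hypothetical semantic representation of an extracted privacy rule
RULE = {
    "id": "retention-limit",
    "applies_to": "customer_data",
    "max_retention_days": 365,
    "requires_encryption": True,
}

def check_policy(policy: dict, rule: dict) -> list:
    """Return a list of human-readable findings for one policy."""
    findings = []
    if policy["data_class"] != rule["applies_to"]:
        return findings                       # rule not applicable
    if policy["retention_days"] > rule["max_retention_days"]:
        findings.append(f"{policy['name']}: retention exceeds {rule['max_retention_days']} days")
    if rule["requires_encryption"] and not policy["encrypted_at_rest"]:
        findings.append(f"{policy['name']}: data not encrypted at rest")
    return findings

policies = [
    {"name": "CRM archive", "data_class": "customer_data",
     "retention_days": 730, "encrypted_at_rest": False},
    {"name": "Web logs", "data_class": "telemetry",
     "retention_days": 30, "encrypted_at_rest": True},
]

for p in policies:
    for finding in check_policy(p, RULE):
        print(finding)
```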

The same is being tried in pilot form for other areas of financial compliance. IBM's Peters explains: "Some of the cognitive computing environments are able to take a look at the current state of regulations and what changes are applied, and use that to determine what impacts it will have on operations. And if there is an inventory of procedures that an organisation involves, they can put those up against the regulations and say 'which of these will still apply, which will need more controls or changes to make sure it meets the regulations'."
