AI ‘lab’ or no, banks triangulate towards a common approach

Survey shows split between firms with and without centralised R&D. In practice, many pursue hybrid path

This is the last in a series of four articles looking at how sell-side front offices are using artificial intelligence. It is based on a survey and interviews with eight market participants. The other articles in the series can be found here.

On the face of it, banks have split into two equal camps when it comes to the development and roll-out of artificial intelligence: half have a centralised ‘lab’ and half do not.

It’s one of the findings of a Risk.net survey of AI use in sell-side front offices – and it seems to paint a clear picture. One camp believes centralisation creates efficiency, eliminates wasteful duplication, allows good ideas to be implemented widely and at speed. The other believes centralisation is overly bureaucratic, stifling creativity and making it hard to be nimble.

Except it’s not that clear at all. A senior markets technologist with a large US bank says the result “looks about right”, but stresses the reality is more nuanced than the survey suggests.

“People call it different things at different places: some firms have AI labs, some have innovation labs or emerging technology labs. And others have chosen to keep it within the business, or within the groups they operate in,” he says. “I don’t think those differences are necessarily to do with the emergence of gen AI – it’s just the organisational constructs that have developed around emerging technology.”

And although the org charts might look very different, many organisations are seeking to capture the purported benefits of both centralisation and decentralisation.

This final article in a four-part series looks at some of these organisational questions. It also explores governance frameworks – specifically, whether they are ‘supportive’ of AI development. It’s a vague term, intended to capture what is likely to be a tricky balancing act. As a rapidly emerging technology fraught with both risks and opportunities, AI presents banks with the conflicting priorities of responsible control and pedal-to-the-metal creativity. Achieving both at the same time isn’t straightforward.

Once again, responses were mixed, with almost half of participants saying it was “too soon” to pass a verdict.

The AI lab question gave respondents two options, and the results were a near-perfect split: roughly half of firms have a central AI R&D team; roughly half do not.

The senior markets technologist explains how these models blur at his own organisation. 

About the survey

Risk.net editors drew up the questions for this survey with input from trading and risk software vendor Murex. The aim was to gather information on how sell-side firms are using AI in the front office, where they are applying it, how it is expected to affect roles, products and competitive standing, and what obstacles these firms are encountering.

There were 90 individual participants, representing 54 organisations. Further demographic detail can be found in the first article in this series.

Murex went on to produce a separate, sponsored article on the results. The firm had no involvement in Risk.net’s own coverage.

“At the enterprise level, there’s a lot of focus on governance and on collecting use cases from across the bank. But then, within each of our core businesses, there’s also a degree of pseudo-centralisation,” he says. “My team works in the markets business: we collect different use cases across different asset classes, we try to help with the roll-out of tools, and we also try to play around with some ideas ourselves and build stuff for the rest of the markets business.”

It’s a similar story at ING. One of the particular challenges with technology change in markets businesses is that they tend to be siloed, with little interaction between the asset classes. Centralisation can backfire, says Stephane Malrait, who led innovation for the bank’s markets business until moving to Etrading Software last month: “Because you end up with a disconnect – people continue with their day-to-day activity and don’t engage with changes that are being pushed their way from outside.”

As a result, the bank has adopted both models. The trading desks have their own resources, which they have used to develop machine learning algorithms for automated market-making. ING’s central function is working on six big use cases, which will be developed at the group level, then rolled out to relevant businesses. “So, [they] have both, in fact,” says Malrait.

A large European dealer is trying to overcome one of the obvious flaws of centralisation – the fact that new ideas and practices may not bubble up effectively.

“You can start things moving by saying ‘AI does this specific thing’ and then develop that specific use case. Alternatively, you can just give people a general-purpose tool – that’s why we’ve given people Copilot. They come back and tell you ‘I’ve done this’, and you say, ‘I didn’t realise that was possible’. And then you find all these hidden pools of creativity and productivity,” says the dealer’s head of digital.

The challenge for central teams is to find a way to encourage, monitor and learn from these dispersed experiments. This European dealer does it by keeping an eye on Copilot usage.

When it comes to data security and privacy, we need to make sure clients are comfortable and are opting in
Senior technologist at a large US bank

“If people aren’t putting prompts into a prompting tool, they’re not getting value out of it. The more prompts they’re submitting, the more they’re doing stuff,” says the dealer’s head of digital. “So, when you see an area doing loads and loads of prompting, you go over there and you find stuff you never thought of,” he adds.
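This kind of monitoring lends itself to very simple telemetry. As a minimal sketch – assuming prompt logs can be exported as per-user, per-desk events, an illustration rather than anything the dealer described – flagging unusually active areas takes only a few lines of Python:

    from collections import Counter

    # Hypothetical prompt log of (user, business area) events. The log
    # format and the 1.5x threshold are assumptions for illustration,
    # not details of the dealer's actual set-up.
    prompt_log = [
        ("alice", "rates"), ("bob", "credit"), ("alice", "rates"),
        ("carol", "fx"), ("carol", "fx"), ("carol", "fx"), ("dan", "fx"),
    ]

    # Count prompts per business area.
    prompts_per_area = Counter(area for _, area in prompt_log)

    # Flag areas whose prompt volume is well above the average – the
    # "loads and loads of prompting" that merits a closer look.
    average = sum(prompts_per_area.values()) / len(prompts_per_area)
    hotspots = [a for a, n in prompts_per_area.items() if n > 1.5 * average]

    print(prompts_per_area)  # Counter({'fx': 4, 'rates': 2, 'credit': 1})
    print(hotspots)          # ['fx']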

In the survey, the 42 respondents that have a central AI R&D team were also asked a follow-up question about that team’s level of involvement in markets applications. Again, there was a notable split: a quarter report regular involvement, a little over half report occasional involvement and the balance say their central team is not involved at all.

These findings hint at some of the drawbacks of centralisation – that it may not reflect or support the priorities of the businesses.

Hobbled optimists

On resourcing, the survey divided respondents into two groups – those who believe the AI opportunities for markets businesses are substantial (67%), and those who believe they are limited (34%) – then divided each of these groups again into those who believe their firm’s current level of resourcing is appropriate and those who don’t.

The biggest group will come as no surprise to anyone who has worked in a large organisation: 43% believe the AI opportunity is substantial and their firm is under-resourced.

The large European dealer’s head of digital has encountered these people. “We run these surveys internally, asking people whether they think we’re investing enough and whether it will be transformational. On the whole, the investment bank believes AI will be massively transformational, but we’re not investing enough. The corporate bank believes it will be quite transformational, but we’re not investing enough,” he says.

But it’s not a universal belief, according to the Risk.net survey. Just under a third of respondents are AI pessimists – they believe the opportunities are limited – and say their firm has recognised this with appropriate resourcing. A quarter are AI optimists – they see substantial opportunity – and also believe they are appropriately resourced.

The smallest group are those who believe opportunities are limited and resourcing is too generous – just two respondents selected this combination.
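Laid out as a grid, the four combinations are easier to see. Below is a minimal tabulation using the approximate shares reported above – the “just under a third” is taken as 32% and the two over-resourced respondents as roughly 2%, so the figures do not sum exactly to 100%:

    # The survey's 2x2 resourcing split, with shares approximated from
    # the figures reported in the article.
    segments = {
        ("substantial", "under-resourced"): 43,
        ("substantial", "appropriately resourced"): 25,
        ("limited", "appropriately resourced"): 32,
        ("limited", "over-resourced"): 2,
    }

    for (opportunity, resourcing), share in segments.items():
        print(f"{opportunity:<12} {resourcing:<24} {share:>3}%")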

Related to resourcing, the survey also asked whether respondents typically buy or build their applications, drawing a split response: a third said they build their own technology where possible, while 26% said they buy where possible. Policy is shifting at many other firms – 20% are moving away from building towards buying, while 15% are moving in the opposite direction.

The question on governance also drew a mixed response. Almost half say it is too soon to say whether their firm’s approach is supportive. The remainder are evenly split between those who feel it is helping them make progress and those who feel it’s holding them back.

Compliance says ‘no’

Regulation is another force that could hold a firm back in its adoption of AI. There are AI-specific rules that might get in the way, or legacy regulation – around model risk, for example – that is difficult to copy across to this new breed of technology.

Somewhat distinct from that is a whole suite of compliance and risk concerns that have less to do with specific, existing rules and more to do with amorphous fears – around intellectual property and copyright, or the use of client data, for example.

The opening question asked whether regulatory or compliance concerns were holding back AI adoption for each respondent’s business. A little more than half said they were not encountering these obstacles. The remainder was split between those who were being held back significantly (21%) and those being held back somewhat (27%).

The mix of these responses is likely to change as the applications being pursued also change. If the survey – and the discussion it has triggered – are correct, then many firms are currently playing around with relatively simple, productivity-focused tools or continuing to develop existing applications of reinforcement learning. They are giving staff access to co-pilots, they are summarising their own research, they are refining quoting and hedging algorithms. These applications are less likely to run into a new regulatory thicket.

But, as applications become more ambitious – and break new ground – regulation and compliance are likely to be more constraining, predicts the large US bank’s senior technologist.

Sell-side firms have to do an even better job of being very precise about what we are allowed to do
Lee Smallwood, Citi

“There are several hurdles to clear before we start using AI to make client-specific trading and pricing decisions – some of those relate to technology, some relate to regulation. When it comes to data security and privacy, we need to make sure clients are comfortable and are opting in. And we need more regulatory clarity there, as well as more mature risk management and controls,” he says.

This focus on client data was also seen in the survey. When asked to identify the types of rules that were holding them back – and given the ability to select multiple options – 77% of respondents selected data privacy laws such as the Gramm-Leach-Bliley Act in the US and the General Data Protection Regulation in the European Union.

The second-most popular choice (64%) was model risk management regulations, which include the nearly 15-year-old US supervisory guidance, SR 11-7. In third place (61%) were general AI laws and standards, including the EU’s AI Act. Copyright laws came in a distant fourth, chosen by 36% of respondents.

The survey’s final question asked whether there are any rule changes or clarifications that would make it easier to adopt AI – a free-text question that few respondents completed in detail. One said they were “most concerned about client confidentiality when using AI” and also noted that regulatory clarification was needed. Another said it would be “helpful to have clear regulatory guidance” on permissible use of the technology, but did not specify further. Two others said no clarifications were needed. And two said there were no real regulatory constraints.

In the absence of black-and-white regulatory constraints, though, some firms may be self-regulating and constraining themselves.

Lee Smallwood, head of markets innovation and investment with Citi, says banks generally appreciate that AI needs to be handled with care – they don’t want to be running ahead of internal controls or external rules and later find they have strayed into areas that are off limits.

“My perception is that actually everyone is very understanding of both the regulatory and the risk and control requirements for using these tools. So, individuals are generally proactive in ensuring they don’t do things with AI that they’re not supposed to do. That’s encouraging. But it also means sell-side firms have to do an even better job of being very precise about what we are allowed to do.”

Editing by Louise Marshall
