Banks must loosen up on ChatGPT use – risk chiefs

Risk Live: ‘Shadow use’ and inability to attract new hires mean restricting access to GPTs is untenable


The hard line taken by many banks towards staff access to popular third-party artificial intelligence tools is showing signs of wavering.

When the latest generation of GPT-based large language models (LLMs) emerged last year, most banks responded by imposing blanket bans on their use by the majority of staff, with data privacy and copyright concerns the most commonly cited reasons.

Senior risk managers are now acknowledging publicly what many long feared in private: simply blocking tools, rather than
