
This article was paid for by a contributing third party.

Elevating financial crime compliance and data management through AI

Today, artificial intelligence (AI), process automation and strategic data management can effectively combat financial crime. However, this power lies in the hands of both good and bad actors.

In a webinar convened by Risk.net and Appian, experts delved into the pivotal role of technology in enhancing compliance monitoring for financial institutions. They discussed how advanced AI applications and automation can streamline know your customer (KYC), anti-money laundering (AML) investigations and fraud detection. They also explored why improving data management practices and taking a comprehensive approach to mitigating risk are essential.
 

The panel

  • Guy Mettrick, Industry vice-president, financial services, Appian
  • Adrian Harvey, Partner, data analytics, KPMG
  • Jay Krish, Head of data governance for financial crimes compliance, State Street
  • Moderator: Philip Harding, Commercial editor, Risk.net


Identifying and combatting financial crime, such as fraud and money laundering, is a colossal and complex task for banks and other capital markets firms. Failure to implement effective controls and robust compliance programmes can leave firms exposed to multiple risks, as well as to hefty regulatory fines and sanctions.

The regularity with which financial crime events occur, as well as the variety of activities, suggests that banks have work to do and cannot afford to stand still in the face of this threat. This webinar discussion assessed the changing shape of financial crime, how firms are adapting their strategies and the potential of new technologies to drive a step change in efficiency and effectiveness. This article presents the key takeaways from the webinar.
 

Technology catalysing a new landscape

Financial crime and fraud present huge challenges today, but technology provides the opportunity to help manage the complexity of organisations and deliver far better outcomes – more quickly and more efficiently – across KYC, AML and fraud.

In the past few months, for example, a large Canadian bank set aside $450 million against investigations by Canadian and US regulators over failures to detect alleged drug-money laundering, acknowledging that its AML programme was unable to effectively monitor, detect, report and respond to suspicious activity.

Also in the news recently, the US Federal Reserve issued a $186 million fine to another large financial institution for its slow progress in addressing AML deficiencies related to a money laundering scandal, highlighting particular concerns about the bank’s risk and data management.

The rise of AI and automation in crime is becoming increasingly relevant to our everyday lives. This technology allows bad actors – even those with minimal hacking skills – to purchase, at low cost, software capable of infiltrating devices. This poses a significant threat, especially when considering the vast number of digital channels connected through the Internet of Things, a situation that is becoming increasingly difficult to manage.

Moreover, the use of AI is growing, with generative AI (GenAI) being used to create deepfakes and exploit voice-based systems to bypass established security protocols, gaining access to sensitive data. Fraudsters are also leveraging AI to craft more convincing emails and develop software that mimics legitimate banking systems. The advancements in technology over recent years have only exacerbated these risks.

In addition, the proliferation of cryptocurrencies and digital assets has made it easier for individuals to transfer funds, further complicating the landscape of cyber crime.

“Technology is driving change and there is increasing regulation forcing organisations to adapt. The challenge is that, within those complex financial services organisations, there are a lot of disparate legacy systems and technology that have evolved over the years to perform specific pieces of work only,” said Guy Mettrick, industry vice-president, financial services at Appian.

He added that technology had advanced to the point where AI can be leveraged in a much more productive way across various functions. With the vast amounts of data available in financial services, such as transaction records and entity resolutions, AI can provide valuable insights into the relationships between individuals and organisations. “This insight can be used to generate meaningful alerts, which can then be managed through an effective process to evaluate, assess and make informed decisions.”

Adrian Harvey, partner, data analytics at KPMG, said the opportunity for AI has developed significantly: in particular, firms are now pursuing strategies to store and maintain data more efficiently and to let AI interact with it to uncover hidden risks.

“While the reality is that most firms have the required controls and systems in place to manage financial crime risk, the gaps always exist in the grey areas where either individuals who are within the firms are acting on behalf of criminal organisations to open up gaps, or organisations that are exploiting unknown vulnerabilities,” Harvey added. “It’s very hard for organisations to always be predictive and be at the forefront of that.”
 

AI deployment and potential

The panel emphasised that there are several innovative uses of AI around entity resolution, making connections between data, individuals and organisations across a vast number of different datasets. This enables organisations to identify and address any entity issues that might exist – a problem that is only expected to grow.
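
To make the entity-resolution idea concrete, the short sketch below links records from two hypothetical datasets that appear to describe the same person, using a date-of-birth blocking key plus fuzzy name matching. The field names, sample records and 0.6 similarity threshold are illustrative assumptions, not details drawn from the webinar.

```python
# Minimal entity-resolution sketch: link records across two datasets that
# likely refer to the same person. Field names, sample data and the 0.6
# similarity threshold are illustrative assumptions only.
from difflib import SequenceMatcher

kyc_records = [
    {"id": "K1", "name": "Jonathan A. Smith", "dob": "1980-03-14"},
    {"id": "K2", "name": "Priya Raman", "dob": "1975-11-02"},
]
payment_records = [
    {"id": "P9", "name": "Jon Smith", "dob": "1980-03-14"},
    {"id": "P7", "name": "P. Ramen", "dob": "1975-11-02"},
]

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

matches = []
for kyc in kyc_records:
    for pay in payment_records:
        # Blocking key: only compare records that share a date of birth,
        # then confirm with a fuzzy name score (loose, because the key is strict).
        if kyc["dob"] == pay["dob"] and name_similarity(kyc["name"], pay["name"]) >= 0.6:
            matches.append((kyc["id"], pay["id"]))

print(matches)  # pairs of record ids resolved to the same underlying entity
```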

“Being able to detect patterns in transaction data, multiple payments processes, being able to learn from that and then get better at it is key. We see a lot of machine learning technology applied to things like screening capabilities, to make better connections between names and events taking place or sanctions lists,” said Mettrick.
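
One simple example of the transaction-pattern point is a structuring check: flagging accounts that make several payments just below a reporting threshold within a short window. The threshold, window and sample transactions in the sketch below are assumptions for illustration; production monitoring would combine many such signals with learned models.

```python
# Minimal sketch of one transaction pattern check: repeated payments just
# under a reporting threshold within a short window ("structuring").
# The 10,000 threshold, 0.9 factor, 3-day window and sample data are
# illustrative assumptions only.
from datetime import date, timedelta

THRESHOLD = 10_000
NEAR_FACTOR = 0.9          # payments above 90% of the threshold count as "near"
WINDOW = timedelta(days=3)
MIN_HITS = 3               # how many near-threshold payments trigger an alert

transactions = [
    ("ACC-1", date(2024, 5, 1), 9_600),
    ("ACC-1", date(2024, 5, 2), 9_800),
    ("ACC-1", date(2024, 5, 3), 9_500),
    ("ACC-2", date(2024, 5, 1), 4_200),
]

def structuring_alerts(txns):
    near = [(acct, day) for acct, day, amount in txns
            if NEAR_FACTOR * THRESHOLD <= amount < THRESHOLD]
    alerts = set()
    for acct, day in near:
        # Count near-threshold payments for this account inside the window
        hits = sum(1 for a, d in near if a == acct and abs(d - day) <= WINDOW)
        if hits >= MIN_HITS:
            alerts.add(acct)
    return alerts

print(structuring_alerts(transactions))  # {'ACC-1'}
```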

There is now a growing use of GenAI to create highly realistic videos and documents that could potentially be used for malicious purposes. Detecting such fabricated content with similar AI technology is an imminent development.

Many retail biometric authentication systems rely on moving images to verify identity during payment authorisation. These systems will need to rapidly adapt to an era of automated video generation, prompting significant advancements in AI for this purpose.

The future will likely involve considerable efforts to counteract the innovative use of AI by individuals attempting to commit financial crimes, the panellists said. This will result in a perpetual arms race, where new technologies are continually developed and then countered by equally advanced measures.

“What we notice in the industry is that teams integrate the human component to guide and manage AI in recognising the patterns that can generate relevant alerts. At the same time, humans are tasked with assessing and categorising these alerts as they arise. Although this creates a productive feedback loop that enhances the underlying models further, it’s crucial to investigate ways to optimise the dispositioning process and detect false positives without needing human involvement,” said Jay Krish, head of data governance for financial crimes compliance at State Street.
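
One way to picture that feedback loop, assuming a simple supervised scoring model, is sketched below: analysts’ dispositions on reviewed alerts retrain the model that scores the next batch, so false positives gradually score lower. The features, sample data and choice of logistic regression are illustrative assumptions rather than anything described by the panel.

```python
# Minimal sketch of a human-in-the-loop feedback cycle for alert dispositioning.
# Feature names, alert data and the model choice are illustrative assumptions;
# a real system would use far richer features and formal model governance.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy alert features: [amount_zscore, is_new_counterparty, high_risk_country]
alerts = np.array([
    [3.2, 1, 1],
    [0.4, 0, 0],
    [2.8, 1, 0],
    [0.1, 0, 1],
])
# Analyst dispositions recorded after review: 1 = suspicious, 0 = false positive
dispositions = np.array([1, 0, 1, 0])

# Each review cycle, the recorded dispositions retrain the scoring model,
# so the next batch of alerts is ranked with the analysts' feedback built in.
model = LogisticRegression().fit(alerts, dispositions)

new_alerts = np.array([[2.9, 1, 1], [0.2, 0, 0]])
for features, score in zip(new_alerts, model.predict_proba(new_alerts)[:, 1]):
    print(f"alert {features} -> suspicion score {score:.2f}")
```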


AI risks and future evolution

“Quite often, one of the big pitfalls is that firms end up doing the wrong thing, but more efficiently or with slightly nicer-looking technology. The reason for that is not because they’ve implemented the technology badly. It’s because the actual requirements, and the understanding, of what they need to do are hidden within the organisation,” explained Harvey.

Panellists said firms must deploy technology that will adapt, grow and change to meet their business, risk and regulatory needs.

“One of the main risks linked to AI, as with any emerging technology, lies in the data component," said Krish. “As an industry, we need to address several challenges, including data quality, data lineage, data misuse, insufficient data protection and unauthorised access. Furthermore, factors such as data drift and concept drift should be incorporated into the continuous monitoring process, which can aid in establishing a robust model governance framework.”
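
A common way to operationalise that kind of drift monitoring is the population stability index (PSI), which compares a feature’s distribution at training time with its live distribution. The sketch below is a minimal, assumed implementation; the bin count, synthetic data and the conventional 0.2 alert threshold are illustrative, not the panel’s prescription.

```python
# Minimal sketch of a population stability index (PSI) check for data drift.
# The bin count, synthetic data and 0.2 alert threshold are illustrative
# assumptions, not a recommended governance setting.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's training-time distribution with its live distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking the log
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time population
current = rng.normal(loc=0.5, scale=1.2, size=5_000)   # shifted live population

value = psi(baseline, current)
print(f"PSI = {value:.3f}", "-> investigate drift" if value > 0.2 else "-> stable")
```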

The future evolution of AI, according to the panel, will be driven by two primary factors. First, regulation: definitions around financial crime are expanding, creating new requirements for organisations to account for these changes. Second, innovation from bad actors: they are leading the way in money-laundering tactics, using readily available new technologies. Organisations need to improve their ability to react, capture and understand this evolving environment, implementing processes and capabilities to detect and stop these activities.

While innovation can help us get ahead of the curve, history shows that bad actors, with vast amounts of money on the dark web, always find new ways to exploit weak spots. Fixing one area often reveals vulnerabilities in another.

Continuous adaptability is crucial, with technology playing a key role in detection, process improvement and operational efficiency for financial services firms. Cost-effective and efficient management is more important than ever. Organisations can’t just throw more people at the problem – costs are much higher now than they were 10 or 15 years ago. It’s about making the best use of available tools to be as effective and efficient as possible in these functions.

Panellists acknowledged that there are efficiency improvements that firms have achieved from introducing AI. “I’ve seen organisations who have fully digitised their data capture approach. Where they’ve managed to isolate those data requirements, they’ve automated 70–80% of that data flow and data management process,” said Harvey.
 

In summary

As emphasised by the panel, advanced technology is as much in the hands of good actors as it is in bad. Constant innovation is key.

AI will play a vital role in the future, but it relies on the existing data within an organisation to deliver valuable insights.

Additionally, combining AI with process automation is essential to transform these insights into meaningful, value-added actions. Having all these elements in place, within a flexible platform that allows for adaptation and change, positions organisations well to meet current needs and future requirements.

What is certain is that failing to align systems to be ready for AI and next-generation technology is sure to cost organisations their competitive edge.
 

 

The panellists were speaking in a personal capacity. The views expressed by the panel do not necessarily reflect or represent the views of their respective institutions.

The panellists’ comments are based on the context of this discussion and should not be taken as formal advice or guidance outside of this setting. Participation in this panel did not imply endorsement or recommendation of any specific company, product or service mentioned. The information provided is intended for educational and informational purposes only, and should not be considered as professional advice. Any statements made during or by the panel should not be attributed without explicit written consent.
