
Responsible AI and the fight against financial crime


Gabriel Hopkins at Ripjar explains how Artificial Intelligence can supercharge the compliance sector

 

A year on from ChatGPT’s launch, the hype surrounding the Artificial Intelligence landscape persists. Recent events – from OpenAI’s changing leadership to the UK’s AI Safety Summit – underscore how rapidly AI technologies are evolving, and AI researchers are rumoured to be on the verge of further breakthroughs within weeks.

 

But what does all the hype mean for the industries that want to benefit from AI but are unsure of the risks?

 

Understanding AI adoption

Some forms of machine learning – what we used to simply call AI – have been around for decades. Since the early 1990s, those tools have been a key operational element of some banking, government, and corporate processes, while being notably absent from others.

 

The uneven adoption has generally been related to risk. For example, AI tools are great for tasks like fraud detection. Algorithms can do things that analysts simply can’t by reviewing vast swathes of data in milliseconds. And that has become the norm, particularly because it is not essential to understand every decision in detail.

 

Other processes have been more resistant to change. Usually, that’s not because an algorithm couldn’t do better, but rather because – in areas such as credit scoring or money laundering detection – the potential for unexpected biases to creep in is unacceptable. That is particularly acute in credit scoring, where a loan or mortgage can be declined due to non-financial characteristics.

 

While the adoption of older AI techniques has been progressing year-on-year, the arrival of Generative AI, epitomised by ChatGPT, has changed everything. The potential of the new models – both good and bad – is huge, and commentary has divided accordingly. What is clear is that no organisation wants to miss out on the upside. With all the talk of Generative and Frontier models, 2023 has been brimming with excitement about the revolution ahead.

 

AI’s role in financial crime prevention

AI can play a pivotal role in the financial crime space by detecting and preventing fraudulent and criminal activity. Efforts are generally concentrated around two related but distinct objectives: thwarting fraudulent activity – stopping you or your relative from getting defrauded – and adhering to existing regulatory guidelines to support Anti-Money Laundering (AML) and Combatting the Financing of Terrorism (CFT).

 

Historically, AI deployment in the AML and CFT areas has faced concerns that it might overlook critical cases that traditional rule-based methods would catch. Within the past decade, however, regulators initiated a shift by encouraging innovation in AML and CFT work.

 

Although machine learning models have been used in fraud prevention for decades, adoption in AML/CFT has been much slower, with headlines and predictions outpacing actual action. The advent of Generative AI looks likely to change that equation dramatically.

 

One bright spot for AI in compliance over the last five years has been customer and counterparty screening, particularly the vast quantities of data involved in high-quality Adverse Media (aka Negative News) screening. Here, organisations look for the early signs of risk in the news media to protect themselves from potential issues.

 

The nature of high-volume screening against billions of unstructured documents has meant that the advantages of machine learning and artificial intelligence far outweigh the risks and enable organisations to undertake checks which would simply not be possible otherwise.

 

Now banks and other organisations want to go a stage further. As Generative AI models start to approach AGI (Artificial General Intelligence), where they can routinely outperform human analysts, the question is when, not if, they can use the technology to inform decisions and potentially even make decisions unilaterally.

 

Navigating AI safety in compliance

The 2023 AI Safety Summit was a significant milestone in acknowledging the importance of AI. The Summit resulted in 28 countries signing a declaration to continue meeting to address AI risks. The event also led to the inauguration of the AI Safety Institute, which will contribute to future research and collaboration on AI safety.

 

Though there are advantages to having an international focus on the AI conversation, GPT-style transformer models were the primary focus of the Summit. This risks oversimplifying, or confusing, the broader AI spectrum for those unfamiliar with the field. There is a wide range of AI technologies with hugely varying characteristics.

 

Regulators and others need to understand that complexity. Banks, government agencies, and global companies must take a thoughtful approach to AI utilisation, emphasising its safe, careful, and explainable use both inside and outside of compliance frameworks.

 

The way forward

The compliance landscape demands a review of standards for responsible AI use. It is essential to establish best practices and clear objectives to steer organisations away from hastily assembled AI solutions that compromise accuracy. Accuracy and reliability matter as much as innovation if fabrication and misinformation are to be avoided.

 

Within the banking sector, AI is being used to support compliance analysts already struggling with time constraints and growing regulatory responsibilities. AI can significantly aid teams by automating mundane tasks, augmenting decision-making processes, and enhancing fraud detection.

 

The UK is well placed to benefit from this opportunity. We should cultivate an ecosystem receptive to AI innovation across fintech, RegTech, and beyond. Clarity from government and thought leaders on AI, tailored to practical implementations in industry, is key.

 

We must also welcome new graduates from the growing global AI talent pool to fortify the country’s position in pioneering AI-driven solutions and integrating them seamlessly.

 

Amid industry change, prioritising and backing responsible AI deployment is crucial for the successful ongoing battle against all aspects of financial crime. 

 


 

Gabriel Hopkins is Chief Product Officer at Ripjar

 

Main image courtesy of iStockPhoto.com


© 2024, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543