Gabe Hopkins, Chief Product Officer at Ripjar, examines the upsides and downsides of integrating generative AI into the compliance process.

Through complex algorithms, Generative AI (GenAI) creates content including imagery, music, text, and video – all on demand. It can also be used to perform tasks and process data, making tedious work more manageable and saving considerable time, effort, and money. This is transformational for many industries, especially for teams looking to boost operational efficiency and drive innovation.

Compliance as a sector has traditionally shown hesitancy when it comes to implementing new technologies. In general, compliance takes longer to acquire and roll out new tools due to caution over perceived risks. Many compliance teams will not be using any AI, never mind GenAI. This hesitancy, however, means these teams are missing out on significant benefits, while less risk-averse industries are already experiencing the upside of embedding the technology in their systems.

Therefore, it’s time for compliance teams to look for ways to leverage all forms of AI, and GenAI in particular. Nevertheless, this needs to move forward in safe and tested ways, without introducing unnecessary risk.

Dispelling fears

GenAI is a new and rapidly developing technology. It’s only natural, therefore, that many compliance teams have some reservations about how it can be applied safely. In particular, teams tend to worry about sharing data, since that information might be used as part of training and become embedded in future models. Understandably, most organisations will not share data across the internet without strict privacy and security measures in place.

When considering the options for running models securely or locally, teams are likely also worried about costs, as much of the public discussion surrounding generative AI has focussed on the immense expense of preparing the foundation models.

Additionally, model governance teams within organisations will worry about the black-box nature of models. This casts a spotlight on the potential for models to embed biases towards specific groups. Once embedded at the foundational level, such bias can be difficult to spot.

However, the good news is that there are ways to use GenAI while overcoming these concerns: by selecting the right models, which provide the required security and privacy, and then fine-tuning those models within a strong statistical framework to mitigate biases.

In doing so, organisations will need to find the right resources – whether in-house data scientists or qualified vendors – to support them. Securing those resources, however, may itself prove challenging.

Challenges compliance teams may face

Despite initial hesitancy, analysts and other compliance professionals stand to gain massively by implementing GenAI. For example, teams in regulated organisations such as banks, fintechs and large corporations are often faced with huge workloads and resource constraints. Depending on the industry, teams may be responsible for identifying a range of risks (including sanctioned individuals and entities), adjusting to new regulatory requirements, managing huge quantities of data, or a combination of all three.

For compliance professionals, the task of reviewing huge quantities of potential matches can be incredibly monotonous and prone to error. If teams make mistakes and miss risks, the potential impact for firms can be significant, in terms of both financial and reputational consequences. It is not surprising, then, that organisations struggle to hire and retain staff, leading to a serious skills shortage among compliance professionals.

So what can organisations in regulated and other industries do to tackle issues of false positives and false negatives associated with modern customer and counter-party screening? It seems GenAI may hold some of the answers.

False positives occur when systems or teams incorrectly flag risks, while false negatives occur when genuine risks that should be flagged are missed. These errors may stem from human error and inaccurate systems, but they are hugely exacerbated by challenges such as name matching, risk identification and quantification – all of which can be mitigated with the right implementation of AI tools, including GenAI, without sacrificing accuracy.
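To see why name matching drives both kinds of error, consider a minimal sketch of threshold-based screening. The watchlist entries, the customer name, and the 0.8 threshold below are all illustrative assumptions, and real screening systems use far richer matching than this character-level similarity.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude character-level similarity between two normalised names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical watchlist and customer name, for illustration only.
watchlist = ["Jon Smith", "Ivan Petrov"]
customer = "John Smyth"

for entry in watchlist:
    score = name_similarity(customer, entry)
    # A single fixed threshold forces a trade-off: lower it and false
    # positives flood analysts; raise it and genuine risks slip through
    # as false negatives.
    flagged = score >= 0.8
    print(f"{entry}: similarity {score:.2f}, flagged: {flagged}")
```

Transliteration, aliases, and name-ordering differences make this trade-off far harder in practice, which is where combining GenAI with purpose-built matching models can help.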

Using Generative AI in compliance

GenAI can be implemented in various useful ways to improve compliance processes. The most obvious is in Suspicious Activity Report (SAR) narrative commentary. Compliance analysts must write a summary in a SAR explaining why a specific transaction or set of transactions is deemed suspicious. Well before the arrival of ChatGPT, forward-thinking compliance teams were using its ancestor technologies to semi-automate the writing of these narratives. It is a task that newer models excel at, particularly with human oversight.

The ability to produce summarised data can also be useful for tasks such as Politically Exposed Persons (PEP) or Adverse Media screening. These processes involve conducting reviews or research on a client to check for potential negative news and data sources. Importantly, these screenings allow companies to identify potential risks, preventing the company from becoming implicated or facing reputational damage as a result.

When deployed correctly, summary technology can enable analysts to review match information far more effectively and efficiently. With any AI deployment, it is essential to consider which tool is right for which activity, and the same is true here. Merging GenAI with other machine learning and AI techniques can provide a real step change: blending the generalised, deductive capabilities of GenAI with the highly measurable, comprehensive results available from well-established machine learning models.

For instance, traditional AI can be used to create profiles that differentiate between large quantities of organisations and individuals, separating out distinct identities. These techniques move past the historical hit-and-miss processes in which analysts carried out manual searches whose results were limited by arbitrary numeric limits. Once these profiles are available, GenAI supercharges analysts even further.

Final thoughts 

Results from the latest innovations are showing that GenAI-powered virtual analysts can achieve, or even surpass, human accuracy across a range of measures. Even so, concerns about accuracy will likely still slow adoption.

However, it is clear that future compliance teams will benefit heavily from these breakthroughs which will enable significant improvements in speed, effectiveness and the ability to react to new risks and constraints.

