Chaithanya Krishnan, Head of Consulting Group, SLK Software, explores the potential of AI to help banks fight a new wave of fintech fraud.

AI adoption by banks and financial institutions isn’t a simple story. As a major, recent U.S. Treasury Department report pointed out, “Financial institutions have used AI systems in connection with their operations, and specifically to support their cybersecurity and anti-fraud operations, for years.” But those traditional forms of AI and existing risk management frameworks, the report also notes, may not be adequate to face emerging threats born of generative AI. What’s new is the massive amount of convincing synthetic content generative AI can create — automatically constructing fraudulent identities, behavior patterns, whole banking histories, and cyberattack schemes. 

Fraudsters are going on the offensive with generative AI, while defensive algorithms race to keep up with the new, supercharged forms of attack.

A 2024 survey of banking professionals revealed a knowledge gap that doesn’t help matters: only 23% reported that they definitely knew the difference between traditional AI and generative AI. And while a large bank like Goldman Sachs has over 1,000 developers using generative AI to help write code and summarise documents, those are different functions from directly combating fraud — and smaller banks don’t have that horsepower for any function. What’s more, Mitek’s latest research disturbingly found that a full third of surveyed risk professionals estimate that up to 30% of financial transactions may be fraudulent, that 42% of banks identified onboarding new customers as a process particularly susceptible to fraud, and that “nearly 1 in 5 banks struggle to verify customer identities effectively throughout the customer journey.”

Fraud on the rise in three key areas: mobile payments, account takeover, and cyberattacks

As generative AI becomes more sophisticated, the tools available to fraudsters are growing more complex and targeting many aspects of financial services. The sector is especially likely to see AI-enabled increases in mobile payment and transfer fraud, account takeover fraud, and cyberattacks that result in financial crime.

Mobile payments and transfer fraud

Mobile banking rates have increased, and so has the share of fraud perpetrated from mobile devices, rising from 47% in 2022 to 61% in 2023. Consumer Reports, evaluating the mobile banking apps of five of America’s largest banks as well as five newer digital banks, found that the apps do not offer adequate fraud prevention measures when judged on four criteria: real-time monitoring, fraud notifications, scam education on the bank’s website, and general fraud education within the app. Earlier this year, the Federal Trade Commission reported that payment fraud losses in 2023 increased 14% year-over-year to more than $10 billion, with bank transfers or payments being the top method of loss.

AI-powered systems offer hope, particularly for detecting mobile payment and transfer fraud while it is in progress. AI algorithms can analyse vast amounts of transactional data to detect patterns indicative of fraudulent activity within banking and mobile payment platforms. For instance, AI can identify unusual spending patterns, geographic anomalies, or suspicious login attempts in real time. Banks are already using AI-powered inspection, image analysis, and intelligent, configurable fraud decision engines to combat check fraud. This type of fraud is often executed on mobile devices and is projected to reach a stunning $24 billion globally this year. By continuously learning from historical data and adapting to new fraud trends, AI-powered systems leveraging pattern recognition and predictive machine learning can identify and flag potentially fraudulent transactions before they are completed.
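To make the idea concrete, here is a minimal sketch of that kind of pattern-based screening, using an off-the-shelf Isolation Forest from scikit-learn. The transaction features, synthetic history, and contamination setting are illustrative assumptions, not a production design.

```python
# A minimal sketch of transaction anomaly screening; data and features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical history of one customer's normal mobile transactions:
# amount spent, hour of day, and distance (km) from their usual location.
history = np.column_stack([
    rng.normal(45, 15, 1000),   # typical spend of around 45
    rng.normal(14, 3, 1000),    # mostly daytime activity
    rng.normal(2, 1, 1000),     # usually close to home
])

# Train on normal behavior; 'contamination' is a tunable estimate of how
# much of the history might itself be fraudulent.
model = IsolationForest(contamination=0.01, random_state=42).fit(history)

def needs_review(amount: float, hour: float, distance_km: float) -> bool:
    """Return True when a transaction looks anomalous enough to hold."""
    return model.predict([[amount, hour, distance_km]])[0] == -1  # -1 marks an outlier

print(needs_review(40.0, 13, 1.5))      # routine purchase: likely not flagged
print(needs_review(2500.0, 3, 800.0))   # large, 3 a.m., far from home: likely flagged
```

In practice the model would be retrained continuously on fresh, confirmed outcomes so that it adapts as fraud patterns shift.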

Account takeover

As generative AI can accurately reproduce a person’s voice, writing style, and image in photos and even video, fraudsters are stealing identities and fabricating new ones to engage in account takeover (ATO), fake account creation, and fraudulent account logins. TransUnion recently found that “nearly one in seven newly created digital accounts are suspected to be fraudulent.” Financial institutions can use AI algorithms to fight back by analysing user behavior and transaction patterns, including deviations from normal login times, locations, device types, and transaction amounts. These signals allow them to identify anomalies that may indicate an account takeover attempt. By monitoring user activities in real time, AI systems can detect suspicious behavior and trigger authentication challenges or account lockdowns to prevent unauthorised access. But the growth of this kind of attack demands equally aggressive growth in real-time AI detection and mitigation.
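As a simplified illustration of those behavioral checks, the sketch below compares a login attempt against a per-customer baseline and escalates from an ordinary allow to step-up authentication or a lockdown. The fields, thresholds, and actions are hypothetical, not an actual bank policy.

```python
# A simplified sketch of behavioral account-takeover checks; thresholds are assumptions.
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    usual_devices: set = field(default_factory=set)
    usual_countries: set = field(default_factory=set)
    usual_login_hours: range = range(6, 23)   # typically logs in 06:00-22:59
    typical_max_transfer: float = 1_000.0

@dataclass
class LoginEvent:
    device_id: str
    country: str
    hour: int
    requested_transfer: float = 0.0

def risk_score(profile: CustomerProfile, event: LoginEvent) -> int:
    """Count how many behavioral signals deviate from the customer's baseline."""
    score = 0
    score += event.device_id not in profile.usual_devices
    score += event.country not in profile.usual_countries
    score += event.hour not in profile.usual_login_hours
    score += event.requested_transfer > profile.typical_max_transfer
    return score

def respond(profile: CustomerProfile, event: LoginEvent) -> str:
    """Map the score to an action: allow, challenge, or lock the account."""
    score = risk_score(profile, event)
    if score >= 3:
        return "lock_account_and_alert"
    if score >= 1:
        return "step_up_authentication"
    return "allow"

profile = CustomerProfile({"phone-a1"}, {"GB"})
print(respond(profile, LoginEvent("phone-a1", "GB", 10)))           # allow
print(respond(profile, LoginEvent("laptop-x9", "RU", 3, 5_000.0)))  # lock_account_and_alert
```

A production system would learn these baselines statistically rather than hard-coding them, but the escalation logic is the same in spirit.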

Cyberattacks

AI-enabled cyberattacks that result in financial crime are on the rise, too. For example, generative AI chatbots and other tools are helping hackers perpetrate social engineering designed to infiltrate accounts and trick employees of financial institutions. The U.S. Treasury Department has urged banks that are moving too slowly to act on these cyber threats. AI-powered systems and algorithms can analyse network traffic, scrutinise email communications to identify phishing attempts, detect malware signatures and patterns indicative of ransomware activity or business email compromise (BEC) scams, and predict potential vulnerabilities in financial systems based on historical data.
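For the phishing piece specifically, one simple approach is a text classifier over email content. The toy example below is only a sketch with invented training data; a real deployment would combine a far larger labelled corpus with signals such as sender reputation and link analysis.

```python
# An illustrative sketch of AI-assisted phishing screening; examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your wire transfer requires immediate confirmation, click here",
    "Lunch meeting moved to 1pm, see updated calendar invite",
    "Quarterly compliance training is now available on the portal",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

suspect = ["Immediate action required: confirm your account credentials"]
print(classifier.predict_proba(suspect)[0][1])  # estimated probability of phishing
```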

Collaboration is key to fighting fraud in the AI era

To date, AI-driven fraud attacks have typically been isolated, one-off efforts, but a shift is underway: AI-driven fraud collusion networks are emerging to mount massive, coordinated attack campaigns. Countering them will require even more sophisticated AI algorithms working together to identify large-scale fraud schemes across multiple financial institutions. Now and in the future, banks must collaborate on many levels in order to keep pace with, or outpace, criminals.

Cross-enterprise collaboration among AI model and technology teams, legal and compliance teams, and others will create a shared advantage in fraud prevention. However, the sharing of fraud information among financial firms is currently limited. A clearinghouse that would allow the rapid sharing of fraud data and support financial institutions of all sizes has been proposed, though it does not yet exist. Smaller institutions are at a particular disadvantage, and are more negatively impacted by the absence of fraud-related data sharing, because they often lack the broad set of client relationships and the wide base of historical fraud data needed to develop and train AI models. Fraudsters know this, and they know that smaller institutions are more vulnerable as a result.

Working through AI adoption challenges 

As banks work to speed up their AI collaboration and adoption efforts to combat fraud — and find ways to take full advantage of generative AI to complement other kinds of predictive AI and machine learning — they face three major kinds of challenges, shared by enterprises in other industries: reliability, domain context, and business integration. We know that, as fast as development is happening, large language models (LLMs) are not yet fully “enterprise-ready.”

Successful implementation of generative AI solutions requires reliability, predictability, and explainability of output. That means hallucinations and bias are simply not acceptable in production environments. Banks must be able to offer evidence for an action or decision to auditors and to maintain a good reputation with customers. AI models must also account for organisational context, consuming the vast data that helps them “understand” an organisation’s internal processes, unique history, and particularities. And banks must integrate models into business workflows in order to tie them to real value creation.

Five AI strategies banks should adopt to counter fraud

Banks can and should take action by adopting specific strategies to prevent and mitigate fraud. First, they can use predictive modeling and anomaly detection to spot irregularities in customer transactions by analysing transaction history, location data, spending habits, and other signals; any deviations from the norm can be flagged for additional scrutiny. For example, sudden large purchases, transactions from unusual locations, or activity at odd hours may indicate a problem. Analysis of bank statements can also help predict future spending patterns based on past behavior.
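The anomaly-detection sketch earlier showed the unsupervised side of this strategy; the predictive-modeling side can be pictured as a classifier trained on transactions that were later confirmed legitimate or fraudulent. The synthetic data and feature choices below are illustrative assumptions only.

```python
# A minimal sketch of supervised fraud scoring; the data and features are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Columns: amount, hour of day, km from home, merchant seen before (0/1).
legit = np.column_stack([rng.normal(60, 25, 500), rng.integers(7, 22, 500),
                         rng.normal(3, 2, 500), rng.integers(0, 2, 500)])
fraud = np.column_stack([rng.normal(900, 400, 25), rng.integers(0, 6, 25),
                         rng.normal(400, 200, 25), np.zeros(25)])

X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 25)   # 0 = confirmed legitimate, 1 = confirmed fraud

model = GradientBoostingClassifier().fit(X, y)

# Probability that a new transaction is fraudulent; anything above a chosen
# review threshold would be held for additional scrutiny.
new_txn = [[1200.0, 3, 650.0, 0]]
print(model.predict_proba(new_txn)[0][1])
```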

Biometric authentication is another strategy banks should integrate into their processes. Financial institutions can use AI-powered biometrics such as fingerprint, facial, and voice recognition, along with behavioral parameters, to significantly reduce the risk of unauthorised access and, with it, fraud.
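In schematic terms, many biometric checks come down to comparing a numerical “embedding” of a freshly captured sample with the one stored at enrolment. The sketch below illustrates that comparison; the embed function here is a stand-in, since a real deployment would use a trained face- or voice-recognition model and a carefully calibrated threshold.

```python
# A schematic sketch of biometric verification by embedding comparison.
import numpy as np

def embed(sample: np.ndarray) -> np.ndarray:
    """Placeholder for a biometric model that maps a sample to a unit vector."""
    vec = sample.astype(float).ravel()
    return vec / np.linalg.norm(vec)

def verify(enrolled: np.ndarray, presented: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept the login only when cosine similarity exceeds the threshold."""
    similarity = float(np.dot(embed(enrolled), embed(presented)))
    return similarity >= threshold

enrolled_scan = np.array([0.12, 0.80, 0.33, 0.45])   # stored at enrolment
fresh_scan = np.array([0.11, 0.82, 0.30, 0.47])      # captured at login
print(verify(enrolled_scan, fresh_scan))             # similar vectors -> True
```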

AI can also improve document analysis. AI-driven systems can analyse the customer documents used for identification more accurately, which helps detect forgeries.

Banks should leverage AI for automated threat response as well. By using AI to automate tasks such as blocking suspicious transactions, contacting customers for verification, and notifying law enforcement in cases of suspected fraud, banks can sharply speed up response times and reduce losses.
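A simple way to picture this is a rules layer sitting on top of the risk score produced by the models above: the score bands and actions in the sketch below are assumptions, since real playbooks are set by each bank’s fraud and compliance teams.

```python
# An illustrative sketch of automated response routing; bands and actions are assumptions.
def respond_to_alert(transaction_id: str, risk_score: float) -> list[str]:
    """Return the ordered list of automated actions for a flagged transaction."""
    actions = []
    if risk_score >= 0.9:
        actions += [f"block:{transaction_id}",
                    f"notify_customer:{transaction_id}",
                    f"open_case_for_reporting:{transaction_id}"]
    elif risk_score >= 0.6:
        actions += [f"hold:{transaction_id}",
                    f"request_customer_verification:{transaction_id}"]
    else:
        actions.append(f"log_for_review:{transaction_id}")
    return actions

print(respond_to_alert("txn-1042", 0.95))  # high risk: block, notify, open a case
print(respond_to_alert("txn-1043", 0.65))  # medium risk: hold and verify with the customer
```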

Finally, banks should use AI for data integration and enrichment. By integrating data from various sources, including internal databases, social media, and public records, banks can quickly build a comprehensive view of a customer’s identity and minimise fraud risk.
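At its simplest, that enrichment is a join across data sources keyed on shared identifiers. The small sketch below uses invented datasets and column names purely to illustrate the idea.

```python
# A small sketch of data integration for identity checks; datasets are invented.
import pandas as pd

accounts = pd.DataFrame({
    "customer_id": ["C001", "C002", "C003"],
    "name": ["A. Rivera", "B. Chen", "C. Okafor"],
    "phone": ["+44 20 7946 0001", "+44 20 7946 0002", "+44 20 7946 0003"],
})

external_records = pd.DataFrame({
    "phone": ["+44 20 7946 0002"],
    "reported_in_fraud_complaint": [True],
})

# Left-join on phone number; customers with a matching external report are
# flagged for enhanced due diligence during onboarding.
enriched = accounts.merge(external_records, on="phone", how="left")
enriched["needs_review"] = enriched["reported_in_fraud_complaint"].eq(True)
print(enriched[["customer_id", "needs_review"]])
```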

Final thoughts

Consumers look to banks to be stalwarts of protection and stability in rapidly changing times. Economic and social systems depend on it. Getting in front of fraud in the AI era is a complex endeavor for banks, but an imperative.

It’s only through smart and collaborative AI adoption that they can face the threats AI-powered fraud poses, protect consumers and improve their experience, and remain competitive for the long term.
