Simon Axon, Financial Services Industry Director, International at Teradata, explores the tension between innovation and regulation in the finance sector.

Last year, the European Union (EU) launched the world’s first artificial intelligence (AI) regulation, the EU Artificial Intelligence Act, which came into force on 1 August 2024. The Act introduced a clear set of risk-based rules for AI developers and businesses, classifying specific AI use cases from high risk down to minimal risk. Much of financial services naturally falls into the high-risk category because of the vast amounts of personal data the sector collects and uses.

In response to the regulation, and ahead of the next EU AI Act deadlines coming up in August, financial institutions must re-evaluate and revamp their strategies to ensure compliance. Failing to do so can result in severe financial penalties of up to €35 million or 7 percent of the organisation’s total worldwide annual turnover, whichever is higher.

But compliance alone is not enough. Customers are increasingly pushing the sector to accelerate innovation and deliver more automated, personalised services. So how can financial institutions strike the right balance between compliance and innovation, and how can they use AI to achieve it?

AI innovation in banking

Financial services organisations must constantly innovate and digitally transform to stay competitive and address evolving customer demands. Advances in AI have supported this, enabling banks to reshape both their operations and their offerings.

Internally, banks have used AI to automate workflows and speed up decision-making and service delivery. They can leverage the technology to streamline routine tasks, so employees can dedicate more of their time to higher-value, more complex projects. AI can also help financial services organisations build more efficient processes for how data is collected, stored, and analysed. Data is a critical element in ensuring banks can innovate their products and services to address customer demands accurately and efficiently.

Understanding and analysing customer data also allows banks to predict future needs from past behaviour with high precision. These capabilities are particularly useful for identifying patterns in customer behaviour and offering more tailored, proactive services. Additionally, through predictive modelling, AI can help safeguard customers against fraud: it provides better insight into potential risks and can automatically flag and block suspicious transactions. This shows how banks can go further to protect both their customers and their own reputation.
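For illustration only, a minimal sketch of the kind of anomaly-based transaction flagging described above might look like the following; the features, thresholds, and use of scikit-learn are assumptions for demonstration, not a description of any bank’s actual system.

```python
# Illustrative sketch: anomaly-based transaction flagging (assumed features and library).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical transactions: amount (EUR) and hour of day.
normal_transactions = np.column_stack([
    rng.normal(60, 20, 1000),   # typical spend amounts
    rng.normal(14, 3, 1000),    # mostly daytime activity
])

# Fit an anomaly detector on past "normal" behaviour.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_transactions)

# Score new transactions; -1 marks a suspected anomaly for review or blocking.
new_transactions = np.array([[55.0, 13.0], [4200.0, 3.0]])
flags = model.predict(new_transactions)
for (amount, hour), flag in zip(new_transactions, flags):
    status = "flag for review" if flag == -1 else "clear"
    print(f"amount={amount:.2f} EUR, hour={hour:.0f} -> {status}")
```

In practice, production fraud systems combine far more signals and keep a human reviewer in the loop, which is precisely the kind of oversight the EU AI Act expects for high-risk applications.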

It has been encouraging to see how the sector is leveraging AI to innovate, from deploying the technology in its operations to enhancing customer experiences and risk assessment. However, banks must be careful to keep innovating while remaining compliant with strict regulations if they are to see the fruits of their labour.

Opportunities and challenges with AI

Regulations such as the EU AI Act emphasise that advanced technology must be safe and ethical while still encouraging innovation. To achieve this, organisations need to ensure the data AI uses is neither biased nor outdated, which in turn requires much stronger human oversight and control. The human layer within AI systems safeguards ethical operation and is crucial for compliance with the Act, particularly for high-risk AI applications.

Alongside concerns about biased information, there is also regulatory uncertainty around AI hallucinations, where an AI tool produces answers that appear correct but are in fact false. These hallucinations stem from the data developers used to train the model, as the model itself is not genuinely intelligent. They significantly undermine the trust that end users place in the model and its outputs.

Thriving in a regulatory environment

It is crucial that developers train their AI models on data that is reliable, transparent, and trusted, especially with the tighter regulations around the technology. High-quality, complete, and ethically sourced data must serve as the foundation for these models. 

Additionally, enhancing AI literacy and training is essential. Training should make clear the distinction between AI’s current capabilities and its future potential. Educational programmes should also extend beyond the bank’s own users of the technology to its customers, enabling them to better understand how the technology functions, how the bank applies it, and how it affects them.

In an era where the ethical use of AI in banking and financial services is no longer optional or merely a nice-to-have, the organisations that thrive will be those that drive safe and ethical innovation. These businesses must balance their aspirations for innovation with the stringent regulations designed to protect them and their customers against harm. In doing so, they will not only adhere to legal standards but will also be seen as trustworthy, forward-thinking players in the financial services sector.
