Our cover star Shadman Zafar, Founder & CEO of Vibrant Capital, is building a CIO-led model for enterprise transformation. Vibrant Capital is an operator-led investment and company-building platform focused on scaling AI in the real economy. “We don’t spray investments across hundreds of AI startups. We curate a portfolio with purpose – selecting companies that solve the real mission-critical problems CIOs face in scaling AI adoption.”
FNB: Redefining Data Science in Commercial Banking
We also hear from Yudhvir Seetharam, Chief Analytics Officer at South Africa’s First National Bank (FNB) on a data science journey characterised by curiosity, culture and the drive for a competitive edge. “Ours is a holistic approach focusing on the customer,” he explains. “Understanding the context of each customer journey and then using that context so that when we interact with you, we’re able to drive the right conversation with the right customer, at the right time, through the right channel and for the right reason. These ‘five rights’ make our interactions with clients more impactful.”
Virginia Farm Bureau: An Enterprise CIO’s Journey
Shifting focus to the world of insurance at the Virginia Farm Bureau, we spoke with an Enterprise CIO at a complex, mission-driven organisation. As he approaches retirement, Patrick (Pat) Caine reflects on his career as a CIO and the centennial of an organisation renowned for resiliency, collaboration, commitment to a greater cause, diversity and service to its members. “In my role as CIO, I’ve always been that person who connects the dots between business needs and technology execution. Virginia Farm Bureau is digitally relevant, collaborative, and well‑positioned for the future.”
Mastercard: Protecting Trust in the Digital Economy
Michele Centemero, EVP Services at Mastercard Europe, explains why promoting awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems will be critical in reducing the impact of scams and protecting trust in the digital economy. “The combination of AI, robust identity controls and open banking can help protect consumers from scams, whether across card and account‑to‑account payments or in fraudulent account openings.”
Thales on AI Security: How FinServ’s Budget Priorities Signal a Boardroom Shift
Todd Moore, Global VP – Data Security Products at Thales, reveals why making AI security a boardroom priority today will help firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation. “Balancing AI’s opportunity and risk means embedding security at every stage, from design to deployment and ongoing monitoring.”
Paymentology: The First Live AI-Agent Payment Is a Test for Credit Infrastructure
Thomas Benjaminsen Normann, Product Director at Paymentology, dissects the future for agentic payments and the progress still to be made. “Agentic payments demand something more granular: a clearer account of who or what acted, under what limits, and with what right to create a liability on the customer’s behalf.”
Also in this issue, we hear from Publicis Sapient, on why asset managers must redesign their enterprise for AI-driven decision intelligence; learn from Bitpace why the most resilient payments infrastructure will be the one with the most adaptability; rank the AI maturity of 12 of the largest payments networks in the latest Evident AI Index; and round up the key FinTech events and conferences across the globe.
Michele Centemero, EVP Services, Mastercard Europe on why promoting awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems, will be critical in reducing the impact of scams and protecting trust in the digital economy
As our world becomes faster, smarter and more interconnected, scammers are evolving in parallel, developing increasingly sophisticated ways to exploit people’s trust. By harnessing new technologies and behavioural insights, they are refining their methods to appear ever more credible and convincing.
While attacks on systems continue, today’s fraudsters are increasingly targeting people, often relying on psychological manipulation to achieve their goals.
Understanding Social Engineering
Many modern scams fall under the umbrella of social engineering, which is the use of deception and emotional manipulation to influence a person’s behaviour.
In the digital world, cybercriminals use these tactics to build false trust, create urgency or fear, and ultimately trick people into sharing confidential information or taking actions that can cause financial harm to themselves or their employer.
Recent European industry data indicates that social engineering-related fraud and authorised push payment (APP) fraud – where victims are tricked into sending money to fraudsters posing as legitimate payees – now account for a growing share of overall scam losses[1].
This is directly impacting a growing number of consumers, with the majority of people saying they’ve experienced some form of scam or fraudulent attempt to capture their personal information, highlighting why awareness and vigilance are critical for people of all ages.
Education is the First Line of Defence
Protecting consumers and businesses from malicious activity is a priority, and it starts with awareness. When people understand how scams work, they’re more likely to spot the warning signs before it’s too late and be empowered to protect themselves against fraudsters.
Three of the most common social engineering scams to watch out for are:
Imposter fraud – Criminals pose as trusted organisations (such as banks, retailers, or government bodies) to pressure victims into sharing personal or financial details. Research indicates over half (53%) of European consumers have been targeted via phone or voice call scams, with social media scams affecting around two in five people, and tech support impersonation tricking roughly one in three.*
Phishing – Fraudulent emails, texts, or messages that are designed to look legitimate, often urging immediate action like clicking a link or resetting a password, leading victims to disclose sensitive information or install malicious software. Nearly three in five (58%) have received phishing emails, 63% have received fraudulent text messages, and QR code scams are on the rise, impacting nearly a quarter of Europeans.*
Romance or honeypot scams – Scammers build emotional relationships over time, gaining trust before exploiting it for financial gain. These types of attacks are also widespread, with one in four people (24%) encountering fake profiles, requests for money, or online relationships that lead to financial exploitation. These scams hit younger generations hardest, with 40% of Gen Z and 35% of Millennials affected, compared with 21% of Gen X and 11% of Boomers.*
How Businesses Can Protect Consumers from Scams
With fraudsters increasingly using AI to commit more sophisticated, larger scale attacks, businesses and banks should also consider how they deploy technology to protect customers from bad actors.
The combination of AI, robust identity controls and open banking can help protect consumers from scams, whether across card and account‑to‑account payments or in fraudulent account openings.
Looking at identity controls specifically – take the example of continuous identity verification, a fraud prevention measure that verifies the user is who they claim to be throughout the entire lifecycle journey. This helps to prevent scammers from opening or taking over accounts to apply for credit, create ‘mule’ accounts or impersonate others.
Behavioural biometric data is often used as part of this and can be used to analyse how a user interacts with their device – from typing patterns to on‑screen movements – to flag unusual behaviour.
Going deeper, AI-powered transaction analysis can also help banks and financial institutions stay ahead of payment threats. It provides banks with the intelligence needed to detect and stop payments to scammers, using AI and a network-level view of account‑to‑account transactions to enable intervention before funds leave an account.
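To make the idea concrete, here is a purely illustrative toy sketch of the kind of rules-plus-network-signal scoring such transaction analysis combines. Every field name, rule, and threshold below is invented for illustration; this is not Mastercard's actual system.

```python
# Toy risk scoring for an account-to-account payment. All rules and
# thresholds are hypothetical, chosen only to illustrate the approach.

def score_transaction(tx, payee_history):
    """Return a 0-1 risk score; higher means intervene before funds leave."""
    score = 0.0
    # Scam payments often go to payees the customer has never paid before.
    if tx["payee"] not in payee_history:
        score += 0.4
    # Unusually large amount relative to the customer's payment history.
    avg = sum(payee_history.values()) / max(len(payee_history), 1)
    if tx["amount"] > 3 * max(avg, 1):
        score += 0.3
    # Network-level signal: the payee account has been flagged elsewhere
    # as a suspected 'mule' account.
    if tx.get("payee_flagged_as_mule"):
        score += 0.3
    return min(score, 1.0)

history = {"landlord": 800.0, "utility": 120.0}
risky = {"payee": "unknown-acct", "amount": 5000.0, "payee_flagged_as_mule": True}
print(score_transaction(risky, history))  # 1.0 -> high risk, hold the payment
```

In practice a model learns such weights from labelled fraud data rather than hand-coding them, but the combination of customer-level history and network-level flags is the core of the approach described above.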
Staying Ahead of an Ever-Evolving Threat
As social engineering tactics continue to evolve, staying ahead requires a combination of intelligent technology, consumer education, and proactive action from businesses and financial institutions.
While no single measure can eliminate risk entirely, greater awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems will be critical in reducing the impact of scams and protecting trust in the digital economy.
*Source: This study was conducted by The Harris Poll on behalf of Mastercard from September 8 to September 25, 2025, among 5000+ consumers in the following European markets: EUR: France (n=1,005), Germany (n=1,002), Italy (n=1,016), Spain (n=1,005), UK (n=1,004)
Mastercard: Transforming the Fight Against Scams
Innovation – Our advanced AI-powered Identity insights examine digital footprints and assess unique patterns to detect risk and flag suspicious activity indicative of scams.
Collaboration – We collaborate across industries, partners and organizations worldwide to secure the digital ecosystem, ensuring payments are safe for all. Combating the growing threat of scams demands a collective effort.
Education – We work with and through our collaborators to provide knowledge and tools that help people protect themselves and their loved ones from scams, while also working to destigmatise the experience of being a victim.
$12.5bn in losses from U.S. consumer reported online scams in 2023
$486bn in global losses from scams and bank fraud schemes in 2023
22% YoY growth in U.S. consumer scam losses suffered in 2023
From sender to recipient, we vigilantly monitor accounts and transactions for any elevated scam risk
Identity insights – Provides actionable identity insights and risk scores that help businesses distinguish their good customers from scammers creating “mule” accounts or impersonating others with false identities.
Transaction patterns – Flags suspicious activity across the money movement flow, using real-time analysis of transaction elements to prevent payments to scammers before funds are sent.
Account confirmation – Enables account validation to confirm account ownership and validate identity details in real-time through our open banking capability, which draws on the safe exchange of consumer-permissioned data to facilitate frictionless and secure payments.
Todd Moore, Global Vice President, Data Security Products at Thales, on why making AI security a boardroom priority today will help firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation
Financial Services organisations are responsible for some of the biggest growth in the global economy. Equally, they’re some of the most vulnerable. Like many other sectors, they’re racing to embrace AI, but with adoption comes new security risks.
According to Thales’ Data Threat Report: Financial Services Edition, 81% of FinServ organisations are now investing in GenAI-specific security tools, with nearly a quarter using newly allocated budget. This surge in funding marks a turning point: AI security has moved from being an IT concern to a boardroom priority.
The fact that new budget lines are being carved out specifically for AI security signals a fundamental shift in corporate strategy. Boards increasingly recognise that protecting AI systems is as critical as safeguarding payment rails or core banking infrastructure. For an industry built on trust, resilience, and regulatory compliance, this investment wave shows how central AI has become to both risk management and competitive growth.
Balancing AI Innovation and Security
While FinServ organisations are aware of the security risks AI poses, they’re also seizing upon the opportunities it presents. The report found that in 2024, FinServ businesses outpaced the broader market in AI deployment, leading both in enabling employees to use AI and in AI integration, a trend that has continued into 2025. Additionally, 45% say they’re in the ‘integration’ or ‘transformation’ phases of their GenAI journey, compared to just 33% across wider industries.
AI’s ability to accelerate services, automate processes, and analyse data at scale makes it an exciting prospect, especially in the financial sector. This makes securing AI systems a priority for FinServ organisations, with increased GenAI integration reflecting developing organisational maturity and progress beyond experimentation.
The Risk
Yet the scale of opportunity is matched by the scale of challenge. AI systems require vast amounts of structured and unstructured data to conduct analysis and make recommendations.
For FinServ organisations, this often includes highly sensitive customer and transactional information, proprietary algorithms, and records bound by strict regulatory oversight. The risk is not only about whether AI systems themselves are secure, but whether the data they’re working from is accurate, as well as whether their adoption inadvertently creates new routes to data exposure and exfiltration.
Businesses need a clear strategy to fully understand how AI models are operating within their IT infrastructure, the applications they’re interacting with, and the data they’re accessing and pulling from.
The Response
Balancing AI’s opportunity and risk means embedding security at every stage, from design to deployment and ongoing monitoring. Newly allocated budgets for AI security, with nearly a quarter of FinServ firms making such investments, show how central AI has become to board-level strategy. These investments move firms beyond reactive fixes to proactive frameworks that evolve with the technology. AI security is no longer just an IT concern; it’s a strategic priority requiring collaboration between security, compliance, and business leaders. By factoring risk into early planning, organisations can align innovation with responsibility and build resilience for the long term.
Pioneering AI Security
Building on investment in AI-specific security is only the beginning. As scrutiny intensifies, the firms that will lead are those that treat AI security as integral to business strategy, not a bolt-on layer. Success will require visibility into how models behave, continuous validation against emerging risks, and adaptive controls that evolve with the threat landscape.
The financial services organisations that embed these safeguards into their core infrastructure will protect sensitive data as well as set a benchmark for resilience and trust in an AI-driven economy. By making AI security a boardroom priority today, these firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation.
Thales: AI is the New Insider Threat
Thales 2026 Data Threat Report Finds 70% of Organisations Rank AI as Top Data Security Risk
Data security has taken centre stage as the success of enterprise AI initiatives increasingly hinges on consistent, controlled access to proprietary organisational data sources. The 2026 Thales Data Threat Report examines the complex calculus that organisations must undertake to enable innovation while securing their most valuable asset – their data.
This research was based on a global survey of 3,120 respondents fielded via web survey with targeted populations for each country, aimed at professionals in security and IT management.
Lee Fredricks, Director – Solutions Consulting, EMEA at PagerDuty, on why technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability
Technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability. In 2025, too many cloud service outages and disruptions took place across the public and private sectors, and now regulatory, technological and cultural pressures are converging to say that enough is enough.
Outages often translate into broader repercussions for the organisation, including revenue impact, customer churn, share price pressure and potentially regulatory reporting obligations. Operational metrics must now be discussed alongside financial KPIs at the board level. C-suite leaders understand accountability, especially within the very regulated financial sector.
DORA’s First Birthday
It’s now been one year since the implementation of the Digital Operational Resilience Act, or DORA, introduced by the EU to strengthen the digital resilience of financial institutions. By now, organisations have had time to consider moving from mere compliance to creating a competitive edge from their investments.
Enterprise tech leaders are in the middle of a balancing act. They’re managing ongoing modernisation and transformation initiatives while navigating multi-jurisdictional regulatory scrutiny. At the same time, they face constant pressure from the board and must meet evolving customer needs—all competing for immediate attention. The stakes have never been higher. Operations teams are no longer viewed as a back-office IT function. Their success in keeping the organisation running and driving revenue is now a board-level concern.
For organisations today, IT is business delivery.
A year of DORA has seen organisations shift from mere compliance to meaningful, demonstrable testing, third-party risk visibility and strictly mandated incident reporting timelines. Financial firms have lessened their exposure to risk: payments providers are no longer reliant on a single cloud region or SaaS supplier, and can now provide evidence of real-time incident response efforts and auditable logs after a disruption.
One benefit of these overall systemic improvements is enhanced supply chain accountability. Financial institutions and their technology partners are both liable for potential penalties and reputational risk, which makes it highly critical that they can prove their resilience capabilities.
Nevertheless, operational resilience is a continuous discipline. A fragmented incident response can expose firms to regulatory and reputational risk again and again if not addressed systemically. As such, many organisations are looking toward AI agents as part of a move towards ‘no-touch’ operations.
From Autonomy to Self-Healing
Under set policies, autonomous agents can handle incident response and operational tasks such as detection, triage and remediation. AI agents deployed in operations may become the backbone of L1 (first contact) and L2 (more skilled) support. Contrast this with the traditional, reactive, ticket-driven model of IT: the industry can move much faster and with a higher successful close rate. Intelligent automation reduces mean time to detection and resolution, lowers the volume of incidents reaching L3, and can improve service availability percentages. Well-integrated agents that genuinely support existing operations teams also help manage the talent shortages faced by many organisations.
A typical incident lifecycle with agentic processes includes several stages depending on the model, but can be summarised as: anomaly detected, correlated with a recent deployment, a remediation script triggered, and a human notified if set thresholds are breached. Such no-touch operations are golden in any sector, but particularly in industries such as digital banking and retail, where peak traffic periods demand near-instant response and poor customer experience is a powerful motivator for users to change providers instantly.
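The lifecycle summarised above can be sketched in a few lines of policy-driven code. This is a deliberately minimal illustration, not any vendor's product: the service names and both thresholds are invented, and real platforms layer on audit trails, approvals and richer correlation.

```python
# Hypothetical agentic incident handler: auto-remediate under policy,
# escalate to a human only when a set threshold is breached.

ERROR_RATE_THRESHOLD = 0.05   # above this, the agent acts (L1/L2)
ESCALATION_THRESHOLD = 0.20   # above this, a human is paged (L3)

def handle_incident(service, error_rate, recent_deploys, notify, rollback):
    """Return the list of actions taken for one detected anomaly."""
    actions = []
    if error_rate <= ERROR_RATE_THRESHOLD:
        return actions  # healthy: true no-touch operations
    # Correlate the anomaly with a recent deployment of this service.
    if service in recent_deploys:
        rollback(service)  # automated remediation script
        actions.append("rollback")
    # Policy-driven escalation: humans handle the edge cases.
    if error_rate > ESCALATION_THRESHOLD:
        notify(f"{service}: error rate {error_rate:.0%}, human review needed")
        actions.append("escalated")
    return actions

alerts = []
done = handle_incident(
    "payments-api", 0.25, {"payments-api"},
    notify=alerts.append, rollback=lambda s: None,
)
print(done)  # ['rollback', 'escalated']
```

The key design point is that autonomy stays policy-driven: the agent's remit is bounded by explicit thresholds, and anything beyond them is routed to a person with the evidence already gathered.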
IT Standardisation
In addition, consider standardisation as part of strategic infrastructure best practices. There is a role for central operations clouds and operational ‘golden paths’ as solid foundations for reliable operational scale and dependability. Standardisation enables consistent, scalable operational excellence especially across large, distributed enterprises. ‘There is one way and it is the right way’ can be a great time and stress saver for operational teams – particularly if a regulatory notification and clear evidence is required.
For example, a global bank might define a single golden path for deploying customer-facing applications with pre-approved monitoring, incident response workflows, and regulatory reporting templates built in. In an outage, teams follow the same process and automatically capture the evidence required for regulators, avoiding confusion, delays, and compliance risk.
All of these possibilities take us to an exciting new place for an evolved set of developer and operational roles. When organisations enable AI to reshape daily engineering work away from manual firefighting and low-value work it frees headspace and time for developers and engineers to move into more architectural thinking and intelligent oversight of automated systems. These augmented teams will be empowered to manage simple situations instantly and devote more time and attention to the more difficult issues – the edge cases and the strategic necessities.
Enabling Agentic AI
Using another lens, businesses with agentic IT operations capabilities support their current talent, extending their reach and the speed of their response. The winning organisations will be those who deploy agents strategically, freeing up humans for that higher-value work – i.e. L3 expert support – and setting new standards for operational excellence that customers can rely on. Ideally this means making commensurate investment in existing people, training and organisational change management. A culture of continual upskilling and forecasting that points humans to where they make the best impact will be just as important as the autonomous tech tools working alongside them.
Autonomous agents allow many new services, and one of those can be described as self-healing operations. This evolution of the operations world is where predictive detection, automated remediation and embedded resilience all coalesce. With an autonomous process of testing, maintenance and remediation, organisations can focus on finely measuring improved customer trust. They can also enjoy the productivity and revenue benefits of high business continuity and availability.
AI is still a new technology, and many are legitimately concerned with the concept of autonomous agents. There is a need for clear guardrails, audit trails and explainability in automated remediation, and many technology partners have invested in their ability to support across these areas. Moreover, firms must maintain direction with policy-driven automation rather than uncontrolled autonomy, particularly in regulated industries.
Mandate Operational Excellence
This year is very likely to reward organisations that treat operational resilience as core to their business strategy. Those investing in automation, standardisation and governance will set the pace for their industries in an AI-enabled and increasingly autonomous world.
Regulators are already expanding their scrutiny and reliability expectations beyond financial services firms. Across the world, jurisdictions are increasingly looking to strengthen their economies, and digital services in particular, through resilience and cybersecurity measures. At the same time, agentic operations, and the organisational performance benefits they support, will rapidly become table stakes technology in all sectors. Inevitably, customers will judge brands on digital reliability as much as price or product features when evidence of outages is a click or a headline search away.
Start now. Audit internal incident response maturity, review the potentially complex web of third-party IT dependencies and identify where automation makes clear business sense. While resilience is an investment in compliance, it is also critical to ensure customer trust and future stability.
Jamil Jiva, Global Head of Asset Management at Linedata, on why the next chapter of AI-driven finance will be shaped not just by technology, but by creativity
Beyond Data: Where AI Finds Unexpected Inspiration
The discussion about training AI largely focuses on concerns that accessible, human-generated data is limited and may soon run out completely. If this is the case, how can technology that depends on a seemingly endless stream of inputs to iterate, test, and adapt deliver the results we expect? AI relies on structured, high-quality data to thrive, but what happens when we run out of spreadsheets and financial models to train AI? We need new data sources to ensure it continues to learn, adapt, and deliver accurate insights. Video games stand out as offering some of the richest, most expansive, and complex environments for AI training.
At first glance, video games and financial operations seem to belong to entirely separate worlds. However, AI connects these domains, with models leveraging virtual-world training to tackle real-world financial tasks. Financial documents such as credit agreements and tax returns are often convoluted, unstructured, and labour-intensive to process. Therefore, AI designed to interpret such data must possess strategic reasoning, real-time adaptability, and advanced pattern recognition. So, could video games be the ideal training ground?
Contrary to popular belief, gameplay can significantly improve how people think, learn, and solve problems. The abilities required to excel at video games closely reflect the skills AI systems must acquire today.
Levelling Up: What Virtual Worlds Teach Machines
Practice leads to proficiency, a principle that applies to both humans and AI. Interestingly, many of the most significant advances in AI development have emerged not from conventional data training, but from taking creative approaches. Games push AI to emulate human thinking and sharpen its statistical intuition.
These game-trained models are neither expensive nor heavily reliant on resources, and they sidestep the issue of data scarcity. As a result, they are actively shaping the future of financial intelligence. The examples below offer a clear demonstration of the potential of gameplay.
Virtual Economies: Lessons from World of Warcraft
World of Warcraft, with millions of players interacting in an immersive and dynamic world, features an economy that closely mirrors real-world financial systems, complete with inflation, supply and demand cycles, and fraud risks. The game even inspired one of the most renowned epidemiological studies: when the in-game ‘Corrupted Blood’ plague spread unpredictably, scientists used it as a model for real-world pandemic simulations.
Financial models depend on vast, interconnected data networks, much like the economy in World of Warcraft. Organisations employ AI to continuously monitor patterns, detect anomalies such as fraud or misstatements, and optimise data extraction for financial reporting, mirroring the way AI analyses virtual economies.
Urban Chaos: GTA V and Real-World Simulation
While Grand Theft Auto (GTA) V is famous for its open-world chaos, researchers have leveraged its traffic systems and non-player character behaviours to train AI for applications such as self-driving cars, crime pattern recognition, and urban planning. At its heart, GTA provides a platform for AI to process vast amounts of unstructured data in real time.
Similarly, financial institutions manage millions of data points from a wide range of sources. Their AI tools must automatically extract insights, classify information, and normalise complex formats. GTA serves as a controlled yet intricate environment for simulating scenarios, enabling AI to optimise for real-world tasks through ongoing feedback loops.
Sandbox Creativity: Minecraft and Adaptive Thinking
Minecraft provides a sandbox environment where AI learns through exploration. OpenAI even trained an AI to play Minecraft by watching YouTube tutorials, closely mimicking the way humans learn. Similarly, any AI used by financial institutions must be able to self-learn from new document types and structures, adapting just as a Minecraft AI learns to survive.
Reinforcement learning, where AI improves based on feedback, is a key element of intelligent document processing. Thanks to its vast scalability and dynamic, hierarchical environments, Minecraft serves as an ideal setting for navigation and repeated feedback loops, helping models develop domain-flexible reasoning.
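As a toy illustration of that feedback loop, one common reinforcement-learning pattern is an epsilon-greedy bandit: the system tries different strategies, observes which succeed, and gradually favours the best one. Applied to document processing, the "strategies" might be alternative extraction pipelines. Everything below is invented for illustration, including the strategy names and the simulated feedback signal.

```python
# Epsilon-greedy feedback loop: learn which (hypothetical) extraction
# strategy works best for a document type from repeated success/failure
# feedback, exactly the trial-and-error pattern games train so well.
import random

random.seed(0)
strategies = ["table_parser", "layout_model", "plain_text"]
value = {s: 0.0 for s in strategies}   # estimated success rate per strategy
counts = {s: 0 for s in strategies}

def reward(strategy):
    # Stand-in for validation/human feedback; in this simulated world
    # 'layout_model' succeeds most often on this document type.
    p = {"table_parser": 0.5, "layout_model": 0.9, "plain_text": 0.2}[strategy]
    return 1.0 if random.random() < p else 0.0

for _ in range(500):
    # Mostly exploit the best-known strategy, sometimes explore others.
    if random.random() < 0.1:
        s = random.choice(strategies)
    else:
        s = max(strategies, key=value.get)
    r = reward(s)
    counts[s] += 1
    value[s] += (r - value[s]) / counts[s]   # incremental mean update

best = max(strategies, key=value.get)
print(best, {s: round(v, 2) for s, v in value.items()})
```

After enough feedback the loop concentrates its choices on the strategy with the highest observed success rate; production systems use far richer state and reward signals, but the learn-from-feedback core is the same.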
Multiplayer Mayhem: Dota 2 and the Art of Teamwork
Dota 2 stands out as one of the most complex competitive games ever created, presenting AI with challenges in real-time decision-making, strategic coordination, and adaptability. OpenAI Five, trained on the equivalent of 45,000 years of gameplay within just 10 months, managed to defeat renowned, professional human teams. As anyone who has mastered StarCraft knows, tactical adaptability is essential for gaining the upper hand.
Financial institutions operate in environments that are just as dynamic as the shifting levels of a video game. Market conditions, regulations, and data formats are in constant flux. AI must be able to adjust to new document structures, handle missing information, and navigate edge cases, much like AlphaStar adapts to an opponent’s unpredictable strategies.
From Pixels to Profits: Bringing Game Logic to Finance
Whether to streamline operations, mitigate risks, or make informed decisions in today’s data-intensive financial landscape, AI has the potential to fundamentally transform financial offerings, delivering personalised and evolving experiences that foster understanding and combine seamlessness with regulatory compliance.
Yet AI does not simply require more data from which to learn; it needs better data. Video games offer near limitless, pre-built, highly complex digital worlds where AI can test hypotheses, simulate scenarios, and refine decision-making models. By utilising these unique environments, AI is challenged to enhance its speed, accuracy, and efficiency.
The world of video games has many lessons we can learn when building AI, and given AI’s remarkable ability for transferable learning, it makes sense to leverage these pre-trained models to power essential financial workflows. It is more than just document processing; it is thinking, and the same intelligence that enables AI to defeat world champions in Dota 2 is now driving the next generation of financial AI solutions.
The next chapter of AI-driven finance will be shaped not just by technology, but by creativity. By embracing unconventional data sources such as the immersive complexity of video games, industry leaders will unlock new possibilities for personalisation, security, and customer engagement.
Richard Doherty, Head of Wealth & Asset Management, Publicis Sapient, on how asset managers must redesign their enterprise for AI-driven decision intelligence
The asset management industry is entering a structural inflexion point. The first wave of AI focused on improving productivity through copilots and automation. The next wave will fundamentally reshape how decisions are made, executed, and governed across the enterprise. This is not a technology upgrade. It is an operating model shift.
Despite significant investment, many firms remain trapped in fragmented AI experimentation. A majority are yet to realise meaningful economic returns from AI, not due to lack of capability, but due to a failure to redesign how intelligence is applied across the organisation. The gap between ambition and outcome is not a technology problem. It is a structural one.
From Automation to Decision Intelligence
The industry conversation has evolved. The question is no longer whether to adopt AI, but how to scale it across the enterprise. However, most firms are still approaching this challenge through the lens of automation, identifying tasks that can be executed faster or at lower cost. This delivers incremental value, but does not address the underlying constraint: the structure of decision-making within the organisation.
Traditional operating models are built around sequential workflows. Work moves from function to function – research, compliance, operations, distribution – each stage dependent on the one before. This creates latency, duplication, and fragmentation. Agentic operating models shift the focus from tasks to decisions.
Instead of asking “Which processes can we automate?”, leading firms are asking: “Which decisions can be augmented or owned by intelligent systems?”
This shift enables organisations to move from sequential workflows to parallel decision systems; from human-led analysis to AI-assisted reasoning; from periodic insight to continuous intelligence. The result is not a marginal improvement. It is a step-change in how the enterprise operates.
The Pressures Driving Change
This transformation is not happening in a vacuum. Asset managers face mounting structural pressures: margin compression driven by fee pressure and passive competition; rising operational complexity from regulation and product proliferation; and advisor capacity constraints that limit scalable growth. Agentic operating models directly address all three.
By automating complex workflows, rather than individual tasks, firms can significantly increase advisor and analyst capacity without proportional cost increases. Parallel decision systems reduce the time required to launch products, respond to market events, and deliver client insights. This compresses cycles from months to days. Continuous monitoring of guidelines, portfolios, and operational processes reduces exposure to regulatory breaches and operational failures.
These are not theoretical benefits. They represent measurable improvements in cost-to-serve, time-to-market, and operational resilience.
Not all Intelligence is the Same
To scale AI effectively, organisations must recognise that not all problems require the same type of intelligence. Enterprise AI operates across three distinct layers, and conflating them is one of the primary reasons AI initiatives fail to scale.
Deterministic systems execute predefined rules with complete consistency. They are essential for functions where there is zero tolerance for error – trade validation, settlement processing, and regulatory reporting. If a business outcome must be identical every time, deterministic logic remains the correct approach.
Predictive systems use historical data to forecast outcomes. Applied in areas such as portfolio risk modelling, fraud detection, and client churn prediction, they generate probabilities and insights, but they do not interpret context or make decisions independently.
Agentic systems operate where problems require interpretation, judgment, and contextual understanding – investment guideline interpretation, regulatory document analysis, portfolio insights, and client communication. These systems can reason across complex information, generate insights, and take action within defined boundaries.
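The three-layer distinction can be made concrete. The sketch below is illustrative only – a hypothetical router that maps the example tasks named above to the layer they belong to; the task names and the default-to-review rule are assumptions for illustration, not any vendor's API.

```python
# Illustrative sketch: routing business problems to the three
# intelligence layers. Task names are taken from the examples in the
# text; the mapping itself is a hypothetical assumption.

from enum import Enum

class Layer(Enum):
    DETERMINISTIC = "deterministic"  # zero tolerance for variability
    PREDICTIVE = "predictive"        # probabilistic forecasts
    AGENTIC = "agentic"              # interpretation and judgment

TASK_LAYER = {
    "trade_validation": Layer.DETERMINISTIC,
    "settlement_processing": Layer.DETERMINISTIC,
    "regulatory_reporting": Layer.DETERMINISTIC,
    "portfolio_risk_modelling": Layer.PREDICTIVE,
    "fraud_detection": Layer.PREDICTIVE,
    "client_churn_prediction": Layer.PREDICTIVE,
    "guideline_interpretation": Layer.AGENTIC,
    "regulatory_document_analysis": Layer.AGENTIC,
    "client_communication": Layer.AGENTIC,
}

def route(task: str) -> Layer:
    """Return the intelligence layer a task belongs to.

    Unknown tasks default to the agentic layer for review rather than
    silent deterministic execution, so nothing bypasses defined boundaries.
    """
    return TASK_LAYER.get(task, Layer.AGENTIC)
```

The point of making the mapping explicit is that conflating the layers – applying an agentic system where a deterministic one is required, or vice versa – is, as noted above, a primary reason AI initiatives fail to scale.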
The ‘Different but Valid’ Dilemma
A critical challenge in adopting agentic systems is understanding how they behave. Traditional software produces identical outputs. Agentic systems produce reasoned outputs.
This introduces what I call the ‘different but valid’ dilemma. An agent may take a different reasoning path from a human and arrive at a different, but still correct, conclusion. This variability is not an error. It is inherent to reasoning systems.
The real risk lies in hallucination – outputs that are not grounded in data or evidence. Managing this requires organisations to clearly define where variability is acceptable. All AI-driven processes sit on a spectrum: deterministic actions with no variability (trade execution), predictive actions with controlled variability (risk scoring), and agentic actions with higher variability (investment insights).
Leading firms design systems where agents perform reasoning, deterministic systems enforce execution, and humans retain oversight on high-consequence decisions. This balance enables both flexibility and control.
The Operating Model Shift
The most significant change is not technological; it is organisational. Traditional models are built on functional workflows. Agentic models are built on coordinated decision systems.
Consider what launching a new investment product looks like under each model. In a traditional model, it involves sequential handoffs between teams: compliance reviews the guidelines, operations configures the systems, and distribution drafts the client narrative. Each stage waits for the last.
In an agentic model, intelligent systems operate in parallel: compliance agents interpret guidelines, operations agents configure constraints, distribution agents generate client narratives, and governance agents validate outputs. This orchestration compresses timelines, reduces friction, and enables continuous decision-making. It represents a fundamental redesign of how work is performed.
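The contrast between sequential handoffs and parallel orchestration can be sketched in a few lines. The example below is a minimal illustration only: the four "agents" are hypothetical stand-in functions, not a real agent framework, and the product-launch workflow is an assumption drawn from the scenario above.

```python
# Illustrative sketch: compliance, operations, and distribution agents
# run in parallel rather than as sequential handoffs, with a governance
# agent validating the combined outputs. All four are hypothetical stubs.

from concurrent.futures import ThreadPoolExecutor

def compliance_agent(product: str) -> str:    # interprets guidelines
    return f"guidelines checked for {product}"

def operations_agent(product: str) -> str:    # configures constraints
    return f"constraints configured for {product}"

def distribution_agent(product: str) -> str:  # drafts client narrative
    return f"narrative drafted for {product}"

def governance_agent(outputs: list) -> bool:  # validates outputs
    return all(o.endswith("fund") or "for" in o for o in outputs)

def launch_product(product: str) -> bool:
    """Run the three specialist agents concurrently, then let the
    governance agent validate their combined outputs."""
    agents = [compliance_agent, operations_agent, distribution_agent]
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(lambda agent: agent(product), agents))
    return governance_agent(outputs)
```

Even in this toy form, the design choice is visible: no stage waits for the last, and validation is a distinct role rather than a final manual gate.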
Governance: the Foundation for Trust
Trust is the prerequisite for scaling AI. Without it, adoption stalls, not because the technology fails, but because the organisation cannot adequately explain or defend the decisions it makes.
Leading firms implement governance models built on three principles. First, explainability: every decision must be traceable and auditable. Second, authority boundaries: agents operate within clearly defined limits. Third, human oversight: high-consequence decisions remain under human control.
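The three principles can be encoded directly in an agent's decision path. The sketch below is a hypothetical illustration of that idea – the field names, the monetary authority limit, and the escalation labels are all assumptions, not a description of any firm's actual controls.

```python
# Illustrative sketch: the three governance principles as a decision
# gate. Every decision is logged with its rationale (explainability),
# agents act only below an authority limit (authority boundaries), and
# high-consequence decisions escalate to a human (human oversight).

from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    amount: float
    rationale: str                        # must be traceable and auditable
    audit_trail: list = field(default_factory=list)

AGENT_LIMIT = 10_000  # assumed authority boundary, for illustration

def govern(decision: Decision) -> str:
    """Log the decision for audit, then enforce the authority boundary."""
    decision.audit_trail.append(
        {"action": decision.action, "rationale": decision.rationale}
    )
    if decision.amount > AGENT_LIMIT:
        return "escalate_to_human"   # human oversight retained
    return "agent_may_proceed"       # within clearly defined limits
```

The ordering matters: the audit record is written before the boundary check, so even escalated decisions leave a traceable trail.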
Regulatory expectations will continue to evolve, but one principle remains constant: organisations must be able to explain how decisions are made.
Scaling AI is a Leadership Challenge
Executives must take a deliberate approach across four areas:
Define the intelligence model: map business problems to deterministic, predictive, or agentic systems.
Build the foundation: invest in data, infrastructure, and orchestration capabilities.
Redesign the operating model: shift from workflows to decision systems.
Implement governance to ensure transparency, control, and compliance.
Start with high-value use cases and expand rapidly across the enterprise. The firms that act now will establish a structural advantage in cost, speed, and decision quality. Those that do not risk being constrained by legacy operating models that cannot scale with the demands of modern markets.
The Question is not if, it is Who
The industry is not simply adopting new technology. It is redefining how decisions are made. The firms that succeed will not be those that deploy AI tools in isolation. They will be those who design the right form of intelligence for each problem, redesign their operating models around intelligent systems, and scale agentic capabilities across the enterprise.
This shift is already underway. The question is no longer whether it will happen. The question is which firms will lead, and which will be forced to follow.
Martijn Gribnau, Chief Customer Success Officer at Quant, on why Agentic AI will redefine financial services
A recent Google Cloud survey showed that only 13% of finance organisations are currently using agentic artificial intelligence. That number needs to rise – and it will, when you consider that 88% of financial leaders are already seeing ROI from generative AI. Agentic AI is the next evolution of artificial intelligence, and the most advanced the world has seen.
Agentic AI is not on the way. It is here, and it is already reshaping how forward-leaning financial institutions operate. In 2026, IT and finance leaders who want to build an insurmountable competitive lead must deploy agentic AI in every area where it can safely and effectively create value. The institutions that hesitate will find their business models under threat from familiar competitors and newcomers alike.
Reinvention of Core Processes
Agentic AI is poised to reinvent core financial processes. Bookkeeping, record maintenance, and period-end close are nearing complete automation. Month-end processes that once required late-night, stress-filled marathons will evolve into continuous, largely automated cycles. IT teams will no longer spend evenings on high alert waiting for failures.
This shift also frees IT leaders, finance teams, and operations functions from monotonous repetitive tasks. Instead of focusing on system uptime and manual reconciliation, they will collaborate with the C-suite on strategic initiatives that drive growth and revenue.
Understanding Why Adoption Is So Low
Despite the promise of Agentic AI, there is understandable caution. Some 80% of organisations have reported ‘risky behaviour’ from AI agents, and in the world of finance that is an alarming number. Finance is one of the most regulated, risk-averse sectors in the world. The fear of losing control remains the primary reason so few in the industry have embraced Agentic AI.
Loss of control and fear of catastrophic error
Financial leaders fear that an autonomous system could go ‘off script’, mis-route payments, misinterpret rules, or inadvertently cause compliance breaches. In finance, even small errors can trigger major financial or regulatory consequences.
Security and data privacy concerns
Large AI models require huge quantities of sensitive data. Organisations worry about breaches, cyber-attacks, or manipulation. An AI agent with improperly configured permissions could, in theory, execute fraudulent transactions or expose confidential customer information.
Bias and fairness risks
If AI agents make decisions using incomplete or fragmented data, they risk perpetuating or amplifying bias. At scale, biased decision-making can undermine customer trust and expose firms to legal and regulatory challenges.
Regulatory ambiguity and audit difficulty
Regulators are still determining how to govern agentic AI. Some organisations fear that early adoption could unintentionally violate rules or create future audit vulnerabilities.
These fears are legitimate, but not insurmountable.
Tackling the Adoption Barriers: A Practical Blueprint for Finance Leaders
To capitalise on Agentic AI’s immense potential, leaders must take a structured approach grounded in business value, security, and trust.
1. Start With Clear, Measurable ROI and Efficiency Gains
In finance, adoption accelerates when decision-makers see proof of value.
Start by automating repetitive processes. Agentic AI can handle tasks like data entry, reconciliation, invoice matching, and initial fraud checks faster and more accurately than humans. This leads to reduced operational overhead as automation lowers labour costs, shortens processing times, and reduces error rates. Demonstrating these savings through case studies or internal pilots is critical to changing minds.
AI agents can enable revenue growth by analysing huge data sets to identify new investment opportunities, optimise trading strategies, and generate personalised product recommendations. Each of these capabilities directly impacts top-line growth.
2. Strengthen Risk Management and Compliance Through AI
Agentic AI will improve risk management when deployed responsibly. This starts with real-time fraud detection. AI agents can monitor transactions continuously, identifying patterns that suggest fraud long before traditional systems would detect an anomaly.
Continuous monitoring is also incredibly helpful when it comes to compliance. AI agents excel at ensuring adherence to KYC and AML regulations. They can automatically maintain audit trails, identify missing documentation, flag anomalies, and escalate issues instantly.
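What such continuous monitoring looks like in practice can be sketched simply. The example below is an illustrative assumption, not a real compliance system: the required-document names and the transaction threshold are hypothetical placeholders for whatever a firm's KYC and AML rules actually require.

```python
# Illustrative sketch: a continuous-monitoring check that flags missing
# KYC documentation and anomalous transactions for instant escalation,
# rather than waiting for periodic batch review. All names and the
# threshold are assumptions for illustration.

REQUIRED_KYC_DOCS = {"proof_of_identity", "proof_of_address"}
AML_REVIEW_THRESHOLD = 10_000  # assumed threshold, for illustration

def review_account(account: dict) -> list:
    """Return a list of issues to escalate for one account."""
    issues = []
    missing = REQUIRED_KYC_DOCS - set(account.get("documents", []))
    for doc in sorted(missing):
        issues.append(f"missing document: {doc}")
    for txn in account.get("transactions", []):
        if txn > AML_REVIEW_THRESHOLD:
            issues.append(f"flag transaction for AML review: {txn}")
    return issues
```

Run over every account on every update, a check like this turns compliance from a periodic audit into a standing property of the system – the shift the column describes.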
Agentic AI can also enhance stress testing and scenario modelling. It can simulate complex market environments more dynamically than legacy tools, providing deeper insights into vulnerabilities and improving resilience. When presented in this context, agentic AI becomes a risk-reduction tool in the eyes of decision makers.
3. Directly Address Security and Trust Concerns
Trust is the cornerstone of adoption. Implement enterprise-grade security architecture that includes encryption, secure APIs, strict access controls, and continuous monitoring of agent behaviour. And, use explainable and transparent AI systems (XAI) so your finance teams understand the reasoning behind decisions. XAI helps provide interpretable outputs that support auditability and regulatory compliance.
Start small with a controlled, low-risk pilot. A proof-of-concept in a non-critical workflow helps teams understand the technology, gather evidence, and build internal support before scaling. Produce numbers-based reporting that speaks the language of the people who make the decisions. Show, don't just tell them, how agentic AI will move the business forward.
4. Highlight the Competitive Advantage
Agentic AI adoption is not just an efficiency upgrade. It is a competitive imperative. AI agents create faster innovation cycles by accelerating product development, service delivery, and operational improvements.
They also provide a superior customer experience. From instant account servicing to personalised financial recommendations, Agentic AI delivers the speed, personalisation, and convenience customers expect. And it scales: no matter how many people call in at the same time, an AI agent answers immediately. Agentic AI can reduce time spent on complex workflows that were traditionally handled only by people by up to 86%. That is a powerful way to get ahead of the competition.
5. Build Momentum Through Internal Champions
Adoption increases when respected leaders advocate from within. Mid-level managers, AI-literate staff, or members of the C-suite who understand the technology can serve as champions. Use their advocacy to drive alignment, communicate benefits, and counter misconceptions. The more people from different departments and levels of the organisation talking up the technology, the more likely you are to get buy-in.
Your Time is Now
Agentic AI will redefine financial services. The organisations that act today will build capabilities, insights, and competitive advantages that late adopters will not be able to replicate. Finance leaders must begin asking where agentic AI can support their business, where it can remove friction, where it can unlock growth, and where it can transform operations. The firms that act now will lead the industry. Those that hesitate will not get the chance to catch up.
The only remaining question for finance organisations is not whether agentic AI will change the industry, but how quickly they choose to deploy it.
Dr Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, on whether our risk and regulatory frameworks and institutional cultures can keep pace with Agentic AI
Within the next couple of years, Agentic AI is likely to progress from early stages of operation to be fully embedded within systems. Its expansion will be subtle rather than spectacular. It will integrate steadily into enterprise platforms, logistics networks, compliance workflows, cybersecurity operations centres and executive decision-support tools. Processes will move faster, operating expenses will decline and performance indicators will trend upward.
Yet these visible improvements mask a deeper challenge. The regulatory exposure, data governance pressures and erosion-of-trust risks associated with Agentic AI are being misjudged.
Unlike earlier AI applications designed primarily to generate outputs – whether text, imagery, or predictive insights – agentic systems are built to act. They sequence decisions, draw from multiple data environments, initiate consequential processes and function at scale with differing levels of human supervision. In sandbox environments this can seem contained and controllable. Over extended periods in live environments, however, sustained oversight, traceability and effective governance become significantly more complex.
Evolving Operational Complexity
There are two key challenges that businesses must address.
First, how do organisations monitor what agentic systems are doing once deployed? These systems evolve through updates, integrations and retraining and they interact with new data environments.
Second, how do you ensure responsible behaviour throughout the lifecycle? Regulators, policymakers and customers will likely expect firms to shift from compliance assurance to risk assurance and demonstrable evidence of trust and transparency.
The prevailing assumption is that human oversight will mitigate these risks. Human in the loop or human over the loop has become the default reassurance. In practice, however, that assumption breaks down far faster than many anticipate.
When a system works 95 per cent of the time, human reviewers limit their scrutiny. Behavioural science tells us that automation bias and complacency occur when automated systems are high-performing. Employees often become validators of AI outputs rather than critical examiners. The diligence gap widens gradually and then suddenly.
Facing Up to Difficult Questions
How do you incentivise employees to remain diligent checkers when the system mostly ‘works’? And how much time does effective oversight actually require? True review is not a cursory glance at a dashboard. It involves interrogating assumptions, validating inputs, checking context and assessing downstream consequences. In many cases, meaningful oversight may take nearly as long as performing the original task manually. When checking becomes more costly than doing the job yourself, pressure to ‘trust the system’ intensifies.
And what happens to accountability when oversight exists on paper but not in practice? Governance documentation may show layered review structures, escalation pathways and audit processes. Yet if humans are functionally disengaged, responsibility becomes dispersed. When errors surface, organisations may struggle to attribute fault – was it the model design, the data, the integrator, the operator or the reviewer who signed off without fully scrutinising?
Regulators are only beginning to grapple with these realities. In jurisdictions such as the European Union, the EU AI Act introduces risk-based obligations, documentation requirements and human oversight provisions. These are important steps; however, the operationalisation of those requirements in dynamic, agentic environments remains untested at scale. Compliance on paper will not automatically translate into resilient governance in practice.
Addressing the Trust Challenge
Beyond regulatory exposure, there is a broader trust challenge emerging.
As Agentic AI systems scale across industries, they will generate vast volumes of automated outputs – reports, communications, risk assessments, content, decisions and transactions. If errors or manipulations spread through interconnected systems, confidence in digital outputs may erode.
In geopolitically sensitive contexts, this has profound implications. Agentic systems interacting with external data sources could amplify disinformation, introduce biased datasets or make decisions based on manipulated inputs. The speed of automation may outpace the speed of verification. Trust, once diluted, is difficult to restore.
Data protection risks will also intensify. Agentic systems frequently require broad access privileges to perform tasks effectively. They may access internal databases and personal data and interact with third-party platforms. Each interaction creates potential exposure points. A single misconfiguration or prompt injection attack could trigger cascading consequences across systems.
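The misconfiguration risk described here is, in part, an argument for least-privilege, deny-by-default permissioning of agents. The sketch below illustrates that principle only – the agent names and permission strings are hypothetical assumptions, not a reference to any real platform's access model.

```python
# Illustrative sketch: least-privilege access control for AI agents.
# Each agent has an explicit scope; anything outside it, including any
# unknown agent, is denied by default. Names are illustrative only.

AGENT_SCOPES = {
    "reporting_agent": {"read:ledger"},
    "payments_agent": {"read:ledger", "write:payments"},
}

def authorise(agent: str, permission: str) -> bool:
    """Deny by default: an agent may act only within its configured scope."""
    return permission in AGENT_SCOPES.get(agent, set())
```

Deny-by-default does not eliminate prompt-injection risk, but it bounds the blast radius: a manipulated reporting agent still cannot write payments.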
The next phase of AI adoption will not simply amplify productivity: it will amplify regulatory, legal and reputational risk. This moment therefore demands serious scrutiny before agentic AI becomes deeply embedded in business infrastructure.
The Moment for Action has Arrived
So, what should organisations be doing now?
To begin with, organisations need to look past superficial, tick-box compliance. Effective governance cannot live solely in policy documents – it must function in day-to-day operations. This means investing in continuous monitoring capabilities, robust audit trails and real-time anomaly detection tailored specifically to Agentic AI behaviours.
In parallel, incentive structures should be redesigned. Meaningful human oversight will not happen if it is treated as secondary to speed or output. If employees are expected to provide meaningful review, organisations must allocate time, training and authority accordingly. Performance metrics should reflect risk management responsibilities, not just output rate.
Clear lines of accountability are equally important. Senior leadership and boards should determine who carries ultimate responsibility for outcomes produced by agents. Where third-party vendors are involved, responsibilities must be contractually and operationally defined. Incident response mechanisms should be rehearsed in advance, rather than presumed to work when pressure is high.
Expertise must also be integrated across functions. Legal, risk, compliance, cybersecurity, data protection and operational teams should be engaged from the outset. Deploying Agentic AI is not simply a technical upgrade – it reshapes the organisation’s risk profile.
Finally, resilience demands deliberate stress-testing. Leaders should examine not only pathways to success but how models fail at scale. How would the organisation respond if a system update embedded systemic bias, if an integration vulnerability enabled unauthorised activity or if automated actions eroded customer confidence? Rigorous scenario exercises, however uncomfortable, are essential to building genuine preparedness.
As Agentic AI Advances, Risk Management Should Match its Pace
None of this is an argument against adoption. Agentic AI presents meaningful productivity improvements and the potential for sustained competitive differentiation. Organisations that deploy it with discipline and foresight may secure a measurable advantage. The danger lies not in adoption itself, but in pursuing acceleration without knowing the risks and putting the right guardrails in place.
The coming two years are critical for businesses. Before these systems become deeply embedded in core processes, organisations have an opportunity to shape the control environment around them. However, once agentic systems are fully embedded, retrofitting controls will be far more difficult and costly. Leaders must therefore treat this period as a design phase for oversight, not merely a race for competitive advantage.
Agentic AI is advancing rapidly. The defining question is whether our risk and regulatory frameworks and institutional cultures can evolve just as quickly.
As companies pour billions into developing their own AI tools, Fayola-Maria Jack, Founder and CEO of Resolutiion, argues that many are forgetting what worked well in the early tech era, confusing ownership with innovation
Back in the very early days of computing, organisations rarely hesitated to buy the hardware and software they needed to modernise. Now we’re deep into the AI age. Many organisations are deciding the best approach to adopting the technology is to take building it into their own hands.
Many of the more traditional companies, like big banks, have publicly stated that they are developing their own AI tools in-house. Meanwhile, corporate investment in AI reached £191 billion ($252.3 billion) in 2024 and is only likely to have risen since.
Yet, the challenges of internal AI development are becoming abundantly clear. A recent report from MIT found that 95% of AI pilot projects failed to deliver any discernible financial savings or uplift in profits. It also found companies purchasing AI tools succeed about 67% of the time. Meanwhile, internal builds succeed only one-third as often.
Why do companies feel they need to build their own AI tools?
Those statistics alone show buying AI from specialised vendors and building partnerships is often the wiser choice. But with a handful of traditional businesses deciding to lean the other way, it raises the question: why are these companies not only initially choosing the in-house route, but also persisting with it despite low success rates?
The instinct to ‘build’ is rooted in legacy thinking – and to some extent, a naivety around what makes AI solutions special. Traditional enterprises have long equated ownership with control: control over systems, data, and perceived competitive advantage.
When AI entered the scene, many executives applied that same logic, assuming that building in-house put them at the heart of innovation. But this overlooks a fundamental truth unique to AI: it is not another IT system you can own and stabilise. It evolves exponentially, not linearly. It demands constant retraining, rapid iteration, and deep specialisation – all at odds with the traditional corporate IT environment, which is built for stability and compliance, not experimentation and speed.
Are companies really investing in innovation?
Another common belief is that buying amounts to conceding leadership to outsiders, while building feels safer politically, signalling 'we're investing in innovation'. Ironically, that safety is often an illusion that leads to slower progress and higher long-term cost. There is a deeper irony still when the 'in-house' talent is itself outsourced to India, or another foreign jurisdiction, on the basis of cheap labour.
The exact same dynamic plays out internally, too. AI initiatives are career-defining projects for senior technology leaders and they attract budget, visibility, and prestige. Once a build programme is launched, it’s politically difficult to pivot, even in the face of poor performance. As a result, the build strategy often survives by narrative rather than by evidence.
Underpinning all of this is the institutional belief that 'our data is unique' – that it will deliver proprietary insight and competitive advantage. In reality, most internal data is messy, siloed, and outdated. It reflects years of practices that are often misaligned with best practice, and therefore should never be used to train AI. Instead of building capability, many organisations end up building complexity.
Increased Caution in Regulated Sectors
Alongside these misbeliefs, regulatory caution and data residency also play into the decision to build in-house; especially in regulated sectors like finance, healthcare, and government. Here, enterprises typically believe that adopting third-party AI tools may expose sensitive data to external environments they cannot fully control. Perhaps this is because data protection laws have created a heightened sensitivity to where data is processed and how it’s used to train models.
Take banks as an example – historically they have viewed data as a fortress, a core asset to be guarded. Their culture of confidentiality and regulation makes them instinctively cautious about sharing information externally. Add to this the fact that large banks already have substantial internal technology infrastructures and budgets, and building seems logical on paper. The truth, however, is that building internally doesn’t eliminate compliance risk, but often amplifies it. This is because companies take on the burden of securing systems, updating controls, and managing ethical frameworks themselves.
On the other hand, buying from specialist providers means adopting a system that has been engineered for compliance at scale. Purchasing doesn't dilute compliance, it accelerates it, because you inherit the expertise and validation of teams who do this full-time. In fact, most reputable AI vendors now far exceed enterprise compliance standards, designing privacy-preserving architectures that mitigate these risks far more effectively than in-house teams can.
Competitive Edge
The financial sector’s competitive edge increasingly lies not in owning the algorithms, but in applying them better and faster. Challenger banks and fintechs have embraced this: they buy tools – incorporating anti-money-laundering and fraud-detection platforms into model-risk management protocols aligned with regulatory expectations – they integrate, and they move rapidly. Traditional banks, by contrast, are still in a transitional mindset, modernising legacy systems while trying to preserve control. That is why their build programmes are often more about transformation theatre than tangible AI capability, and will ultimately see them fall further behind.
Underestimation of AI’s Lifecycle Cost
Beyond the issues of legacy thinking, poor data quality and compliance risk, companies attempting to build in-house also face a number of additional challenges when it comes to the talent, time, and technical debt needed.
Talent: True AI expertise is scarce and expensive. Competing with the open market for top data scientists and ML engineers is unsustainable for most enterprises.
Time: AI doesn’t stop evolving while your internal team builds. By the time a prototype is ready, the underlying technology stack may have already advanced.
Technical debt: Maintaining models, retraining on new data, and ensuring explainability and auditability over time all demand continuous investment.
Most companies underestimate this lifecycle cost by an order of magnitude. Add to that the reputational risk of bias or error (especially when deploying AI in customer-facing contexts) and the true cost of internal builds can spiral quickly.
A Change in Mindset is Needed
As more of these challenges surface, we should see an uptick in companies moving towards buying AI rather than building it – and it’s a pattern that’s thankfully already emerging. As AI becomes infrastructure, not novelty, enterprises will mirror the software evolution of the 1990s and 2000s: moving from bespoke builds to modular adoption.
The early adopters that buy today will pull ahead dramatically because they can focus on application and differentiation, not on maintenance. In time, the ‘build’ approach will be seen much like writing your own word processor in 1995: a costly distraction from real innovation.
Organisations need to shift from ownership to orchestration. This requires humility, recognising that innovation now happens outside corporate walls, and confidence – trusting that your value lies in how intelligently you deploy technology, not in whether you wrote its source code. Culturally, companies need to redefine ‘strategic advantage’ as agility plus insight, not possession plus control. AI isn’t an asset you own; it’s a capability you cultivate.
In simpler terms, the companies that thrive in the AI age will be those that treat AI as an ecosystem, not an ‘ego system’.
The Index shows industry stalwarts Visa and Mastercard outpacing their peers and delivering tangible AI outcomes thanks to early investments in talent and innovation.
Behind them, PayPal (3rd), American Express (4th), Stripe (5th) and Block (6th) emerge as the challengers. They outperformed the Index average, but are yet to match the leaders’ scale of deployment and outcome disclosure.
AI Moving from Experimentation to Deployment
Over the past two years, the 12 payments companies in the Index have publicly documented nearly 100 AI use cases, underscoring how rapidly AI has moved from experimentation to deployment across core payment workflows. It is a landscape defined by constantly evolving fraud threats and rising customer expectations for faultless, high-speed processing. Evident notes that nearly a third of these use cases disclose measurable outcomes, including efficiency gains, risk reduction and revenue uplift.
“Payments firms adopted AI out of necessity long before many other industries – their business models demanded it. Companies who invested early – like Visa and Mastercard – have gained a clear advantage over their peers, both in AI capabilities and the value their deployments are realising,” says Alexandra Mousavizadeh, Co-Founder and Co-CEO of Evident.
Talent, Innovation, Leadership and Transparency
The Evident AI Index for Payments provides the most comprehensive independent benchmark of AI maturity across the industry. It is based on publicly available data around four pillars critical to successful AI deployment: Talent, Innovation, Leadership and Transparency.
According to Evident, Visa’s lead is based on consistent performance across all four pillars, and on the clearest evidence that AI is institutionalised across its core transaction network. Visa and Mastercard show maturity in areas such as fraud detection, cybersecurity and network-level risk reduction. Visa stands out for the scale and measurable impact of a handful of large, multi-year deployments focused on the integrity and security of its entire ecosystem.
“Mastercard shows strong evidence of scaled deployment and quantified performance improvements, particularly in areas like fraud detection and AML tracing,” continued Mousavizadeh. “But what sets Visa apart is the degree to which the company is demonstrating impact at scale over multiple years, from applications of AI across its operations and network. It signals a shift from individual use cases to AI as institutional capability.
“What the Index also reveals is the importance of consistent innovation to maintain competitive advantage. With relatively nascent industry players like Stripe and Block performing well – and showing their AI potential reflected in their valuations – the Index leaders cannot afford to drop off the pace.”
AI Impact on Show, but ROI Reporting Scarce
Firms in the top half of the Index account for nearly 80% of use case disclosures (with the top three providing a significant 54%), highlighting the link between AI maturity and the ability to scale deployment.
Visa performed strongly in this regard. For instance, its latest threat report disclosed that advanced AI/ML blocked nearly 85% more fraud than a year earlier. Similarly, when Mastercard incorporated Gen AI technology into its Decision Intelligence solution, initial modelling showed the AI enhancements improved fraud detection rates by an average of 20%, and by as much as 300% in some instances.
However, Evident notes that no payments company has disclosed realised or projected ROI across all enterprise or group-wide AI activities.
“The Index leaders are locked in a tight race at a point when the thinking around corporate AI adoption is shifting – away from chasing the biggest models to building technologies that solve real operational problems efficiently,” commented Annabel Ayles, Co-Founder and Co-CEO of Evident. “Against this backdrop, the absence of ROI disclosure – or any group targets for AI ROI – is increasingly conspicuous. One in five banks now report on group-level AI returns, yet payments firms have yet to quantify the aggregate impact of their AI investments. If this expenditure is to keep being justified, the market will sooner or later demand clearer evidence of value.”
A Hotbed of AI Talent
The Index also reveals that the average payments company has over 30% more AI-focused workers than other financial institutions, despite substantially smaller employee numbers.
The three major card networks – Visa, Mastercard and American Express – account for nearly half (48%) of the payments industry’s AI talent stack. PayPal is currently the biggest employer, accounting for nearly a fifth (18%) of that AI talent.
PayPal’s AI talent has allowed it to build proprietary models tightly integrated with its data and workflows. Consequently, it accounts for nearly a quarter (24%) of the 98 AI use cases documented across the industry over the past two years – 1.7x as many AI applications as detailed by Visa or Mastercard.
“AI maturity is no longer defined by talent volume alone, and the Index leaders combine AI development, data engineering and product capabilities in ways that allow them to move rapidly from model experimentation to production deployment,” concluded Ayles.
The Evident AI Index Methodology
The Evident AI Payments Index ranks the AI maturity of 12 of the largest payment networks and processors across the globe. These 12 entities were selected as the largest payments companies with a minimum of $2B in annual revenue.
It is an independent, ‘outside-in’ assessment based exclusively on publicly available information. Each company was assessed against 60+ individual indicators, organised into four pillars critical to successful AI deployment at scale: Talent (45% weighting), Innovation (30%), Leadership (15%) and Transparency of Responsible AI activity (10%).
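The four-pillar weighting lends itself to a short worked example. The sketch below is illustrative only: the pillar names and weights come from the Index methodology as described above, but the scoring function and the per-pillar example figures are invented, not Evident’s actual model.

```python
# Illustrative composite score using the Index's published pillar weights.
# The per-pillar scores below are made-up example values, not real data.
WEIGHTS = {
    "talent": 0.45,
    "innovation": 0.30,
    "leadership": 0.15,
    "transparency": 0.10,
}

def composite_score(pillar_scores: dict) -> float:
    """Weighted average of pillar scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[p] * pillar_scores[p] for p in WEIGHTS)

# Example: a hypothetical company strong on talent, weaker on transparency.
example = {"talent": 90, "innovation": 70, "leadership": 60, "transparency": 40}
print(composite_score(example))  # 0.45*90 + 0.30*70 + 0.15*60 + 0.10*40 = 74.5
```

The heavy Talent weighting means a firm with deep AI hiring can rank well even with thinner disclosure, which is consistent with the talent findings discussed later in the report.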
Data is gathered through a combination of extensive manual research and proprietary machine learning tools that extract key data points from company reporting and public disclosures (including press releases, investor relations materials, group-level website pages, group-level social media accounts, and media interviews with senior leadership), as well as a range of third-party data platforms.
Further information on the methodology of the Index can be found at evidentinsights.com
Adam Spearing, VP of AI GTM EMEA at ServiceNow, on why those that invest in AI foundations now will shape their operating models on their own terms
Much of the debate around AI still centres on pilots: which tools to test, which use cases to prioritise, which risks to manage. Executive teams commission proofs of concept, establish governance forums and assess compliance exposure. Far less scrutiny is applied to the consequences of waiting.
Traditional technical debt is familiar territory for CIOs. It stems from shortcuts, ageing platforms and deferred upgrades. It builds over time and is eventually addressed through structured modernisation programmes. It is visible in legacy code, brittle integrations and manual workarounds, and it appears on risk registers and capital plans. Leaders know how to describe it and, in principle, how to resolve it.
Forward-looking technical debt is different. It arises when organisations postpone the foundational changes needed for new ways of working. It is not created by past expediency, but by present hesitation. And it accumulates faster.
AI Adoption
In the context of AI, the effects are already emerging. Each quarter spent debating readiness instead of building it increases the distance between legacy operating models and AI-enabled competitors. As models improve and user expectations shift, that distance widens, reshaping competitive baselines. What begins as a modest capability gap can harden into structural disadvantage.
While companies debate whether to adopt AI, the margin for strategic choice narrows. Many organisations frame AI adoption as a binary decision: adopt now or wait until the technology matures further. In practice, the room for discretion is smaller than it appears. Time spent stalled in pilots or governance loops increases the gap between internal capability and market expectation.
More than 75% of organisations are expected to face moderate to severe AI-related technical debt in 2026, predicts Forrester. The issue will not simply be missed efficiency gains. It will be structural misalignment between how their systems operate and how work is increasingly done.
This misalignment often appears gradually. Teams rely on manual data preparation because underlying systems cannot support automation. AI tools are layered onto fragmented architectures and deliver inconsistent outputs. Employees experiment with external tools because internal platforms cannot provide the functionality they need. Each workaround creates further fragmentation.
Over time, these patterns compound. Integration backlogs expand. Security and risk teams struggle to enforce consistent controls across proliferating tools. Data governance becomes reactive rather than designed. What began as caution begins to constrain strategic options.
The AI Paradox
Here’s the paradox: organisations are either rushing into unsuccessful AI pilots that create immediate technical debt, or they’re avoiding AI entirely and creating forward-looking debt through inaction. Both paths lead to the same place – systems that can’t support the future of work.
AI isn’t just another technology layer to bolt onto existing infrastructure. It’s fundamentally changing how people interact with systems and how work gets done. Increasingly, AI becomes an interface through which employees access information, execute tasks and navigate processes. When AI becomes the interface – not just for customers but for employees navigating their daily tasks – organisations without AI-ready foundations will find themselves unable to compete on speed, efficiency, or experience.
The companies that hesitate aren’t just missing out on automation benefits today. They’re building a deficit that grows exponentially as AI capabilities advance. Each new model release, each competitor’s successful implementation and each shift in customer expectations adds to the debt, and each significant model improvement raises the performance benchmark across the market. Unlike legacy systems that degrade slowly, this gap accelerates.
From Avoidance to Advantage
Breaking free from forward-looking technical debt requires a fundamental mindset shift. This isn’t about buying more technology or launching more AI pilots. It’s about creating the conditions for sustainable AI adoption that builds capability rather than complexity.
The organisations succeeding with AI aren’t the ones with the biggest budgets or the most aggressive rollouts. They’re the ones that took a deliberate, phased approach to ensuring their data, systems, and culture could support AI at scale. They treated readiness as an operational discipline rather than an innovation side project. They understood that AI adoption isn’t a destination, it’s a continuous capability that requires solid foundations.
This starts with honest visibility into current technology estates. Leaders must understand what systems can realistically support AI workloads, where data quality creates barriers, and which processes are ready for automation. Only then can organisations introduce AI incrementally, modernising systems where necessary rather than forcing new capabilities onto brittle foundations. Without that clarity, AI risks being layered onto structural weaknesses.
Modernisation therefore becomes targeted. Consolidating fragmented workflows, standardising data models and reducing unnecessary integration points increase the feasibility of scaling AI across multiple use cases. Early deployments focused on well-defined processes with clear data lineage can build internal confidence while strengthening governance practices.
Clear Debt to Stay Competitive
Forward-looking technical debt does not appear on a balance sheet. It shows up in slower product cycles, manual workarounds, integration backlogs and frustrated employees. It surfaces when competitors deliver AI-assisted services as standard and customers begin to expect the same everywhere. By the time these symptoms are visible, the underlying gap has already widened.
Timing therefore becomes a strategic variable. AI capability builds cumulatively: early investment in clean data, modern workflows and interoperable systems creates a base for continuous improvement. Each iteration becomes easier, faster and more reliable. Those that delay face the opposite trajectory: increasing complexity, rising retrofit costs and shrinking room for strategic choice.
The real issue is not adoption in principle. It is whether leadership teams are prepared to treat readiness as urgent rather than optional.
Reducing forward-looking technical debt requires acting before competitive pressure dictates terms, aligning technology modernisation with operating model reform, and accepting that disciplined progress now is less risky than accelerated catch-up later.
AI adoption will continue irrespective of individual organisational hesitation. Vendors will continue to refine their offerings. Regulators will clarify expectations. Customers and employees will adjust their behaviours. Those that invest in foundations now will shape their operating models on their own terms. Those that delay risk reacting to a competitive gap that is already commercially significant.
Adonis Celestine, Senior Director – Global Automation Practice Lead at Applause, on the rise of AI and why, in a world of autonomous systems, trust is the ultimate competitive advantage
Every generation of technology has its defining disruptor – the force that rises above the rest and reshapes its environment. In the mid-2000s, Marc Andreessen captured the moment when digital systems began transforming entire industries with his famous line: “software is eating the world”. At the time, software was the apex predator of technology, defining how value was created and delivered. Today, that hierarchy has shifted. Artificial Intelligence (AI) has reached the top of the technology food chain. Not just accelerating software, but fundamentally reimagining how it’s created, tested, and deployed.
AI is no longer just a tool; it is a co-creator. Developers now rely on AI daily to translate high-level intentions into working code – a practice sometimes known as ‘vibe coding’. Tasks that once took months can now be delivered in weeks, days, or even minutes. The pace is exhilarating, but it introduces challenges that traditional quality assurance (QA) practices were never designed to meet. And if QA cannot keep up, speed will come at the cost of reliability and trust.
When AI Outpaces QA
Conventional QA depends on predictability. Features are defined, code is written, and test cases verify the expected behaviour. However, AI disrupts this traditional model. Generative and Agentic AI systems don’t simply follow instructions; they interpret them. These systems adapt to context, learn from data, and can produce different outputs from the same prompt, influenced by factors such as training, temperature settings, and the model’s probabilistic nature. With development cycles now measured in minutes, traditional QA handoffs are often impossible.
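The variability described above can be made concrete with temperature-scaled sampling, the mechanism by which the same prompt yields different outputs. This is a generic illustration of the technique, not any particular vendor’s implementation; the candidate tokens and their scores are invented.

```python
import math
import random

def sample_with_temperature(logits: dict, temperature: float, rng: random.Random) -> str:
    """Sample one token: low temperature sharpens the distribution
    (near-deterministic), high temperature flattens it (more varied)."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    # Softmax with a stability shift, then weighted random choice.
    exp = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exp.values())
    r = rng.random() * total
    for tok, w in exp.items():
        r -= w
        if r <= 0:
            return tok
    return tok

# Invented next-token scores for one and the same prompt.
logits = {"approve": 4.0, "review": 2.0, "reject": 0.5}
rng = random.Random(0)
cold = {sample_with_temperature(logits, 0.1, rng) for _ in range(50)}
hot = {sample_with_temperature(logits, 5.0, rng) for _ in range(50)}
print(cold)  # low temperature: effectively always the top token
print(hot)   # high temperature: several different tokens appear
```

Fifty identical “test runs” at low temperature collapse to one answer, while the same fifty at high temperature produce a spread – which is exactly why expected-behaviour assertions written for deterministic software break down here.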
This has led to a growing gap between speed and certainty. Teams can ship products faster than ever, yet it’s becoming much more difficult to ensure consistent, ethical, or safe behaviour in real-world conditions. Enterprises are already experiencing AI-powered features that fail in ways conventional testing could not anticipate, undermining trust and creating new risks.
Hidden Risks in Autonomous AI Workflows
AI-driven development introduces blind spots that traditional QA often struggles to detect. One key issue is context drift. This occurs when AI performs well in controlled testing environments but behaves unpredictably when faced with edge cases, cultural differences, or ambiguous inputs. For example, a customer-facing chatbot might pass functional tests but produce biased or misleading responses when deployed on a global scale.
Another challenge is compound autonomy. When multiple AI agents are involved in code generation, testing, and deployment, the system may begin to validate its own processes. Without human oversight, errors can propagate unnoticed. An AI agent might ‘approve’ certain behaviours because they statistically align with previous outputs, rather than because they meet user or business expectations.
Invisible change also complicates QA efforts. AI models continuously evolve through processes like retraining, prompt tuning, or data updates. A feature that worked flawlessly last week may function differently today. Traditional regression testing often fails to capture these subtle but significant shifts.
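One minimal defence against this kind of invisible change is snapshot-style regression on model outputs: store answers to a fixed set of prompts and flag any that drift beyond a similarity threshold. The sketch below is a generic illustration; the textual similarity measure and the 0.8 threshold are arbitrary choices for demonstration, and a real pipeline might compare embeddings or task-specific metrics instead.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def detect_drift(baseline: dict, current: dict, threshold: float = 0.8) -> list:
    """Compare today's answers against stored baseline answers for fixed
    prompts; return the prompts whose answers have drifted."""
    return [
        prompt for prompt, old in baseline.items()
        if similarity(old, current.get(prompt, "")) < threshold
    ]

# A feature that "worked last week" now answers differently after retraining.
baseline = {"refund policy?": "Refunds are issued within 14 days of purchase."}
current = {"refund policy?": "We do not offer refunds on digital goods."}
print(detect_drift(baseline, current))  # flags the drifted prompt
```

Run on a schedule rather than per release, this catches the shifts that conventional regression suites miss, because the trigger is a model or data update, not a code change.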
Most critically, AI workflows blur the lines of accountability. When failures occur, it can be unclear whether the issue lies with the model, the data, the prompt, the integration, or the deployment pipeline. QA teams must continuously validate not only the outputs but also the decision-making processes behind them.
Redefining Quality and Trust in an AI World
Slowing AI development is neither practical nor beneficial. Organisations must redefine quality in a probabilistic, AI-driven environment. Quality now extends beyond just correctness. It involves ensuring that systems operate reliably in real-world scenarios. This shift requires moving from static test cases to continuous, adaptive validation.
QA teams must evolve into ‘quality intelligence’ teams, broadening their responsibilities from simply detecting defects to actively fostering trust in AI systems. AI-assisted testing is crucial in this process. It can automatically generate extensive test cases by analysing requirements and code patterns. It can predict defects using machine learning, detect visual inconsistencies across devices, and produce realistic, privacy-compliant synthetic test data. Additionally, Agentic AI can autonomously maintain and self-heal test scripts, adjusting their logic as underlying code or user interfaces change.
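The self-healing idea can be illustrated with a locator-fallback pattern: when the primary selector stops matching after a UI change, the runner tries ranked alternatives and records which one worked. This is a toy sketch against an in-memory ‘page’, not a real browser driver; the selectors and page contents are invented.

```python
def find_element(page: dict, locators: list) -> tuple:
    """Try ranked locators in order; return (element, locator_used).
    A self-healing runner would persist the working locator for the
    next run instead of failing the test outright."""
    for locator in locators:
        if locator in page:
            return page[locator], locator
    raise LookupError(f"No locator matched: {locators}")

# A UI refactor renames the submit button's id between releases.
old_page = {"#submit-btn": "<button>Submit</button>"}
new_page = {"#checkout-submit": "<button>Submit</button>"}

locators = ["#submit-btn", "#checkout-submit", "button[type=submit]"]
element, used = find_element(new_page, locators)
print(used)  # the test 'healed' by falling back to the second locator
```

The human-oversight point still applies: a healed locator should be surfaced for review, since silently accepting the fallback is precisely the compound-autonomy risk described earlier.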
Furthermore, AI systems themselves need rigorous evaluation. Techniques such as red teaming, rainbow teaming, benchmarking, bias and ethics checks, and drift monitoring are essential to help promote AI’s reliability, fairness, and alignment with business objectives.
Human oversight is critical. While AI can scale testing and automate numerous tasks, critical thinking, risk assessment, and judgment cannot be fully delegated. Humans must guide, validate, and refine AI outputs to maintain both quality and trust.
Emerging Roles and Responsibilities
AI is reshaping professional roles. Developers are increasingly using AI by instructing machines through natural language rather than traditional programming methods. This shift has led to the emergence of new roles such as AI agent orchestrators, prompt engineers, QA specialists for autonomous systems, and governance leads who ensure ethical and auditable AI practices.
These roles are essential for maintaining human oversight. Developers and testers must experiment, validate, and continuously refine AI outputs while being cautious not to rely too heavily on AI.
Trust in the Age of the Apex Predator
As with any apex predator, AI has changed the rules of the game. Software once “ate the world” by making systems programmable. Today, AI “eats software” by making it autonomous, capable of creating, modifying, and deploying autonomously. In this new environment, speed is no longer the ultimate measure of success; trust is. Systems may move fast, but without rigorous QA, ethical oversight, and human judgment, they may not be reliable, accurate or ethical.
The new apex predator demands adaptation. Organisations navigating this AI-driven era must embrace automation and innovation, but pair it with strong quality practices, governance, and continual human oversight. Only by combining these elements can companies ensure their AI systems are not only fast and efficient but also dependable and aligned with business objectives. In a world of autonomous systems, trust is the ultimate competitive advantage.
Tom Lanaway is Head of Innovation at Connective3, a global brand & performance marketing agency. He leads a team building AI-powered marketing measurement and marketing intelligence tools.
Most businesses are asking the wrong question about AI. They’re asking, ‘Which AI tool should we use?’ They should be asking: ‘Can our people actually think with AI?’
I run an innovation team at a marketing agency. We’ve spent the last two years building AI into everything we do, including measurement, content, strategy, and automation. We’ve got lots of tools, 18 different products to be precise.
Below is what I’ve learned: the tools aren’t always the bottleneck; sometimes the skills are.
The Tennis Racket Problem
A colleague put it perfectly recently: “AI is a tool. Think of it as if you’ve got a smart assistant sat there. But it’s saying, I’m going to give you the best tennis racket, now go and play in a Grand Slam.”
That metaphor stuck with me because it captures something the artificial intelligence hype cycle keeps missing. We’ve convinced ourselves it democratises everything. That anyone can now do anything. That the barrier to entry has collapsed. And there’s truth in that, but it’s incomplete. The barrier to access has collapsed, but the barrier to effectiveness hasn’t. Give someone GPT-4, and they can generate text. Give them the best tennis racket, and they can hit a ball. But the gap between hitting a ball and playing at Wimbledon is still vast. Most organisations are stuck in that gap, wondering why their AI investments aren’t transforming anything.
Three Skills That Aren’t Always Present
When I look at where teams struggle and where I see the same patterns across other businesses, three specific competencies keep showing up as gaps:
1. Problem Decomposition
Not everyone knows how to break down complex work into chunks that AI can help with. This sounds simple, but it isn’t. Most people approach AI with whole tasks such as ‘Write me a marketing strategy’, ‘Analyse this data’ or ‘Create a campaign’. AI will then produce something, but it’s usually mediocre, because the person hasn’t done the harder work of understanding which specific parts of that task AI is good at, and which parts need human judgment. The skill isn’t using AI; it’s knowing what to give it. Someone who is brilliant at their job but can’t decompose problems will get worse results from AI than someone more junior who understands how to break work into the right pieces.
2. Output Assessment
How do you know if what AI gives you is good? This is where intuition becomes essential and it’s also where the ‘AI replaces expertise’ narrative falls apart. You need domain knowledge to evaluate AI output. You need enough experience to feel when something’s off, even if you can’t immediately articulate why. You need the pattern recognition that comes from years of doing the actual work. Artificial Intelligence doesn’t replace that intuition; it requires it. The best AI users I’ve observed aren’t the most technical; they’re the ones who’ve built up enough expertise in their field to quickly assess whether AI output is useful, directionally correct, or completely off base. They know what good looks like, so they can recognise it when they see it, or notice when it’s missing.
3. Articulation
Can you clearly express what you really want? This is the unglamorous core of the whole thing. Some people struggle to articulate their requirements to other humans, let alone to AI. We’ve all sat in meetings where someone spends 20 minutes explaining what they need, and you’re still not sure what they want. AI makes that problem worse. The skill isn’t ‘prompt engineering’ in the technical sense; it’s the much older skill of clear thinking and clear communication. If you can’t articulate what you want specifically, precisely, with the right context and constraints, you won’t get useful output from AI or from anyone else.
The Uncomfortable Implication
Here’s what this means for how businesses should think about AI investment:
Stop leading with tools: Most organisations have tool fatigue already. Another platform, another integration, another training session on which buttons to click. It’s not working.
Start with the human work: Before asking ‘What AI should we use?’, ask ‘Can our people break down problems, assess output, and articulate requirements?’ If they can’t do those things well without AI, they won’t do them well with AI either.
Invest in the skills, not just the access: This doesn’t mean AI prompt engineering courses; it means developing clearer thinking, better problem decomposition, and sharper articulation. These are old skills, applied to new tools.
Accept that expertise still matters: The people who’ll use AI best are the ones who already know their domain deeply. AI amplifies competence; it doesn’t create it.
Connected Intelligence Isn’t About Connected Systems
I’ve spent a lot of time thinking about how different marketing channels and data sources connect and how you build intelligence across systems rather than in silos.
But I’ve come to think the more important connection isn’t between systems, it’s between human judgment and AI capability. The integration layer that matters most is the one between the person and the tool.
Get that wrong, and it doesn’t matter how sophisticated your AI stack is. Get it right, and even basic tools become powerful.
Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams to identify and respond quickly, whilst also meeting regulatory timeframes for handling complaints and supporting vulnerable customers.
Netcall: AI-Powered Sentiment
The specialist bank has worked with Netcall to deploy AI-powered sentiment analysis using Netcall’s Liberty Create platform. The solution reduces manual effort and improves operational efficiency by bringing customer emails from multiple mailboxes into a single interface. Incoming messages are automatically analysed to identify dissatisfaction, highlighting cases that may require faster intervention. This allows urgent cases to be prioritised, helping HTB to resolve issues before they escalate and improve the customer experience.
“Our AI-powered sentiment analysis solution rapidly processes vast amounts of email data. Its efficiency allows our team to focus on resolving customer enquiries and issues rather than sorting priorities. The streamlined process ensures swifter responses and better customer outcomes, upholding our reputation for exceptional customer service,” said Ed Eames, Head of Customer Savings Operations at Hampshire Trust Bank.
The application was built by the Hampshire Trust Bank development team using Liberty Create. The team worked closely with Netcall to integrate AI sentiment analysis into existing processes, and customer-facing teams were involved throughout to ensure the solution aligned with established workflows and regulatory requirements.
Customer Service Control
A key benefit of the approach is the level of control it gives internal teams. Keywords, sentiment thresholds, and classifications can be adjusted directly. This allows rapid refinement as customer behaviour changes or new regulatory considerations emerge, without waiting for development cycles.
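That config-driven pattern can be sketched in a few lines. This is purely illustrative – not the Liberty Create implementation – and the keywords, weights and threshold are invented; the point is that the triage rules live in data, so an operations team can retune them without waiting on a development cycle.

```python
# Illustrative only: a config-driven triage rule. Keywords, weights and
# the escalation threshold are data, adjustable without a code change.
CONFIG = {
    "keywords": {"complaint": 3, "unacceptable": 3, "delay": 2, "unhappy": 2},
    "escalate_threshold": 4,
}

def triage(email_body: str, config: dict = CONFIG) -> str:
    """Score an email against the configured keywords and classify it."""
    text = email_body.lower()
    score = sum(w for kw, w in config["keywords"].items() if kw in text)
    return "escalate" if score >= config["escalate_threshold"] else "standard"

print(triage("I am unhappy about the delay to my transfer."))  # "escalate"
print(triage("Please confirm my new address."))                # "standard"
```

Raising the threshold or adding a new keyword is a configuration edit, which mirrors the control the article describes internal teams having over sentiment thresholds and classifications.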
“Liberty Create has enabled my development team to work with remarkable agility. The ability to rapidly create and refine applications to meet ever-evolving business needs has significantly enhanced our efficiency. This allows us to deliver a wealth of new features to end users and customers with speed. With the integration of AI, we’ve been able to advance our processes while ensuring exceptional customer service. Our Sentiment Analysis application launch is a prime example of this,” said Trina Burnett, Head of Engineering at Hampshire Trust Bank.
The sentiment analysis system also supports automated and ad-hoc reporting, providing a single source of insight into customer interactions and the actions taken. This reduces manual effort, supports audit and compliance activity, and enables teams to continuously improve customer service operations.
“As scrutiny around customer experience and accountability increases across UK financial services, the ability to listen, adapt and respond at pace is becoming a defining capability for banks seeking to maintain trust and service standards,” said Alex Ballingall, Key Account Manager at Netcall.
“HTB’s approach shows how banks can use AI-driven insight practically. Turning customer communications into faster action without adding operational complexity,” Ballingall concluded.
About Netcall
Netcall is a leading provider of low-code and customer engagement solutions. A UK company quoted on the AIM market of the London Stock Exchange. By enabling customer-facing and IT talent to collaborate, Netcall takes the pain out of big change projects. It helps businesses dramatically improve the customer experience, while lowering costs. Over 600 organisations in financial services, insurance, local government and healthcare use the Netcall Liberty platform to make life easier for the people they serve. Netcall aims to help organisations radically improve customer experience through collaborative CX.
Gregory Mostyn, CEO and co-founder of Wexler, on why the era of generalist AI tools is over, and how the future will focus on high-precision AI designed for specific industries
For decades, the UK’s professional services sector, including areas such as Law, Insurance, and Wealth Management, has argued that its business value is locked in its access to proprietary data and the specialised labour required to navigate it. Investors, lured by the moat of institutional knowledge, priced these companies accordingly. However, the first quarter of 2026 has seen significant AI disruption within the professional services market. The catalyst wasn’t a single event, but rather a move by foundational model providers that turned the industry’s most defensible assets into commodities.
When Anthropic launched its specialised legal AI plugin, OpenAI integrated a real-time insurance underwriting engine directly into its interface, and Alturist Corp automated bespoke tax strategies, the market reacted harshly. As professional services titans such as RELX, MoneySuperMarket, and St James’s Place saw their share prices decline by more than 10% in a matter of hours, the message became clear: the era of treating AI as a ‘future risk’ is over.
The market has been awoken to the fact that foundational AI models are no longer just plugins or nice ‘add-on’ tools; they are competitors. The move by foundation-model providers into professional services – like the legal sector – is not a one-off shock, but rather an inevitability.
The Proliferation of Information
Historically, a law firm’s competitive advantage was its access to information – repositories of case law, proprietary research, and historical contracts. Investors and clients valued these companies on the assumption that this data constituted an impenetrable barrier to competitors. Before AI entered the mainstream, the cost of extracting actionable information from thousands of pages of data required a small army of junior associates and hundreds of billable hours.
In 2026, that moat has mostly evaporated. Recent benchmarks show that frontier models now achieve 80% accuracy on complex documents, compared with the 71% average of a human associate. More importantly, they do it at a fraction of the cost. It is now estimated that the inference cost for a system at the level of GPT-3.5 dropped by more than 280-fold between November 2022 and October 2024. It’s predicted that UK law firms will reduce their chargeable hours by 16% through the implementation of AI.
The narrative that AI would be able to handle only ‘low-level’ tasks, such as NDAs or simple contract summaries, has all but evaporated. Anthropic’s move into high-stakes litigation support validates this trend.
AI – From Swiss Army Knives to Scalpels
An error made by many law firms when AI became entrenched within the market was to treat it as a ‘plug-in’, a nice-to-have built onto existing internal software. Many adopted general-purpose tools, often referred to as ‘Swiss Army knife’ solutions, that covered the breadth of legal work but lacked the precision, jurisdictional nuance, and risk-weighted requirements for high-stakes professional services.
The 2026 market reaction highlighted the need for a ‘scalpel’ approach – tools that go deep in a specialised vertical within a legal workflow. For example, instead of a junior associate spending billable hours searching through case files to establish the facts of a case, they could use a ‘fact intelligence’ platform that automates that process into minutes, whilst increasing accuracy to 95%, versus 78% for human reviewers, and delivering up to 90% cost savings in large-scale litigation. The market is no longer rewarding firms for having information. Rather, it rewards those who can apply it at the lowest possible cost and friction.
Reallocating Capital Across Professional Services
We’re already seeing investors withdrawing from the traditional software market and reallocating that capital into specialised AI firms. However, the risk for legacy players is that they are being disrupted from both ends. From the bottom, they are losing the efficiency game to generalist foundation models from companies such as OpenAI and Google, which are commoditising the ‘knowledge’ aspect of professional services, including basic advice and contract drafting. At the top, they are losing the expertise game to specialised firms that use AI as a precision instrument; their overheads are lower than those of a traditional Magic Circle firm, allowing them to undercut on price while maintaining profit margins.
The result is a massive reallocation of capital. Investments into vertical AI (AI built for one specific industry) are expected to surge to $115 billion by 2034. The market no longer bets on labour with tools, but on autonomous workflows. Investors have realised that the value lies in the middle layer – the software that sits between a general foundation model and a specific industry’s needs.
Innovation or Obsolescence
So far, the first market fluctuation of 2026 has taught us that you cannot outrun new technologies. To survive, firms must stop treating AI as an add-on and treat it as a foundation for their core business infrastructure.
For UK professional services, the choice is no longer whether to adopt AI, but whether they can evolve quickly enough to avoid becoming the training data for companies building foundational models. The firms that remain in 2030 will recognise that the competitive landscape has changed. You’re not just competing with your peers, but with the compute cycles of the world’s most powerful AI labs.
The era of generalist AI tools is over, and the future will focus on high-precision AI designed for specific industries.
Jack Bingham, Regional Director of Digital Native UK, Ireland & South Africa at Confluent, on how data, treated properly, compounds in value to drive digital disruption
When I talk to founders and tech leaders, one question consistently comes up: what separates today’s disruptors from the last decade’s? In 2010, being cloud-first was what made investors sit up and take note. In 2026, it will be streaming-first.
I’ve spent the last year or so working closely with companies that are, quite literally, building their businesses in real time. For them, real-time capability isn’t a department or a layer that supports the business. It is the business. The acid test is simple: how quickly can you capture a critical event – a payment, a login, a failed delivery – and respond with the next best action? That focus shapes how they build products, structure teams, and think about innovation.
Here’s what I’ve learned from them:
Lesson 1: Data is a Product, Not a By-Product
Many traditional companies still treat data as something to collect, store, and analyse later. The new generation of businesses, on the other hand, treats it as a reusable, governed product that everyone can access. When it’s built and shared this way, teams stop rebuilding the same foundations for every new use case. They move faster because they’re working from a single, trusted view of the truth, shortening product cycles, speeding up iteration, and spending more time solving problems that matter.
That mindset, rather than the size of the tech stack or the number of engineers, is what sets disruptive businesses apart. In these organisations, technology, data, and business strategy move in lockstep. Decisions aren’t passed up and down hierarchies; they’re made by teams who understand both the data and the customer problem in front of them.
When you can trust your data and respond in real time, innovation stops being a department. It becomes a reflex.
Lesson 2: Real-Time isn’t a Feature, it’s a Foundation
A few years ago, one of the world’s largest supermarket chains realised it didn’t have a single real-time view of its inventory. Without that visibility, omnichannel experiences were impossible. Once it shifted to a streaming architecture, every transaction became a live event that updated stock, triggered supply chains, and even made it possible to get your groceries delivered straight to your kitchen fridge – coordinated through live inventory data, smart home devices, and real-time security feeds.
That’s the practical power of streaming: it connects what happens in your business to what should happen next so you can provide products and services that take customer satisfaction to a whole other level. Real-time data stops being a reporting tool and becomes the foundation of every decision, interaction, and innovation.
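The core pattern is simple enough to sketch in a few lines. Below is a toy, in-memory illustration – not a real streaming platform like Kafka, with no brokers, partitions, or persistence – of folding a stream of transaction events into a live stock view, with each event able to trigger the next best action (the event types, SKUs, and reorder threshold are invented for the example):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    type: str   # "sale" or "delivery" in this toy example
    sku: str
    qty: int

def apply_events(events, reorder_at=5):
    """Fold a stream of events into live stock levels; emit a restock
    action the moment an item drops below the reorder threshold."""
    stock = defaultdict(int)
    actions = []
    for e in events:
        stock[e.sku] += e.qty if e.type == "delivery" else -e.qty
        if stock[e.sku] < reorder_at:
            actions.append(f"reorder {e.sku}")
    return dict(stock), actions
```

The point of the sketch is the shape of the logic: state is derived from the event stream, not the other way round, so every downstream decision can react the moment an event arrives.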
I often ask businesses what they would do differently if they knew the state of every event in their organisation. The most forward-thinking companies already have the answer. They’re using streaming to turn business events into reusable building blocks, creating new experiences by connecting the data they already have in smarter ways.
Lesson 3: Culture is the Multiplier
Being streaming-first is only half about architecture. The other half is attitude. The best digital enterprises don’t wait for permission to experiment. They map their most important business events, align teams around them, and empower people at every level to react fast and learn faster.
And the difference is visible. Feedback loops are shorter. Structures are flatter. Failure is treated as information. This culture of continuous experimentation is why these companies can move at the pace they do.
We often run ‘Event Storming’ workshops with teams to map their critical business events. The idea is to create alignment – getting people from engineering, product, and operations to agree on what really matters and how those moments connect. That process reveals a lot.
Digital disruptors go beyond simply deploying streaming architectures. They build streaming mindsets. Leadership plays a crucial role here: data must be treated as a strategic asset. If it isn’t up top, it won’t be anywhere else in the organisation either.
Lesson 4: Streaming and AI will Converge
AI is only as good as the data you feed it. Unfortunately, most enterprises are still feeding it yesterday’s data. Streaming-first companies already know this. They’re building intelligent data pipelines that give AI the context it needs to make decisions in real time.
That’s how the next generation of innovators will pull ahead: not by having bigger models, but by having cleaner, faster, more connected data. Streaming is what will let AI move from reactive to predictive… and from predictive to autonomous.
Too many organisations are cutting investment in data while pouring money into AI projects. But AI without quality data is just expensive guesswork. The companies doing this well understand that data has to be a product in its own right. And when business and technology teams design around that shared understanding, innovation follows naturally.
Lesson 5: The Mindset of the Next Disruptors
If I were starting a company tomorrow, I’d look closely at the critical events that run my business. I’d then make sure I had a way to capture those in the stream, make them reusable, and build every product and process around them.
When your business can see and act on what’s happening in the moment, you gain something no traditional architecture can give you: time. And in the next wave of disruption, that’s the only advantage that really matters.
If we look to who we can learn from in the coming months, it’s financial services and healthcare that are moving the fastest. Real-time fraud detection, patient monitoring, and risk management are becoming operational necessities – and these industries will set the benchmark for real-time data excellence.
Looking Ahead to 2026
By 2026, I don’t think we’ll talk about ‘real-time’ as a differentiator. It will simply be how modern businesses operate. Batch systems won’t disappear, but they’ll coexist within a single, streaming-first platform that delivers data whenever it’s needed.
Once every process can react instantly, the question then becomes: can it anticipate? Can it learn? That’s where AI and streaming meet and where we move from reactive to autonomous enterprises that not only respond to the present but adapt to what’s coming next.
Data, treated properly, compounds in value. The decisions you make with it become faster, sharper, and more confident. The companies that understand this will be the ones still leading when today’s titans look like yesterday’s news.
JP Cavanna, Director of Cybersecurity at Six Degrees, on balancing the risks and benefits of AI in cyber defence strategies
Undeniably, AI is here to stay. Having become part of day-to-day life, it’s hard to remember what life was like without it. But when it comes to cybersecurity, is it causing more harm than good?
Recent research outlines that 73% of organisations have already integrated AI into their security posture. The technology is clearly becoming a cornerstone of modern cybersecurity. Organisations are turning to AI not just as a tool, but as a partner in security operations, leveraging its capabilities to identify malicious activity faster, guide investigations, and automate repetitive tasks.
For it to be truly effective, though, AI must be paired with human expertise – but this is where organisations are starting to become complacent. Given the growing sophistication of cyber-attacks, including AI-powered attacks, many are removing the human element while expecting AI tools to do all the work for them, leaving them even more vulnerable to threats. This overreliance risks creating blind spots, where critical thinking, contextual understanding, and instinct are overlooked. Without the balance of human judgement, AI can amplify mistakes at scale, turning efficiency into exposure.
The Cybersecurity Paradox
This situation puts many organisations in a potentially difficult position. On the one hand, AI can significantly improve the efficiency of security operations. In the typical SOC, for example, AI technologies can process alerts in around 10-15 minutes. This represents a significant improvement over human analysts, who can easily require twice as long for the same task.
Aside from the obvious efficiency gains, applying AI to these repetitive, time-pressured processes can also significantly reduce the scope for human error and, in turn, take considerable pressure off security analysts, going some way towards battling alert fatigue – an increasingly well-documented and persistent problem. In these circumstances, valuable human experience and specialist expertise can instead be more effectively applied to complex investigations, strategic decision-making, and other higher-value priorities.
On the flipside, however, AI remains prone to generating inaccurate or misleading insights, and users may not realise they are applying the wrong information to potentially serious security issues. Similarly, habitual blind trust in AI outputs can easily erode performance levels and even introduce new vulnerabilities. There is also scope for sensitive data to enter public environments, with the potential to cause compliance issues. This kind of information can also reappear in future versions of the AI model in question, therefore resulting in further data exposure risks.
Parallels with IoT Adoption
The situation mirrors that seen in the early days of IoT adoption, where the rush to innovate would often override security considerations. In this current context, therefore, human oversight and vigilance are extremely important. Clear governance frameworks, defined accountability, and continuous monitoring must underpin any AI deployment, ensuring that innovation does not outpace risk management or compromise long-term resilience.
A Growing Arms Race
If that wasn’t challenging enough, threat actors are also in on the AI boom in what has already been described as an ‘arms race’. In practical terms, AI tools are already widely used to create more convincing phishing attacks free from some of the more obvious traditional tell-tale signs of criminal intent, such as imperfect grammar or a suspicious tone.
Deepfake technology has also raised the stakes. We’ve all seen how convincing AI-generated video has already become. This is now finding its way into real-world examples, with one fake video reportedly causing a CFO to authorise a large financial transfer.
At the same time, technology infrastructure is constantly under attack by AI-powered tools. They can be used to analyse defensive systems and identify weaknesses faster than humans. The net result of these developments is that defenders constantly play catch-up, as they can only respond to new attack vectors once discovered. The underlying takeaway is that at present, AI cannot be trusted to operate autonomously. Instead, human intuition, scepticism and contextual understanding remain essential to spotting emerging tactics.
As attackers refine their methods at machine speed, organisations need to resist the temptation to match automation with automation alone. They must double down on strategic thinking and continuous skills development.
Balancing Benefits and Risk
So, where does this leave security leaders who are looking to balance the benefits and risks? Firstly, and to underline a fundamental point, while AI offers scale and speed, it cannot replace critical human oversight. Organisations should view AI as an enhancer, not a replacer. Success lies in promoting partnership, not substitution.
Strong governance is vital. This should start with clear AI usage policies that define what can and cannot be shared with AI tools, while proper data classification and access control ensure that sensitive information is protected. In addition, regular validation of AI outputs can help to prevent inaccurate or misleading results from being unnecessarily acted upon.
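To make the ‘what can and cannot be shared’ point concrete, a pre-submission screen can start as simple pattern matching before any text leaves for an external AI tool. The two patterns below are hypothetical and deliberately minimal – a real policy engine would rest on proper data classification, not a pair of regexes:

```python
import re

# Hypothetical patterns for illustration only; a production deployment
# would use a full data-classification service, not two regexes.
PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the classes of sensitive data detected in text that is
    about to be sent to an external AI tool."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

Even a crude gate like this turns the usage policy from a document people are asked to remember into a control that runs on every request.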
Then there are the perennial challenges associated with employee awareness training, which is vital for avoiding complacency and understanding the limitations of generative AI tools. Cyber leaders should also monitor how AI is being used inside and outside the corporate environment, as staff often experiment with tools on personal devices.
Get this all right, and security teams can put themselves in a very strong position to embrace AI, safe in the knowledge that they have the guardrails and processes in place to balance innovation and efficiency with effective human-led oversight. Ultimately, success will depend not on how much AI is deployed, but on how intelligently it is governed and refined alongside the people responsible for securing an organisation.
Dan Nichols, Chief Technology Officer at virtualDCS, on why cloud resilience in the financial services sector hinges on shared accountability and an assume-breach philosophy
A powerful catalyst for transformation, the cloud is reshaping how organisations compete in the financial services sector. Beyond significant cost savings and flexibility, leaders are eager to unlock the potential of AI-driven insights, intelligent automation, and real-time business modelling. And, in a space governed so strictly by data sovereignty and privacy policies, the cloud’s ability to localise, encrypt, and control data has made it a key enabler of compliance and customer confidence.
But as threats become more frequent and sophisticated – with attackers now targeting shared platforms and partner supply chains – organisations can no longer rely on their own defences alone. For true digital resilience, shared accountability, collective readiness, and clear governance across every cloud touchpoint are equally non-negotiable.
All Eyes on the Money
The industry sits at a valuable intersection of data, technology, and finance – a combination that makes it uniquely attractive to attackers. It holds some of the world’s most sensitive data, directly underpins the flow of global capital, and operates through deeply complex and interconnected systems, with every integration increasing the risk of exposure. Ultimately, the attack motivation is as simple and relentless as it is in most sectors: monetary gain. Cybercriminals target institutions precisely because of the value at stake and the speed at which disruption translates to loss.
How the Threat Landscape is Evolving
Ransomware groups may see insurers and payment providers as high-yield targets: they understand that even seconds of downtime can induce multi-million pound losses. Under pressure to protect customer trust and avoid regulatory penalties, some firms choose to pay in order to restore their service quickly. Paying only encourages repeat targeting and paves the way for damage to spread even further, yet it remains a common response among many firms.
At the same time, the rise of supply chain and third-party attacks has made it possible for criminals to bypass even the most well-defended cloud environments. By exploiting shared platforms, managed service providers, and cloud-hosted applications, perpetrators can move laterally across multiple organisations at once, amplifying both the reach and impact of their attacks. In other words, infiltrating one vendor’s weakness can cripple an entire network in one carefully coordinated strike. And, since some firms may overlook the cloud’s shared responsibility model – presuming end-to-end security sits solely with their cloud provider – multiple blind spots can inevitably emerge, creating easy openings to exploit.
In an environment where boundaries blur and dependencies multiply, traditional perimeter-based defences are no longer enough. Hybrid and multi-cloud infrastructures demand continuous visibility, faster detection, and coordinated response across every partner and provider. The goal is not simply to prevent breaches, but to withstand and recover from them collectively. It’s about recognising that in today’s ecosystem, no financial institution is secure in isolation.
Inside the Ransomware Economy
Evolving beyond the scattergun attacks of the past, ransomware now operates as a professionalised, profit-driven ecosystem, where malicious actors collaborate, trade intelligence, and lease attack tools much like legitimate software vendors. The rise of ransomware-as-a-service (RaaS) has even lowered the barrier to entry, giving less skilled affiliates access to ready-made payloads and automated encryption kits in exchange for a percentage of the ransom.
What makes it especially destructive is the precision and psychology behind the attacks. Rather than randomly striking, attackers conduct weeks of reconnaissance – learning behaviours, studying employee hierarchies, and identifying systems most critical to operations. They often infiltrate through phishing emails or compromised credentials, quietly moving laterally through the network to gain elevated access. Once embedded, they disable defences, exfiltrate sensitive data, and target backup repositories before finally encrypting production systems.
At that point, the goal shifts from technical control to financial coercion. Victims are locked out of their systems and presented with a ransom note demanding payment, sometimes in cryptocurrency, in exchange for a decryption key. Increasingly, the threat includes public exposure of stolen data – a tactic designed to pressure leadership into paying to protect their reputation and customer trust. Even when ransoms are paid, recovery is rarely clean: data may be incomplete, corrupted, or resold on the dark web, and repeat targeting is common once an organisation is identified as a payer.
It’s this blend of stealth, strategy, and human manipulation that makes ransomware so difficult to defend against. By the time the encryption begins, attackers have already spent weeks ensuring recovery options are limited. This background isn’t designed to scaremonger, but to highlight why resilience must start long before an attack ever reaches the endpoint.
The Foundations of Ransomware Resilience
Ransomware resilience isn’t achieved through a single product or policy – it’s the outcome of strategic, technical, and cultural alignment. Financial institutions, in particular, must approach it as a continuous process of readiness: anticipating compromise, containing impact, and restoring normality quickly and transparently.
Assume-Breach Philosophy
The first step is shifting from a defensive mindset to an assume-breach philosophy. In practice, this means recognising that even the most sophisticated systems can and will be breached – and building architectures and response strategies designed to limit damage when this happens. It’s a pragmatic approach, grounded in the reality that attackers are increasingly sector agnostic. No organisation is too small or too secure to be targeted, but the financial sector remains a favourite because it offers both high disruption value and potentially significant monetary reward.
Building meaningful resilience, therefore, demands layered defence and disciplined execution. The goal is to slow attackers down at every stage – detecting them early, limiting lateral movement, and ensuring business continuity when systems are disrupted. Behavioural analytics and continuous monitoring can surface and neutralise subtle anomalies that would otherwise go unnoticed – such as phishing, spear phishing, and malware, with email still the number one entry point for ransomware.
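The principle behind behavioural analytics – flag what deviates sharply from an established baseline – can be illustrated with a toy z-score check over, say, daily login counts per account. Production systems use far richer behavioural models, but the shape of the idea is the same (the sample data and threshold below are invented for the example):

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return the indices of points that sit more than `threshold`
    standard deviations from the mean of the series."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]
```

A real deployment would baseline per user and per behaviour (login times, data volumes, access patterns) rather than a single series, but each check reduces to the same question: is this event normal for this entity?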
Zero Trust & MFA
Meanwhile, zero trust policies and multi-factor authentication methods add a second layer of protection, blocking unauthorised access even if credentials are compromised.
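The most common second factor – time-based one-time passwords – is standardised in RFC 6238 (building on RFC 4226) and can be sketched with nothing beyond the standard library. This is an illustrative implementation, not production authentication code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP keyed by the number of 30-second steps since the epoch."""
    t = int(time.time() if timestamp is None else timestamp)
    return hotp(secret, t // step, digits)
```

A real verifier would also accept codes from the adjacent time steps to tolerate clock skew, and rate-limit attempts; the value of the factor lies in the shared secret never crossing the wire.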
When incidents do occur, a well-practised response framework ensures action is fast and coordinated, minimising disruption across critical systems, with the ability to switch to secure replica environments to keep operations running while remediation takes place. Secure, immutable, air-gapped backups underpin it all, providing a safety net that guarantees recovery can begin from a clean and uncompromised state.
Human readiness is equally critical. Technology can contain an attack, but only people can recover from one effectively. Regular simulation exercises, incident rehearsals, and cybersecurity awareness training help teams respond calmly and cohesively, transforming response from reactive to instinctive. This operational maturity is reinforced by strong governance. Frameworks such as DORA, NIST, and ISO 27001 provide the structure to align technical teams, compliance leads, and executive decision-makers around shared resilience goals. When combined with skilled practitioners and clear accountability, they embed security into ‘business as usual’ – moving resilience from a strategy to a sustained organisational capability.
Why Multi-Layered Backup is Critical
When ransomware strikes, the speed and integrity of data recovery determine whether disruption lasts minutes or days – and whether the impact cascades through wider global markets. As the last and most decisive line of defence when every other control fails, it’s also fundamental to customer trust and compliance. Yet too often, backup is treated as a static safeguard rather than a dynamic resilience layer.
Since modern ransomware often seeks out and encrypts traditional backups first, a single backup copy or centralised repository is no longer sufficient. True resilience today depends on a multi-layered approach – combining offsite or cloud-diverse storage, immutable data copies that cannot be altered or deleted, and isolated environments to protect against lateral movement.
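Verifying that replicas actually match the primary is the kind of check that is cheap to automate. A minimal sketch, with local file paths standing in for what would really be offsite or cloud-diverse copies:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_replicas(primary: Path, replicas: list[Path]) -> dict[str, bool]:
    """Check each replica's digest against the primary copy."""
    expected = sha256_of(primary)
    return {str(r): r.exists() and sha256_of(r) == expected
            for r in replicas}
```

Immutability and air-gapping are properties of the storage layer, not of a script like this; the sketch only shows why integrity checks belong in the same automated loop as the backups themselves.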
How frequently these backups are tested is equally important. Too often, financial institutions only discover weaknesses when recovery is already underway, at which point strategies can’t be magically strengthened, and it becomes a race against the clock to minimise downtime and reputational fallout. Regular, automated recovery testing changes that dynamic. It not only confirms that files can be restored, but provides verifiable assurance that systems come back online in the correct order, data dependencies remain intact, and teams have the muscle memory to act quickly and confidently when the worst happens.
The Power of Shared Accountability
In a digital economy so deeply interconnected, no organisation operates in isolation. This is especially true in financial services, where supply chains and service providers form the backbone of day-to-day operations. While this interdependence is a strength in many ways, it also means resilience is no longer defined by how well a single institution can defend itself, but by how effectively every partner in its ecosystem upholds their part of the security chain.
This is where shared accountability becomes critical. It recognises that cloud providers, managed service partners, and financial institutions each have distinct but complementary roles to play in securing data, systems, and infrastructure. When accountability is clearly defined – and when partners collaborate rather than operate in silos – visibility improves, incident response accelerates, and the risk of systemic failure decreases.
Shared accountability also extends beyond contractual obligation. It’s about building a culture of collective readiness: sharing intelligence, rehearsing joint incident scenarios, and supporting smaller or less-resourced partners to raise their security baseline. The result is a unified entity capable of anticipating, absorbing, and recovering from disruption together.
Looking Ahead
To view cyberattacks as inevitable might seem pessimistic to some, but it’s an unfortunate truth that no amount of investment can eliminate risk entirely. In an era where threats are growing in both scale and sophistication, readiness becomes the true differentiator – particularly in such a high-stakes sector. For financial institutions, that means embedding security into culture, strengthening connections across supply chains, and continually testing their ability to withstand and recover as a united ecosystem. Only then can resilience become a strategic advantage rather than a defensive necessity, and unlock the cloud’s transformative potential with absolute confidence.
Ben Goldin, Founder and CEO of Plumery, explores the key banking trends for 2026 – from fraud and digital assets to stablecoins and AI applications
As we head into the second half of the decade, several emerging trends will come to the fore in 2026. The interconnectedness among these trends is also noteworthy. Artificial intelligence (AI) and progressive modernisation act as common threads.
A strong current throughout 2026 is the shift from customer-first banking to human-first banking. This relates to the concept of ethical banking. It focuses on creating financial services that have a positive social and environmental impact.
Human-first banking aims to get even closer to the customer by understanding their actual human needs, rather than just consumer needs. For example, a bank should act as a coach to improve a customer’s financial health, not solely as an advisor on which products they should buy. Banks can build trust in a digital world through tailored and empathetic interactions, effectively simulating the experience customers formerly had with their personal banker.
To attain that level of hyper-personalisation, banks will need to be capable of processing vast amounts of transactional data, which can only be accomplished by deploying AI and big data tools. This requirement, in turn, will turbocharge progressive modernisation, another trend that has been bubbling under the surface for the past few years.
Traditional banks are using progressive modernisation to deal with legacy infrastructure that is not fit for purpose in a digital-first, AI-driven world. Instead of a big bang replacement of core banking systems, which is risky and can take years, banks are creating change from within existing architecture, leveraging technologies that support a multi-core strategy. With this approach, banks can add new cores for specific products that require greater agility and innovation. Modern cores are necessary for deploying the latest AI and big data tools because they provide the unified, real-time data foundation needed to deliver hyper-personalisation.
Fraud Threats
Fraud will remain a top concern throughout 2026. Adversaries use AI to expand the range of techniques, such as impersonation scams and identity theft, as well as accelerate and scale fraudulent activity.
According to the UK Finance Half Year Fraud Report 2025, £629.3 million was stolen by criminals in the first six months of this year, and there were 2.09 million confirmed cases across both authorised and unauthorised fraud. Card-not-present cases rose 22% to 1.65 million and accounted for 58% of all unauthorised fraud losses.
However, the good news is that there was a 21% increase in prevented card fraud in the first half of 2025. The £682 million which was stopped from being stolen is the highest-ever figure reported.
To combat fraud, new and improved tools to help banks identify, verify and onboard customers will come to market in 2026. The move away from paper-based identity (ID) and widespread adoption of digital ID will play a key role in the fight against fraud. Hence the UK government’s recently announced plans to roll out a new digital ID scheme.
In addition, I expect to see a fundamental shift in fraud detection using real-time behavioural analytics, data analytics for proactive risk identification, and other applications of AI and machine learning in this space.
Digital Assets and Stablecoins
Digital ID verification is also essential for fighting fraud in the digital assets and stablecoins space – another hot topic at several banking and payments industry conferences last year.
In 2026, digital assets and stablecoins will become much more mainstream. Banks have left the sidelines and are now actively engaged with running pilots. For example, in September a consortium of nine European banks, including CaixaBank, ING and UniCredit, announced an initiative to launch a euro-denominated stablecoin.
Central banks and regulators are developing a comprehensive agenda for digital assets. Banks will need to blend traditional fiat currencies and assets with their digital counterparts. This trend is also driving a progressive modernisation approach, as legacy core banking systems weren’t designed to manage digital assets, nor do they support moving money via blockchain-based rails. I expect to see more banks deploy a multi-core strategy in which digital assets are managed and stored elsewhere, while customers still receive a seamless and unified experience.
AI
Last year, I predicted that the industry would adopt a ‘meet-in-the-middle’ approach to AI, with banks beginning to uncover the real value that the technology can deliver. I also predicted consolidation, recalibration and stabilisation in the market.
GenAI Banking Applications
My predictions held true, by and large. In 2025, institutions explored what is possible, relevant and achievable within the banking context, and then, more specifically, what each individual institution can achieve within its legacy architecture and technological environment.
This trend will evolve into more practical actions and initiatives over the next 12 months to provide greater clarity around where GenAI shines versus where it’s not applicable.
To gain clarity, it’s important to understand the difference between AI and GenAI. The latter is built on stochastic principles, using probability to model systems that appear to vary randomly. This means the same input can generate different outputs – which isn’t acceptable for automated financial operations, where far more determinism is required. Hence, I believe that GenAI will be used chiefly in scenarios where there’s human intervention.
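The stochastic point can be made concrete with a toy next-token sampler. At temperature zero the same input always yields the same output; above zero, identical inputs can diverge. The `sample_next` helper and the token probabilities below are purely illustrative, not how any real model is implemented:

```python
import random

def sample_next(probs: dict[str, float], temperature: float,
                rng: random.Random) -> str:
    """Pick the next token: pure argmax at temperature 0 (deterministic),
    temperature-weighted roulette sampling otherwise (stochastic)."""
    if temperature == 0:
        return max(probs, key=probs.get)
    weights = {t: p ** (1 / temperature) for t, p in probs.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point residue
```

Run it with a toy distribution such as `{"approve": 0.6, "refer": 0.3, "decline": 0.1}`: at temperature 0 the answer is always the same, while at temperature 1 different random seeds produce different answers – precisely the behaviour that makes unsupervised GenAI a poor fit for automated financial decisions.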
One area where GenAI is applicable is conversational applications. For example, banks will begin launching more interactive user interfaces, and customers will be able to interact with the bank as they would with a human – moving beyond simple frequently asked questions to actual actions.
GenAI in the Back Office
Similarly in the back office, banks can leverage GenAI to provide guidance to their employees and accelerate certain tasks. Using the technology to improve efficiency and help staff do more will have a positive impact on customer experience. Processes will take much less time.
It will also help to bring unbanked segments and non-standard customers – who are difficult and costly to onboard because they require a bespoke assessment – into regulated financial services. Applying GenAI can make that bespoke process much more efficient by providing data-driven insights to support faster and smarter decision-making. This will make it much cheaper to serve these segments, including small and medium-sized enterprises, which will drive financial inclusion and improve customers’ financial health.
Jan Van Hoecke, VP AI Services at iManage and a highly experienced computer scientist with a passion for technology and problem-solving, on navigating the AI landscape for success in 2026
The AI landscape faces a number of big shifts in 2026. Agentic AI will undergo a reality check as enterprises discover the gap between marketing hype and actual capabilities, while organisations will go through a mindset change from treating AI hallucinations as crises to managing them, acknowledging the inherent limitations of the technology. There will also be a shift in how data will be structured in AI systems, to help the move from just finding facts (“what”) to understanding reasons (“why”). Middleware application providers will face new challenges, as those vendors controlling both platforms and data will become more influential. Finally, standardised AI chat interfaces will evolve into smarter, dynamically generated, task-specific user experiences that adapt to immediate needs.
Agentic AI Reality Check
2026 is the year when agentic AI will get a reality check, as the gap between the marketing promises made in 2025 and actual capabilities becomes starkly visible. As enterprise adopters share the mixed successes of agentic AI, the market will begin to differentiate between true autonomous agents and clever workflow wrappers.
Currently, many products promoted as AI agents are, in reality, rigidly programmed systems that simply follow predefined paths. They cannot independently plan or adapt in real-time to accomplish tasks. The current evolution of AI agents closely resembles the development of autonomous vehicles: early self-driving cars could only maintain lane position by relying strictly on preset instructions, and likewise, today’s AI agents are limited to executing narrowly defined tasks within established workflows. True autonomy, where AI agents can dynamically perform and solve complex problems better than humans and without human intervention, remains, for now, an aspirational goal.
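The distinction drawn above can be made concrete with a minimal sketch. Everything here is hypothetical and illustrative: a "workflow wrapper" runs the same predefined steps regardless of input, whereas even a minimal agent loop chooses its next action from the current state, so its path can branch or stop early.

```python
def workflow(document: str) -> list[str]:
    # Rigid wrapper: the same predefined path, whatever the input.
    return ["extract", "classify", "file"]

def agent(document: str, max_steps: int = 10) -> list[str]:
    # Minimal agent loop: each action is chosen from the current state,
    # so different inputs can produce different paths.
    state = {"done": False, "needs_review": "risk" in document}
    actions: list[str] = []
    while not state["done"] and len(actions) < max_steps:
        if state["needs_review"]:
            actions.append("escalate_to_human")
            state["needs_review"] = False
        else:
            actions.append("file")
            state["done"] = True
    return actions
```

Real agentic systems replace the hand-written branch with model-driven planning, but the structural difference – fixed path versus state-dependent choice – is the one the market will start testing for.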
AI Hallucination Goes from Crisis to Management
In 2026, the AI hallucination crisis will reach a critical juncture as organisations realise they must learn to coexist with the current fundamentally imperfect technology – until a new technology comes into play that can effectively address the issue. The focus will shift from AI hallucination ‘crisis’ to management.
As the industry deliberates who carries the liability for AI’s mistakes and inaccuracies – the tool makers or the users – enterprises will stop waiting for vendors to solve the problem and take matters into their own hands. They will adopt a variety of pragmatic risk mitigation strategies – from double and triple-checking work, and enforcing human oversight for high-stakes decisions, to taking hallucination insurance policies.
Major model builders acknowledge that current foundational LLM technology cannot eliminate hallucinations and ambiguity through incremental improvements alone. New technology is needed. Until then, and perhaps with the realisation that a technological breakthrough is years away, users will start driving the hallucination conversation – both by building systematic defences into how they use AI, and by forcing vendors to accept shared responsibility through better documentation and clearer model limitations.
The Next Evolution in AI Data Architecture Lies in a Shift from “What” to “Why”
There will be a fundamental shift in how data is structured for AI systems, driven by the limitations of current approaches in answering complex questions. While Retrieval Augmented Generation (RAG) has proven effective at locating information and answering “what” questions, it struggles with the deeper “why” and “how” inquiries.
This limitation stems from RAG’s flat-file architecture, which excels at locating information but fails to capture the complex interconnections and relationships that underpin meaningful understanding and knowledge, especially in specialised domains like legal and professional services information.
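The flat-file-versus-relationships point can be sketched in miniature. The document keys, edge labels and text below are hypothetical: flat retrieval answers a "what" lookup from a standalone chunk, while answering "why" requires explicit edges between items, which is what graph-style structuring adds.

```python
# Flat retrieval: each chunk stands alone -- good for "what" lookups.
chunks = {
    "clause_12": "Clause 12 caps liability at £1m.",
    "amendment_3": "Amendment 3 revises Clause 12 after the 2021 dispute.",
}

def flat_lookup(key: str) -> str:
    # Finds the fact, but carries no links to related material.
    return chunks[key]

# Graph structuring: explicit edges capture the relationships that
# "why" and "how" questions depend on (hypothetical schema).
edges = {
    "amendment_3": [("revises", "clause_12"), ("motivated_by", "dispute_2021")],
    "clause_12": [],
    "dispute_2021": [],
}

def explain(node: str) -> list[str]:
    """Walk outgoing edges to assemble the context behind a node."""
    return [f"{node} --{rel}--> {target}" for rel, target in edges[node]]
```

A flat index can return the amendment's text, but only the edge structure reveals that it revises Clause 12 and why it exists, which is the kind of interconnection the article argues AI-driven structuring will surface at scale.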
The solution lies in AI-driven autonomous structuring of data. These systems will be better placed (than humans) to reveal critical relationships across multiple data points at scale, also highlighting the contextual dependencies essential for answering the “why” and “how” questions effectively.
Consequently, in 2026, with machines taking the lead, the method of structuring data will undergo a complete transformation, gradually eliminating the human role in creating structure, to reveal the business-critical interconnections across multiple data points.
Middleware AI Apps Squeeze
Given the essential link between data and AI, middleware companies that specialise in building custom applications layered on top of data platforms will begin to get pushed to the margins, forced to compete on niche features – while the core value of data and insight is captured by the platform owners. The true leaders will be those organisations that both own and manage their data, while also offering an AI-powered interface that enables users to interact with their data securely and efficiently, fully leveraging the capabilities of modern AI technology.
Shift to AI-generated, Task-Oriented User Interfaces
In 2026, the current traditional vendor-designed, standard AI chat-based user interfaces will transition to dynamically AI-generated task-specific user interfaces that adapt to users’ immediate needs. This represents a fundamental shift from standardised software – for example, where everyone uses identical Microsoft Word or SharePoint interfaces – to personalised, short-term user interfaces that exist only as long as the user requires them for a specific task.
This transformation will also address a critical user pain point: the crushing cognitive load of navigating bloated, feature-rich software. Instead of searching through endless menus in an overstuffed application like Excel, the user will simply state their goal – “Compare the Q3 and Q4 sales figures for our top 5 products and show me a chart” – and the AI will instantly generate a temporary, purpose-built interface – a “micro-app” – designed solely for that single task.
In the context of dynamically generated user interfaces, both data storage and the creation of bespoke interfaces will be managed by AI. The AI organisations that will truly lead in providing such bespoke user interface-generating capability are those that possess and control their own data.
About iManage
iManage is dedicated to Making Knowledge Work™. Our cloud-native platform is at the centre of the knowledge economy, enabling every organisation to work more productively, collaboratively, and securely. Built on more than 20 years of industry experience, iManage helps leading organisations manage documents and emails more efficiently, protect vital information assets, and leverage knowledge to drive better business outcomes. As your strategic business partner, we employ our award-winning AI-enabled technology, an extensive partner ecosystem, and a customer-centric approach to provide support and guidance you can trust to make knowledge work for you. iManage is relied on by more than one million professionals at 4,000 organisations around the world.
Driving Business Transformation Through Cloud & AI
Microsoft’s Shruti Harish, Head of Solution Engineering for Cloud and AI Platforms across the tech giant’s Manufacturing and Mobility vertical, talks to Interface about how to achieve successful AI implementations augmented by Cloud. Our future-focused fireside chat covered everything from driving value through cloud modernisation to responsible AI.
“Leaders should align AI initiatives with clear business outcomes and foster a culture that embraces change. The focus is shifting toward AI-operated, human-led models where intelligent agents handle tasks and humans guide strategy.”
Virgin Media O2: Democratising Data as a Cultural Movement
Mauro Flores, EVP for Data Democratisation at Virgin Media O2, talks to Interface about the leading telco’s data journey and how it is supporting colleagues to innovate faster, make smarter decisions and deliver brilliant customer experiences.
“Data-driven insights are essential. They’re helping power our decisions like optimising our network performance, anticipating outages before they happen, identifying and preventing fraud, personalising offers and pricing to build customer loyalty, and forecasting demand so we invest in the right things.”
CIBC Caribbean: Shaping the future of Banking in the Caribbean
Deputy CIO Trevor Wood explains how CIBC Caribbean is blending technology, culture, and customer-centricity to deliver seamless digital experiences across the region with a ‘Future Faster’ strategy.
“We want to lead in every market we operate, build maturity across our practices and be architects of a smarter financial future for all.”
And read on for deep AI insights from ANS’s CIO on why AI isn’t just for big business, Emergn’s CTO on how your business can get AI-ready and Kore.ai’s Chief Strategy Officer on taming AI-sprawl with governance-first platforms.
We also hear from Celonis, Snowflake, ServiceNow, Make and Zoom with their tech predictions for 2026, and chart the key dates for your diary with networking opportunities at the latest tech events and conferences across the globe.
Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2025, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation
Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing to invest significantly. According to Gartner, generative AI has driven a significant increase in infrastructure spending by organisations over recent months, prompting it to add approximately $63 billion to its January 2024 IT spending forecast.
Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly.
Game-Changing Innovation
Most SMBs don’t have the same capacity for spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume AI to be an elite tool reserved for the major players.
To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer.
Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:
Enhancing customer experience
Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand.
Powering day-to-day procedures
One of the most common and inclusive uses of AI across organisations is for assisting and automating everyday tasks including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also saves the need for these organisations to outsource these capabilities where they might not have them otherwise.
Minimising waste
AI is also helping businesses to drive profit, minimise wasted resources and identify potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions or heightened demand for a particular product. More impressively, it is also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions that cost valuable time, money and resources.
According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting-edge competitors, AI tools are quickly becoming a must-have for their inventory.
For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial.
Laying down the foundations
Adopting AI isn’t as straightforward as ‘plug and play’ and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out.
Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts.
SMB leaders looking to implement AI first need to ask the following:
What can AI do for me?
Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout.
Can my business manage its data?
AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way.
What about regulation?
The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible.
Embracing Innovation
This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind.
The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget.
By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape.
About ANS
ANS is a digital transformation provider and Microsoft’s UK Services Partner of the Year 2025. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations. With a strong commitment to community, diversity, and inclusion, ANS aims to empower local talent and contribute to the growth of the Northwest tech ecosystem. Understanding customers’ needs is at the heart of ANS’s approach, setting them apart from any other company in the industry.
The ANS Academy is rated outstanding by Ofsted and offers in-house apprenticeships across a range of technology disciplines. ANS has supported more than 250 apprentices to gain qualifications in the last decade via apprenticeships across technology, commercial, finance, business administration and marketing.
ANS owns and operates five IL3‐accredited data centres in Manchester and has an ecosystem of tech partners including Microsoft (Gold Partner), AWS, VMWare, Citrix, HPE, Dell, Commvault and Cisco. It is one of the very few organisations to have received all six of Microsoft’s Solutions Partner Designations.
The Financial Transformation Summit (FTS), presented by MoneyNext, took place June 18-19 2025 at London’s ExCeL Centre, Royal Victoria Dock. With over 2,000 attendees, 300+ speakers, and 400 roundtables, it stood out as one of the most immersive and interactive events in the financial services calendar.
FinTech Strategy hit the conference floor at the heart of the action, delivering insights from experts across Banking, Insurance, Wealth and Lending at the Financial Transformation Summit (FTS).
Financial Transformation Summit attendees from banking, insurance, wealth, lending, fintech, consultancy, and regulatory sectors convened for two days packed with keynotes, panel talks, immersive demos, and networking among 60+ exhibitors and startups.
Co-located streams – Banking, Insurance, Wealth and Lending, each part of themed zones – meant that ticket-holders could explore adjacent sectors fluidly under a guiding theme: culture, collaboration and customer centricity driving tech adoption and transformation.
Programme Highlights
Keynotes & Panels
1. Data Silos & Cross‑Institutional Collaboration
A panel featuring senior leaders from EVLO, Aon, Schroders, and Brit Insurance tackled how institutions – despite collectively spending over $33 billion annually on data – still struggle to collaborate due to privacy concerns and regulation. Innovative solutions included federated learning, anonymised client IDs and consent-backed APIs.
2. Digital Insurance via Wallets
Anna Bojic (Miss Moneypenny Technologies) unveiled a fresh take on insurance – embedding policy and claim data into Apple/Google Wallets. The idea: dynamic customer interaction directly from smartphone wallets, enhancing real‑time engagement and retention.
3. ESG Economics & Market Reality
Marc Kahn (Investec) challenged ESG orthodoxy, urging firms to emphasise human and planetary wellbeing – beyond purely financial returns – to capture stakeholder trust and sustainable growth.
4. People & Psychological Safety
Kirsty Watson (Aberdeen Group) and Vikki Allgood (Fidelity International) underlined that technological investments are futile without organisational design and psychological safety. Allgood cited a McKinsey study revealing only 26% of leaders build teams with a sense of safety – a critical step toward innovation.
5. Human‑Centred AI
Monica Kalia (Planda AI) championed AI that models individual financial contexts – recognising diversity within demographic cohorts and personalising services accordingly.
Roundtable Experiences at FTS
At the event’s heart were the TableTalk roundtables – 400+ small-group sessions, each led by a subject-matter expert. These were limited to six participants each, enabling deep, peer-led discussions on themes like:
AI in risk and compliance
Open banking integration
ESG data standards
Cyber resilience
Change management and culture adaptation
Attendees consistently praised their interactive nature – far removed from the stage‑focused “listening” format often critiqued at other conferences.
Demonstrations & Exhibitor Showcase
Over 60 exhibitors presented tech-driven innovations: Generative AI, open‑banking APIs, ESG reporting tools, embedded finance solutions, and more. A few standouts were:
CRIF highlighted AI-powered credit scoring with ESG overlays – promising dynamic risk assessments backed by sustainability data
Emerging FinTechs demoing AI compliance engines, digital wallet insurance packaging, and data-sharing platforms
Hyland demonstrated the intuitive end-user experience of its Hyland Content Innovation Cloud™ and showed how easy it is to configure, tailor and deploy solutions that can empower key stakeholders across any business
The demo zone allowed engaging, hands-on exploration and real-time Q&As; it complemented the content with practical insights.
Standout Themes & Strategic Insights
1. Tech is Not Enough Without Culture
Recurrent messaging emphasised that culture, trust, governance, and psychological safety are foundational – not secondary – to digital initiatives. Technology alone won’t deliver transformation without a people-first mindset.
2. Cross‑Sector Data Collaboration
Despite heavy investment, institutions still operate in silos. Shared, secure infrastructure and regulatory-aligned frameworks are being prototyped, but broad adoption remains a work in progress.
3. AI-as-a-Personalisation Backbone
AI is shifting from automation to empathy. Organisations showcased tools to hyper-personalise offers yet maintain privacy and inclusion – moving beyond outdated demographic frameworks into genuine behavioural understanding.
4. Embedded Finance & Digital Wallets
Insurance via wallet applications and embedded finance models point to seamless customer journeys – less app hopping, more value delivered at the point of need.
5. Rebalancing ESG & Profit Metrics
Speakers emphasised integrating ESG factors into performance metrics – not just for compliance, but as an operative advantage anchored in long-term stability and stakeholder trust.
Who Should Attend FTS Next Year?
Ideal for:
Transformation and change leaders
CTOs, CIOs, and Heads of Innovation
Data and AI strategists
Operational and HR leaders focused on culture
FinTech innovators and solution providers
If you’re crafting digital transformation strategies, an attuned leader in financial services, or a consultant embedding tech in legacy environments, this summit provides rich, actionable content.
Expect next year’s event to build on this foundation:
More AI-specific tracks, possibly Generative AI streams
ESG deep-dives with case studies on implementation
Expanded regulator involvement around data governance and cross-border compliance
FTS: Final Verdict
Overall, the FTS 2025 delivered on its brand promise:
Interactive and inclusive: 400 roundtables empowered voices across levels.
Cross‑sector learning: Banking, Insurance, Wealth, and Lending streams offered both breadth and depth.
Insightful keynotes: Big ideas on AI, ESG, data-sharing, and culture were well-explored.
Real-world relevance: Exhibitor demos connected theory with practice.
Networking with purpose: Opportunities to engage, learn, and collaborate were abundant.
The Financial Transformation Summit struck a compelling balance between big-picture vision and granular, execution-level insight. It emphasised that while technology enables, it is culture, customer centricity and collaboration that drive real progress. The format – with its roundtables, demos, and keynotes – offered a dynamic platform for knowledge exchange.
If you attended, chances are you left with practical next steps. If you didn’t, you missed one of the most interactive, future-focused events shaping financial services transformation today.
FinTech Strategy meets Eastern Horizon Founder & CEO Christine Le to discuss client expectations and the changing landscape of wealth management
Financial Transformation Summit 2025 EXCLUSIVE
At Financial Transformation Summit, Christine Le, a Chartered Financial Planner and Founder & CEO of Eastern Horizon Wealth Management, spoke on an investment panel – “Generational Wealth Transfer: Meeting the Expectation of Younger Clients”. Appearing with industry colleagues representing Citi Global Wealth, HFMC Wealth and Lightbox Wealth, Le considered: What trends and technologies are shaping NextGen investment decisions, and how can WMs stay ahead? Can digital wealth platforms meet the demand for hyper-personalised, user-friendly experiences? How does social responsibility & ESG investing influence younger investors, and how can advisors align with these priorities? How can wealth managers build and maintain trust with NextGen investors?
Following the panel, we spoke with Christine to find out more…
Hi Christine, tell us about your role at Eastern Horizon?
“I’m a Chartered Financial Planner and the Founder & CEO of Eastern Horizon Wealth Management. We are a financial advisory firm and also a partner practice of St. James’s Place. They are among the biggest wealth management firms in the UK based on assets under management. We get a lot of support from St. James’s Place in terms of technology, compliance and investment solutions. At my practice, we focus on a diverse range of clients including ethnic minorities, especially British Asians in the UK. I’m also the president of the Vietnam Investment and Finance Association in the United Kingdom (VIFA). We aim to provide useful financial information for Vietnamese people in the UK and become a bridge between Vietnam and the UK.”
You were part of a panel at this Summit focused on Generational Wealth Transfer. Can you give us an overview of your thoughts?
“Having worked in the financial services industry for over 15 years, I’ve observed a persistent gap in how the industry serves diverse client segments – particularly ethnic minority communities in the UK. This gap is especially pronounced when it comes to financial education and long-term planning, including wealth transfer across generations. When I speak to members of my own Vietnamese community, I often find that there’s a limited understanding of how to navigate financial systems effectively – from managing investments and pensions to planning for intergenerational wealth. It’s not due to a lack of interest or ambition, but rather a lack of access to culturally relevant and accessible financial advice.
“This is where I believe I can make a meaningful difference. I not only bring professional expertise and technical knowledge to the table, but also a deep understanding of the cultural values, family dynamics, and communication styles that shape financial decision-making in the community. That cultural insight is key to building trust, something that is essential when discussing personal finances and planning for the future. My goal is to help bridge that gap – to empower families with the knowledge and tools they need to make informed financial decisions, preserve their wealth, and pass it on confidently to the next generation.”
Why is this an exciting time for the business?
“At the moment the world is so integrated, and many people can benefit. A lot of people want to go to the UK, invest into the UK. I think with that in mind this is an exciting time to run my business and to be able to bridge that gap, providing sufficient knowledge for people as a trusted source when they come to the UK and need to understand the financial regulations. We can give people solid support to understand the financial processes of settling and building wealth in the UK.”
What other trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“Right now, everyone is talking about AI, and for good reason. In my business, we rely heavily on digital tools to streamline administrative tasks. It’s truly a game-changer. Compared to starting a business 15 years ago, when I would have needed a full-time assistant just to take meeting notes and summarise action points, many of those processes can now be automated, saving both time and cost. Another advantage is in how we communicate. Many of my clients are British Vietnamese. While they understand and speak English, they often feel more comfortable communicating in Vietnamese. We use AI-powered translation tools to make this process faster and more seamless. These technologies are allowing us to broaden the range of services we offer and tailor our support to each client’s needs.”
What pain points are your clients experiencing that you need to address? How are you meeting the challenge?
“It’s about meeting the client’s highest priority. When people come to me, they maybe want to support their children to get onto the property ladder or plan for their retirement. They might be looking to buy a new car or move home. So, as a regulated financial advisor, I can sit with a client and talk them through key priorities and tailor the solutions best for them and help them overcome the pain points of decision-making.
“Additionally, the UK’s financial regulations are complex and changing all the time. It’s very difficult for people to follow. It’s my job as a financial advisor to follow up those changes and stay up to date with the regulations to assess how they can impact our clients and then give them the best recommendations. Allied to this, many of our clients will need support with cross-border services: as they move freely between different countries, they need somebody they can trust, an expert that knows what they’re doing and who can provide the right financial services for them.”
Tell us about a recent success story…
“Success for Eastern Horizon is to know that our clients feel they have somebody to rely on. For example, I have an old friend who came to me as a client. She was based in Vietnam but wanted to relocate to the UK. She had assets across Europe and in Vietnam and needed to understand the big picture of financial planning in the UK. We examined her assets across different countries to bring them into the UK and find the best solution for her to utilise tax efficient savings, pensions and investments to support her family and her business in the long term.”
What’s next for Eastern Horizon when it comes to wealth management? What future launches and initiatives are you particularly excited about?
“Over the next few months, we are keen to collaborate with different associations and communities across the UK – whether that’s related to Vietnam or British Asian communities and offer useful information and workshops and webinars tailored to different audiences. Also, with my work for the Vietnam Investment and Finance Association I want to organise workshops for those keen to invest in the UK but don’t know where to start. They often don’t have anyone to support them so I would like to focus on building a network to offer that bridge to investment in the UK.”
Why do you think the evolution of collaboration between traditional institutions and FinTechs is set to continue? What are you excited about?
“I spent five years working at the intersection of FinTech and WealthTech – where wealth management meets technology. During that time, I witnessed firsthand how the financial services landscape is evolving. Large incumbent banks bring undeniable strengths: scale, regulatory rigour, and long-standing client trust. However, they often struggle with agility. Their legacy infrastructures, many of which still aren’t cloud-based, make digital transformation slow and complex. On the other hand, FinTechs are born digital. They’re nimble, innovative, and quick to adapt to changing customer needs. But without the reputation and stability that traditional institutions have built over decades, they can face challenges in gaining consumer trust or navigating regulatory environments alone. What became clear to me is that banks and FinTechs cannot operate in silos.
“Collaboration is not just beneficial, it’s essential. When they work together, they combine the best of both worlds: the reliability and compliance of traditional finance with the innovation and customer-centric design of new technology. With my own practice, we apply this mindset. We actively look for ways to streamline administrative processes using digital tools – reducing costs, improving efficiency, and freeing up more time to focus on what matters most: building strong, human relationships with our clients. The goal is to use technology not to replace that human connection, but to enhance it. By doing so, we can deliver modern, efficient, and deeply personalised financial services that clients trust.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Eastern Horizon?
“I’ve attended several events this year, and this has truly been one of the most enjoyable and well-organised in the UK. What stood out was the impressive mix of voices – from established financial institutions to bold, forward-thinking startups. Engaging with such a diverse group of speakers has been both insightful and thought-provoking. I’ve come away with fresh perspectives, challenged some of my own assumptions, and found new ideas to explore as we continue building meaningful partnerships for Eastern Horizon Wealth Management.”
About Christine Le and Eastern Horizon Wealth Management
As an Appointed Representative of St. James’s Place, Practice Lead, and business owner, Christine leverages over 15 years of experience in financial services and wealth tech to serve her clients, acquired through extensive work in multinational financial services firms in the UK. This rich background has equipped Christine with the skills and knowledge necessary to effectively oversee the business, ensuring that every facet is managed with the highest level of professionalism.
Christine founded and built this Practice to help clients prosper, build financial security, and attain peace of mind while overcoming financial obstacles.
Her primary focus is on nurturing enduring relationships with her clients, offering them trusted guidance as their financial requirements evolve over time. Throughout her advisory process, clarity remains paramount. By closely collaborating with her clients, Christine strives to identify the most efficient and tax-effective strategies to help them achieve their objectives. Specialising in tailored solutions, Christine is dedicated to understanding her clients’ financial goals and crafting strategies that align with their vision for the future.
FinTech Strategy meets with Citigroup’s Head of ESG Credit Management, Mauricio Masondo, to discover the future for ESG and sustainable finance
Financial Transformation Summit 2025 EXCLUSIVE
At Financial Transformation Summit, Mauricio Masondo, Head of ESG Credit Management at Citigroup, featured on a sustainability panel – ‘The Future of ESG and Sustainable Finance: Balancing Profit and Purpose’. Alongside peers from Generali AM, Gallagher Re and Arma Karma, Masondo considered: What key metrics should FIs use to track ESG progress, and how can they ensure authenticity in their sustainability efforts? What are the key challenges and solutions in developing a holistic ESG strategy amid evolving regulations? How can FIs leverage technology to meet sustainability goals and drive long-term profitability? How can FIs move beyond offering ESG products to embedding sustainability into their core business models?
Following the panel, we spoke with Mauricio to find out more…
Hi Mauricio, tell us about your role at Citigroup?
“In my 32 years with Citi, my career has primarily focused on wholesale credit, and in recent years I built out our portfolio management function. For the past year specifically, I’ve been leading the integration of ESG and climate considerations into our credit processes. As Head of ESG Credit Management, my role is to embed ESG requirements into our credit processes in a way that’s consistently and efficiently applied through technology, policies, training, and governance frameworks. Our strategic approach was not to create an ESG silo that replicates existing processes, but rather to integrate ESG considerations seamlessly into our current workflows. This means any credit analyst can now underwrite ESG credits, sustainable loans, or green loans, rather than requiring dedicated specialists. We’ve equipped our entire team with the knowledge and tools they need to handle these transactions effectively.”
You were part of a panel at this Summit focused on the future for ESG and sustainable finance. Can you give us an overview of your thoughts?
“Data standardisation is absolutely critical, especially as we advance into the AI era. I often reference Moody’s as an excellent example of strategic foresight. Moody’s operates two key businesses – credit ratings and data analytics – and early in their AI journey, they made the strategic decision to structure and normalise all their credit research data. This proved to be transformational because it enabled them to deploy AI solutions much more rapidly with clean, structured datasets. We’re working to apply this same principle at Citi. We’re developing processes to structure climate-related data in a way that will be usable across multiple applications. For example, we’re working on integrating emissions data and climate risk assessments into our credit risk rating models. We’re also exploring how this structured approach could support underwriting processes and securitisations, where comprehensive data packages could facilitate risk transfer transactions with institutional investors. The goal is to build normalised, structured data as the foundation for various applications, from portfolio management to AI-driven solutions. While we’re still in the early stages of many of these initiatives, the potential is significant.”
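The structuring principle Masondo describes – normalising heterogeneous feeds into one clean schema so downstream models and AI tools can consume them – can be illustrated with a minimal sketch. All field names, key variants and figures here are hypothetical, for illustration only; they are not Citi’s actual schema or data:

```python
from dataclasses import dataclass

# Hypothetical unified record for climate-related credit data.
# Field names are illustrative, not an actual Citi schema.
@dataclass
class EmissionsRecord:
    client_id: str
    scope1_tco2e: float   # direct emissions, tonnes CO2e
    scope2_tco2e: float   # purchased-energy emissions, tonnes CO2e
    revenue_musd: float   # revenue in millions of USD

    def intensity(self) -> float:
        """Emissions intensity: tCO2e per $1m of revenue."""
        return (self.scope1_tco2e + self.scope2_tco2e) / self.revenue_musd

def normalise(raw: dict) -> EmissionsRecord:
    """Map one heterogeneous source row onto the unified schema,
    tolerating the key variants different feeds might use."""
    return EmissionsRecord(
        client_id=str(raw.get("client_id") or raw.get("id")),
        scope1_tco2e=float(raw.get("scope1") or raw.get("s1_emissions") or 0.0),
        scope2_tco2e=float(raw.get("scope2") or raw.get("s2_emissions") or 0.0),
        revenue_musd=float(raw.get("revenue_musd") or raw.get("rev") or 1.0),
    )

rec = normalise({"id": "C-001", "s1_emissions": 120.0, "scope2": 30.0, "rev": 50.0})
print(rec.intensity())  # 3.0 tCO2e per $1m revenue
```

Once records share one schema, the same dataset can feed credit risk rating models, portfolio dashboards or AI pipelines without per-source handling – the leverage Masondo attributes to Moody’s early data-normalisation decision.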
Why is this an exciting time for the business?
“We’re witnessing the convergence of several transformative trends. First, one of our biggest challenges is policy divergence across jurisdictions. Countries are taking vastly different approaches to ESG requirements, and for a global bank like Citi, this creates significant complexity in standardising processes across multiple regulatory environments. While challenging, this divergence also creates opportunities to develop scalable, cost-effective solutions that can adapt to various regulatory frameworks. Second, AI is revolutionising how we approach ESG challenges. It’s helping us structure data more effectively, enhance reporting capabilities, contextualise information, and identify trends that would have been impossible to detect manually.
“Previously, comprehensive ESG analysis required significant time, resources, and personnel. AI has made these processes more accessible and cost-effective. Most importantly, there’s been a fundamental shift in how the industry, and governments, view ESG. It’s evolved beyond compliance and emissions reporting to become a significant business opportunity. We need to capitalise on this transition – moving from reactive reporting to proactive opportunity capture. The capital is there, and if traditional banks don’t seize these opportunities, asset managers, private credit firms, and private equity will. We’re partnering strategically with reinsurance companies and asset managers to develop innovative solutions that unlock transition capital and help companies fund decarbonisation projects.”
What other trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“Trade flows are experiencing significant disruption due to current tariff policies. This creates both challenges and opportunities for our clients. Companies are reassessing their supply chain vulnerabilities and seeking greater resilience in their operations. I anticipate we’ll see a regionalisation of trade flows rather than a complete deglobalisation. European companies will likely increase intra-regional trade while reducing intercontinental transactions. We’re seeing similar patterns emerging in Asia and the Middle East. This shift requires banks to be more agile in how we structure trade finance and working capital solutions to meet these evolving needs.”
What pain points are you experiencing that you need to address? How are you meeting the challenge?
“Working capital finance requires increasingly creative solutions that leverage advanced technology. Banks are recognising that FinTechs often have greater agility in developing and implementing these technologies. There’s significant efficiency in having one FinTech serve multiple banks rather than each institution developing independent solutions. This collaborative approach allows us to move faster while reducing development costs and time-to-market.”
Tell us about a recent success story…
“I designed and led the implementation of an early warning monitoring system for Citi’s credit portfolio. The project began with a fundamental concept: create a data lake, develop meaningful metrics, and engage data scientists to interpret the insights. We collaborated with trade officers and partnered with external specialists to enhance our capabilities. Initially, there was scepticism about the system’s value, particularly because we built it as an independent function within our portfolio management organisation, separate from traditional banking and risk management structures. However, this positioning allowed us to collect unique client data and develop insights that weren’t available elsewhere in the organisation. A critical component of our success was establishing a dedicated credit expert team that oversees the entire process.
“This team leads the engagement and communication of alerts, ensuring that insights are properly interpreted and actionable recommendations reach the right stakeholders. The evolution was remarkable. We progressed from generating a few alerts daily to dozens per day, and eventually to hundreds of alerts weekly. More importantly, we developed sophisticated processes for interpreting and acting on these alerts, with our expert team serving as the bridge between data insights and business action. Bankers and risk managers began to recognise the value, and today, three years later, the system is integral to how we conduct annual reviews and client presentations. It’s incredibly rewarding to provide our bankers with comprehensive data and insights that strengthen their client relationships.”
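The pipeline Masondo outlines – metrics computed over pooled portfolio data, alerts generated when a metric drifts, experts interpreting the output – can be sketched with a simple statistical rule. The rule, metric, thresholds and data below are hypothetical illustrations, not Citi’s actual methodology:

```python
from statistics import mean, stdev

# Minimal early-warning sketch: flag any client whose latest metric
# drifts more than k standard deviations from its own history.
# Rule and thresholds are illustrative, not Citi's actual system.
def generate_alerts(history: dict[str, list[float]],
                    latest: dict[str, float],
                    k: float = 2.0) -> list[str]:
    alerts = []
    for client, series in history.items():
        if len(series) < 3:
            continue  # not enough history to estimate a baseline
        mu, sigma = mean(series), stdev(series)
        value = latest.get(client)
        if value is not None and sigma > 0 and abs(value - mu) > k * sigma:
            alerts.append(f"{client}: {value:.1f} deviates from baseline {mu:.1f}")
    return alerts

# Hypothetical client metric histories and latest observations.
history = {"C-001": [100, 102, 98, 101], "C-002": [50, 49, 51, 50]}
latest = {"C-001": 140.0, "C-002": 50.5}
print(generate_alerts(history, latest))  # flags C-001 only
```

In practice such alerts are only the start of the process: as Masondo stresses, a dedicated expert team interprets each alert and turns it into an actionable recommendation for bankers and risk managers.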
What’s next for Citigroup when it comes to ESG? What future launches and initiatives are you particularly excited about?
“While it may sound clichéd, AI truly is transformative for our industry. The breadth of use cases and the rapid pace of learning make it essential to our strategic direction. We’ve established a strategic partnership with Google and are investing significantly in AI use case development and implementation across our operations. From an operational perspective, AI will undoubtedly increase our efficiency as an industry. More importantly, it’s enabling us to evolve our business models and create client solutions that weren’t previously feasible. This opens entirely new avenues for innovative product development. Additionally, since CEO Jane Fraser joined, we’ve embarked on a comprehensive transformation program that’s delivering strong results in terms of financial performance and returns. We’ve restructured and simplified our operations, which positions us more competitively as we refresh our leadership teams and attract new talent. The trajectory is very promising.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“The current tariff environment is creating opportunities for FinTechs that facilitate connections between banks, investors, and corporations. It’s also presenting consolidation opportunities for private equity firms within the rapidly expanding FinTech ecosystem.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Citigroup?
“The panel brought together diverse perspectives from FinTech, asset management, insurance, and banking – all addressing common challenges that span our sectors. This cross-industry dialogue creates tremendous opportunities for collaboration and mutual understanding. The key now is translating these conversations into action. We need to maintain these connections, expand the dialogue, and avoid making decisions in isolation. FinTechs possess the agility to implement changes in their operating models far more quickly than large incumbents like us. However, our procurement systems and processes aren’t always conducive to collaborating with smaller, innovative companies. Events like this highlight the need to streamline how institutions like Citi can collaborate with and learn from FinTechs. We must accelerate our ability to adapt to a rapidly changing world.”
We’re helping build more sustainable, economically vibrant communities around the world.
At Citi, helping our clients navigate the challenges and embrace the opportunities of our rapidly changing world is fundamental to our mission of enabling growth and economic progress.
FinTech Strategy meets Vikki Allgood, Director of Technology Strategy at Fidelity, to discuss the fundamental importance of culture in driving a successful business transformation
Financial Transformation Summit 2025 EXCLUSIVE
At Financial Transformation Summit, Vikki Allgood, Director of Technology Strategy at Fidelity International, gave a keynote speech entitled ‘Psychological Safety – The Hidden Key to Transforming Your Business’. Following her appearance, we spoke to Vikki to learn more…
Hi Vikki, tell us about your role at Fidelity?
“I am Director of Technology Strategy for Fidelity. We’re looking at how we can adapt our response to our business’ needs through our technology, to meet whatever demand comes over the horizon – tomorrow and in the years to come.”
You spoke at this Summit about psychological safety driving business transformation. Tell us more…
“At Fidelity, our strategy for our technology has culture as our foundational pillar. Talking with our leaders over the last 18 months, we looked to understand how we can create a brilliant culture, recognising that psychological safety is a fundamental element in that.
“Transformations often stumble because the business plan forgets its most volatile – and most valuable – component: the people asked to deliver it. Without psychological safety, even well‑funded and well‑organised programmes stall. Teams focus on protecting themselves instead of challenging ideas. That’s when risks remain hidden until they become costly, and the collective new ideas needed to solve the biggest challenges are never formed. That’s why we ask leaders to invest time and energy in building a culture where it’s safe to question, experiment, challenge the status quo and admit what’s not working. In that environment the behaviours every transformation depends on (curiosity, creativity, problem‑solving, healthy challenge) all naturally emerge.
“Psychological safety isn’t some new trendy HR slogan, it’s a timeless basic human need wired into our biology through millennia of evolution. When people sense social threat, the amygdala floods the body with cortisol and the prefrontal cortex (the part of our brain we rely on for reasoning, innovation, etc.) literally dims. Remove the threat and the brain’s chemistry flips: dopamine and oxytocin rise, and teams move from cautious compliance to bold collaboration. Leaders must ask themselves if their teams can lean in and challenge effectively or if they are staying quiet to protect themselves. The hidden key is simple but non‑negotiable: leaders must consciously, relentlessly and courageously build psychological safety through everything they do and say. If they do that, then technology and transformation plans will have the human engine they need to succeed.”
Why is this an exciting time for Fidelity?
“I think that within the industry, all the opportunities that are coming along, and our ability to adapt to our customers’ needs, are what make it exciting. We are all on an exponential curve of change. Technical possibilities, customer expectations, regulatory demand and industry landscapes are all going to keep moving, with new challenges and opportunities presenting themselves. We are ensuring that we can meet the needs of our customers both today and tomorrow. Finding new ways to do that is pretty exciting.”
What trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“So, from a technology perspective, I would say that we are making sure that all our foundational elements are there so that we can respond and adapt. One of Fidelity’s differentiators is that we have historic long running relationships with our customers. We are reintegrating our data strategy to allow us to better leverage this, in addition to market data, allowing us to provide personalised solutions to our customers.
“AI is absolutely generating a buzz for us right now as well, and not just Generative AI. We’re seeing a push towards Agentic AI and how we can look to provide faster, quicker, more cost-effective services for our business partners who can then provide better outcomes for our customers. This in combination with our long-standing history gives us a unique opportunity.”
What pain points are your customers experiencing that you need to address? What are they asking you for help with? How are you meeting the challenge?
“We need to understand the new generations entering the wealth space, what their expectations are and how they engage with us. We’re looking to ensure we can keep pace with their demands. For example, we’ve just launched Pay by Bank, allowing our customers to pay money into their accounts in a faster, more secure way. This feature leverages the Open Banking technology that is now available to financial institutions.”
Tell us about a recent success story for Fidelity…
“Across the technology landscape, we have been amplifying our existing cloud strategy by removing complexity in our hybrid setup, reducing the number of dependencies back to on-premises. This is a well-known challenge for financial institutions that have regulatory reasons to keep highly confidential systems in-house. This work will allow us to respond at pace to what customers need. Looking a couple of years down the line, nobody can be sure what the next big opportunities are going to be, so ensuring we’re building that foundation to respond to what comes over the horizon is fundamental.”
What’s next for Fidelity? What future launches and initiatives are you particularly excited about?
“Security is incredibly important to us. With that in mind, we are exploring Quantum to understand both the opportunities and risks that it could present in the future and how we can stay at the forefront of it. Ensuring a secure and reliable service for our customers is an absolute non-negotiable part of our strategy.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“I think the reality is that we need the collective mindsets to come together to create the best outcomes. We’re never going to have all the answers all by ourselves. So, starting to engage and work with people and collaborate means that we get to have a better, wider perspective. Coming to events like this, we get to learn, understand what other industries are doing, what other areas are looking at, and it helps to widen our perspectives and have more opportunities to find those out of the box ideas that are going to then help our customers.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Fidelity?
“I was particularly keen to attend this conference because I think transformation and how we can do this successfully is so important at the moment. The reality is, sadly, and I covered this in my talk, a staggeringly large number of transformations miss the mark or fall short. And so, learning and embracing how you can ensure that you go after it and you get the value that you’re aiming for, that is for me what’s important. As I said, getting that learning, talking to each other, understanding what’s worked, what hasn’t worked and sharing tips and techniques is actually incredibly powerful and something you can then take back and use at your organisation.”
It has been more than 50 years since we were founded. We’ve seen many market cycles – bull and bear, boom and bust. We have stayed the course through different investment environments regardless of market performance.
The needs of our customers have always steered our decisions, which is why we’ve stuck to our core activity of investing. We believe this is what allows us to excel – and, even more importantly, to repay the trust placed in us by our customers.
Whether you’re investing for the first time, or have a wealth of experience, it’s essential to be informed and to be comfortable with your decisions. Through Trustpilot, you can read up-to-the-minute, real-world reviews and see for yourself how Fidelity aims to put the customer first and make investing a bit easier.
Our do-it-yourself online services give you 24/7 access to our investment guidance, handy tools, and range of accounts from your computer, tablet or phone. Transfer your existing investments to us, or open a new account online and begin investing in just a few steps.
FinTech Strategy speaks with Matt Bazley, Account Executive at Hyland, to explore how the content intelligence and process automation specialists are helping to drive operational efficiencies for their financial services clients
Financial Transformation Summit 2025 EXCLUSIVE
Hyland empowers organisations with unified content, process and applications intelligence solutions, unlocking the profound insights that fuel innovation. The Hyland team was at Financial Transformation Summit to reveal the ways organisations can transform their processes with the Hyland Content Innovation Cloud™. By combining AI-powered automation with built-in integrations to productivity tools and business applications, Hyland streamlines workflows across multiple channels, accelerating response times, boosting productivity and improving customer satisfaction.
At the event, Neil Rayment, Sales Solution Engineer, demonstrated the intuitive end-user experience and showed how easy it is to configure, tailor and deploy solutions that can empower key stakeholders across any business. We spoke to Hyland’s Matt Bazley, Account Executive for Financial Services, to find out more…
Hi Matt, tell us about your role at Hyland?
“I’m the Account Executive responsible for banking across the UK and Ireland. I’ve been with the company for just over 18 months. Across my career, I’ve been helping financial services institutions for over 15 years with digital transformations and various programmes.”
What are the key digital transformation solutions Hyland offers Financial Services organisations? How are they making a difference? What are some of the use cases you’re exploring?
“Hyland is at the cutting edge of the content space. We have what we call our Content Innovation Cloud, which delivers content intelligence, process intelligence and application intelligence. What that means in reality is that we’re helping organisations access content they currently can’t reach because it’s spread over many siloed systems and sits in an unstructured format. With our content intelligence, we’re able to get access to that unstructured data – around 80% of an organisation’s data in the financial services sector – and provide knowledge and insight on that content, which helps organisations make better strategic decisions. Allied to that, with our process intelligence, we’re able to help automate processes across the business. Whether it be orchestrating use cases and workflows or integrating with other systems to deliver application intelligence, we’re able to manage that whole end-to-end life cycle of information across an organisation.”
Why is this an exciting time for the business?
“We’re excited because our strategy is really leading the way. We’re leveraging large language models (LLMs) and AI to deliver real-life use cases that solve actual challenges. A lot of the time AI projects fail because businesses are trying to implement AI that isn’t actually a solution to a problem. The AI we’re using solves a real-life challenge that businesses face: they want to be hyper-personalised for customers and more customer-centric, and you can’t really do that if you’re only leveraging 20% of the data you hold about your customers. That’s why getting access and insight around this unstructured data is really vital for financial services organisations right now. We are able to help them leverage that unstructured data and meet them where their data is. So, it’s not a case of having to migrate all of that data into different platforms or into our platform. We federate across your information wherever it’s held as a financial services organisation, and that’s a really game-changing position for us and for the industry.”
What trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“AI is the big one. Although it is a bit of a buzzword that everyone’s mentioning nowadays, we’re actually delivering AI solutions to solve problems that businesses face. And that’s one of the real trends in the industry. Most AI projects fail, and companies want AI projects that succeed and deliver real value. The other thing we’re seeing is the rise of hyper-personalisation as part of being really customer-focused and customer-centric. Again, by helping businesses leverage that 80% of information around their customers that they don’t currently have access to, and provide insights on that information, we’re helping those organisations to become really specific and personalised in their dealings with their customers.
“The final piece is around data and governance. So, security around our data as customers, because we’re all consumers at heart and want to know that our information is secure. Using best-in-class processes around security and governance is what we’re really focused on. And that’s a real trend in the market as well. We’re making sure that while we’re leveraging that information about customers, we’re keeping it safe and only using it for what it’s intended for and making sure the processes and governance around that information are really robust.”
What other pain points are clients in the FS space experiencing that you need to address? What are they asking you for help with? How are you meeting the challenge?
“The big one is siloed information across multiple systems as part of digital transformation strategies. Over the years, I’ve seen many businesses implement point solutions. They might be best-in-class point solutions… But that means you end up with information, data and processes across 10, 15 or 20 systems. How do you then unify that data and leverage it to make the user journeys more effective? And how do you make the customer journeys better, whatever channel those customers are using?
“What we see is that while trying to be omnichannel for their customers, organisations end up with multiple solutions. One for their mobile app, a solution for their website, a solution for in-branch banking… So, you end up with omnichannel processes that are actually siloed processes. What we are trying to help businesses do is to unify those processes. We can break down those silos and make it a really seamless, integrated journey internally and externally for colleagues and customers.”
Tell us about a recent success story …
“A great example is our work with ABN AMRO – one of our longstanding and valued customers. The bank was looking for a solution because of this very challenge: it had multiple siloed systems holding a lot of information and a very complex architecture. They went to market and Hyland was able to prove our solution could manage the sheer volume and complexity of the information and content that they had. And most importantly, we were able to help them integrate with their line-of-business systems very easily to create that seamless internal/external journey for both users and customers.”
What’s next for Hyland? What future launches and initiatives are you particularly excited about?
“It’s all about continuing to grow for us. With the Content Innovation Cloud, the reception we’ve received from the market, from our customers, has been absolutely tremendous. Businesses are so excited to see the ability and capability of what we’re able to do. And what we’re able to deliver for them in terms of real value through the Content Innovation Cloud. We’ve got customers onboarded already. It’s now about expanding that list of customers who are going to see real value from leveraging the cloud, our AI solutions and driving efficiencies with our content process and application intelligence across their businesses.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“Across the market over the last 15-20 years, banks have started to see FinTechs more as allies than competitors. And they’re leveraging these technologies rather than trying to challenge them. I think that’s going to continue because FinTechs are far more agile. And as customer expectations continue to evolve and become more demanding, banks need to evolve and deal with these demands more effectively and more fluidly. That’s why leveraging FinTechs is going to be a key differentiator over the next 10 years. The trend of banks and FinTechs working together and collaborating, rather than challenging each other, is going to continue.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Hyland?
“It’s my fourth year coming here with a couple of different companies and I always find this event really valuable. Not only to obviously promote our products and our brand… But to speak to key decision-makers and peers across financial services. We aim to learn from them about whether the challenges we perceive as a vendor are seen by them as a customer. We will continue to learn and evolve our business around key market challenges. Hyland can then focus our solutions around the real-world problems our peers are seeing across financial services. Coming to this event is a great way to meet as many people as possible. And just really enjoy having those meaningful conversations with leaders in the financial services sector.”
Hyland puts your content to work, making it smarter and more accessible in the moment of need.
Hyland’s content, process and application intelligence solutions empower customers to deliver exceptional experiences to those they serve. The solutions capture, process and manage high volumes of diverse content, helping you improve, accelerate and automate operational decisions and workflows.
Digital DNA – Exploring core infrastructure, platform strategies, and foundational technologies.
Embedded Intelligence – AI, machine learning, data strategies, and real-time analytics.
Beyond Fintech – Partnerships between fintechs and other sectors like retail, health, and climate.
Governance 2.0 – Regulation, digital identity, privacy, and ESG compliance.
Day three featured further impactful sessions across all four pillars, offering attendees valuable insights and strategies for innovation.
Highlights from Key Sessions at Money20/20 Europe:
How to Create and Leverage FinBank Partnerships
The discussion focused on the evolution and success of FinTech partnerships with banks. Key points included the shift from transactional partnerships to more collaborative, value-driven relationships, emphasising joint KPIs and product creation.
Alex Johnson, Chief Payments Officer, Nium
“You really have to differentiate. You really have to stand out for a bank to say, ‘Yeah, I like what you offer enough to go through six months of onboarding.’ Dare I say, maybe more.”
John Power, SVP, Head of JVs & AQaaS, Fiserv
“The legacy system, it’s a fact of life. They’re there. They’re pervasive. They’re going to be here for a long time, and banks historically have made huge investments in those platforms and systems. So I think both the challenge for the bank and the opportunity for the FinTech is, how do you, at the front end of those legacy systems, develop new products that can scale and that you can bring cross-border easily and readily.”
“It really is cutting the line to be able to deliver opportunity for customers and to be able to expand propositions for new customers.”
“The economic development supply chains shifting to low-to-middle-income countries are incredibly important right now, and cross-border payment rails have not been good in low-to-middle-income countries.”
Where Fintech Goes Next: Tapping into Platforms and Verticals
The discussion centred on the democratisation of financial services through embedded finance. The panel emphasised the importance of data quality, personalisation, and strategic partnerships in delivering seamless financial experiences – ultimately enhancing customer satisfaction and improving business efficiency.
“Embedded finance is going to be defined by region and use cases.”
Amy Loh, Chief Marketing Officer – Pipe
“Small businesses don’t want to manage their business through a bunch of different tools that are stitched together. They’re looking to platforms to do everything for them and keep high-end services.”
“Most platforms or merchants out there are trying to diversify revenue, and they will get auxiliary revenue, or maybe primary revenue, through FinTech activity.”
The Neobanks Strike Back
In a dynamic exploration of neobanking’s evolution, Ali Niknam revealed bunq’s remarkable journey from a tech-driven startup to a sustainably profitable digital bank. By leveraging AI across every aspect of their operations, bunq has transformed traditional banking, reducing support times to mere seconds and creating a hyper-personalised user experience. Niknam emphasised the power of user-centricity, showing how innovative features like simple stock trading and multi-language support can democratise financial services.
The bank’s strategic approach – focusing on user needs rather than investor expectations – has enabled them to expand thoughtfully, with plans to enter the UK and US markets. By embracing technological change and maintaining a relentless commitment to solving real customer problems, bunq exemplifies the next generation of banking.
Ali Niknam, Founder & CEO, bunq
“Somewhere in the 70s, we let go of the gold standard, and now currencies are basically floating. The only reason why a dollar or a euro is worth what it’s worth is because of trust and perception. Philosophically, it’s very logical that we have found another abstraction layer by introducing stablecoin, which is not much else than a byte number that has a denomination currency as a backing asset that itself doesn’t have anything as a backing asset. A lot of people might ask, ‘Why would you need a stablecoin? We have euros. I go get a coffee, pay with Apple Pay or cash.’ But there are many countries on this planet where the local currency is not stable. If your country has an inflation rate of 30,000% like Zimbabwe, you would really love to use a different currency. The US dollar has been the currency of choice, but as a normal person, you cannot access the US dollar. A US dollar stablecoin that you can access by simply having a mobile phone – that’s going to be transformational for large groups of people.”
Innovating When Regulation Can’t Keep Up: Lessons from NASA
Lisa Valencia covered an array of topics, from her 35-year career at NASA and her Guinness World Record to the rise of private entities like SpaceX, which is planning 180 launches this year, and the increasing role of public-private partnerships in space exploration. She also touched on international collaborations, particularly with the European Space Agency and the Italian Space Agency, and the potential for space tourism and colonisation of the moon.
Lisa Valencia, Programme Manager/Electrical Engineer – Pioneering Space, LC (ex NASA)
“Back in the day, NASA got 4% of the national budget. Now it’s down to just 0.1%, so we’ve had to get creative with private partnerships. SpaceX is the perfect success story. They came to us in 2007 needing money after some rocket mishaps, and look at them now! From my balcony, I see their launches every other day. They’re planning 180 launches this year alone. Talk about a return on investment!”
“We’re planning to colonise the South Pole on the moon. The idea is to extract water and hydrogen from the regolith—both for living there and for fuel.”
Scaling Internationally in 2025: Funding, Innovating, and Breaking into New Markets
The conversation focused on the growth and strategy of fintech companies, particularly those with a strong presence in Europe and the US. The panel featured Ingo Uytdehaage, CEO and co-founder of Adyen, and Alexandre Prot, CEO of Qonto. Both leaders expressed a preference for organic growth over acquisitions, emphasising the importance of scaling efficiently before pursuing an IPO.
Ingo Uytdehaage, CEO and co-founder of Adyen
“I think an important part of scaling a company is not just thinking about your product, but also considering the markets you want to address, and how you ensure you become local in each country.”
“We realised over time that if we really want to bring the customers, we need to have the best licenses to operate. A banking license gives you a lot of flexibility.”
“Being independent from other companies, other financial institutions, that gives you flexibility to build what your customers really want.”
“I think it’s very important, also in Europe, that we continue to be competitive. If you think about regulations and AI, we shouldn’t try to do things completely differently compared to the US.”
Alexandre Prot, CEO of Qonto
“We need to be very strict about tech integration and avoiding legacy which slows us down.”
“We still need to scale a lot before we have a successful IPO. A few team members are working on it and getting the company ready for it. But, the most important thing is just scaling efficiently in the business, and maybe an IPO would be welcome in a couple of years.”
Putting The F in Fintech
The panel discussion focused on the role of women in FinTech based on personal experiences.
Iana Dimitrova, CEO, OpenPayd
“At times, being underestimated is helpful, because if you’re seen as the competition, driving an agenda becomes more difficult. So what I found, actually, over a period, is that bringing your emotional intelligence, leaving the ego outside of the room, and just focusing on execution is incredibly helpful.”
Megan Cooper, CEO & Founder, Caywood
“The moment we start defining ourselves as a female leader or a female entrepreneur, you almost put yourself in a bit of a box. And so I think just seeing yourself on an equal playing field, and then operating on an equal playing field and interacting in that way, is quite advantageous.”
“We can’t just want diversity and hope it happens. We actually have to be intentional about creating it.”
Valerie Kontor, Founder, Black in Fintech
“Black women make up 1.6% of the FinTech workforce, but when we look at the financial reality of black women by the age of 60, only 53% of black women have enough money in their bank account to retire. We need to start marrying people in FinTech and the people that we need to serve.”
Money20/20 Europe 2025 closed its doors but the next edition of the conference will return to Amsterdam from June 2–4, 2026, promising to continue the tradition of shaping the future of financial services…
InsurTech Insights Europe 2025: A Transformational Gathering for the Future of Insurance
InsurTech Insights Europe 2025, held on March 19-20 at the InterContinental London – the O2, reaffirmed its status as the premier conference for insurance technology professionals across the continent. Drawing more than 6,000 attendees from over 80 countries, the event brought together C-level executives, startup founders, investors, and tech leaders. They explored the evolving future of insurance powered by innovation and digital transformation.
Key Themes
With seven stages and over 400 speakers, the conference agenda was packed with compelling keynotes, forward-looking panel discussions, fireside chats, and practical workshops.
The overarching theme of the 2025 edition was crystal clear: artificial intelligence (AI) is no longer a futuristic concept; it’s the driving force behind today’s insurance innovation. Topics like automation, generative AI, claims transformation, underwriting analytics, embedded insurance, cybersecurity, and ESG all reflected a dynamic industry poised for rapid acceleration.
A Focus on Leadership & Diversity
One of the standout sessions was the panel discussion titled “The ROI of Gender Diversity: Breaking the Glass Ceiling for Women in Leadership”, held on the Purple Stage. Featuring high-level voices from Solera, unlock VC, and AXA XL, the panel addressed the often-overlooked yet crucial importance of gender diversity in executive roles. The discussion didn’t stop at raising awareness; it presented measurable business outcomes tied to diverse leadership and called for action to foster inclusivity across all levels of the industry.
Complementing this session was “The Women in Insurance Power Group Meet-up”, a networking event held at the Sky Bar on the 18th floor. Attendees not only connected over lunch but were also invited into an exclusive WhatsApp group, encouraging long-term collaboration and support among female leaders and allies in the space.
The Innovators Hub and the ITI Marquee: Where the Future Was Born
A major addition to this year’s conference was the debut of the ITI Marquee, a vibrant, purpose-built zone dedicated to showcasing bold ideas and startup brilliance. This space housed the Innovators Hub, which included its own dedicated Innovator’s Stage. Here, early-stage ventures and InsurTech pioneers pitched their solutions to panels of VCs, corporate innovation leads, and fellow founders.
This setting offered more than exposure; it cultivated real-time connections between startups and investors, giving many smaller players their first shot at meaningful partnerships or funding opportunities. The diversity of ideas, from AI-powered claims processors to data-driven risk models for climate insurance, reflected the industry’s hunger for next-gen solutions.
Keynote InsurTech Highlights
One of the most talked-about moments of the event came from Daniel Schreiber, CEO and Co-Founder of Lemonade, whose opening keynote explored how AI can dramatically enhance customer experience in insurance. He challenged the audience to rethink not just how insurance is sold or serviced, but why it’s offered, and how technology can transform its social impact.
Another crowd favourite was the session on “The Path to Embedded Insurance”, which unpacked how insurance products are increasingly being bundled into digital ecosystems like e-commerce platforms, mobility apps, and smart home technologies. This wasn’t just a hype piece. Real-world case studies from European neobanks and auto insurers illustrated how embedded models are already driving customer growth and retention.
Among the compelling keynotes on the Main Stage, Sofia Kyriakopoulou, a Fintech Strategy AI Champion and Group Chief Data & Analytics Officer at SCOR, revealed how GenAI innovation at one of the world’s largest reinsurers is moving beyond proofs of concept into full production.
InsurTech Deep Dives: AI, Data & Digital Claims
Sessions throughout the week made it clear that AI is at the forefront of virtually every area of insurance operations. Whether it was applied in predictive underwriting, fraud detection, or personalised customer engagement, companies are looking to AI not just for marginal gains but foundational transformation.
A standout workshop on AI in Claims Automation included live demos from startups using computer vision and NLP to automate damage assessment. Meanwhile, a session on Data-Driven Underwriting shared how insurers are replacing traditional risk proxies with real-time data streams, from wearables to smart meters.
Cybersecurity was another hot topic, with insurers discussing how to build resilient cyber products in the face of increasing digital threats and regulatory complexity.
Global Meets Local: The Power of Diversity
Although a European event at heart, the conference had a distinctly global flair. Speakers came from the U.S., Singapore, Brazil, South Africa, and the Middle East. They brought diverse perspectives on shared challenges such as climate change, digital regulation, and consumer trust.
Simultaneously, European startups shone on stage. Companies from the UK, Nordics, DACH, and Benelux presented innovative, often niche solutions for localised market challenges—from parametric crop insurance to real-time mobility coverage.
Trade Exhibition & Brand Visibility
The exhibition floor was a hive of activity, featuring booths from established players like Munich Re, Swiss Re, Guidewire, Duck Creek, and Cognizant, alongside vibrant startup showcases. Product demos, swag giveaways, and live challenges kept engagement high and made it easy for brands to stand out.
The conference proved to be a golden opportunity for brand elevation, allowing companies to position themselves as thought leaders or rising disruptors in front of a carefully curated audience.
InsurTech Insights Europe: The Verdict
The closing remarks from Kristoffer Lundberg, CEO of InsurTech Insights, captured the spirit of the event:
“It’s a privilege for us to gather together the sharpest minds in the industry to discuss the role of AI in insurance. The direction and impact of these technologies will shape the space for decades to come.”
Indeed, InsurTech Insights Europe 2025 wasn’t just a conference; it was a strategic gathering, a melting pot of ideas and a launchpad for the next generation of insurance products and platforms. Attendees walked away not just with new business cards, but with fresh ideas, collaborative leads, and the motivation to drive innovation within their own organisations.
As the insurance industry continues to evolve amid mounting global challenges and rapidly advancing tech, this event served as a timely and energising reminder… The future is not something to wait for—it’s something to build, together.