Todd Moore, Global Vice President, Data Security Products at Thales, on why making AI security a boardroom priority today, will help firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation

Financial Services organisations are responsible for some of the biggest growth in the global economy. Equally, they’re some of the most vulnerable. Like many other sectors, they’re racing to embrace AI, but with adoption comes new security risks.

According to Thales’ Data Threat Report: Financial Services Edition 81% of FinServ organisations are now investing in GenAI-specific security tools, with nearly a quarter using newly allocated budget. This surge in funding marks a turning point: AI security has moved from being an IT concern to a boardroom priority.

The fact that new budget lines are being carved out specifically for AI security signals a fundamental shift in corporate strategy. Boards increasingly recognise that protecting AI systems is as critical as safeguarding payment rails or core banking infrastructure. For an industry built on trust, resilience, and regulatory compliance, this investment wave shows how central AI has become to both risk management and competitive growth.

Balancing AI Innovation and Security

While FinServ organisations are aware of the security risks AI poses, they’re also seizing upon the opportunities it presents. The report has found that in 2024, FinServ businesses outpaced the broader market in AI deployment, leading in enabling employees to use AI and ahead in AI integration, which has continued into 2025. Additionally, 45% say they’re in the ‘integration’ or ‘transformation’ phases of their GenAI journey, compared to just 33% across wider industries.

AI’s ability to accelerate services, automate processes, and analyse data at scale makes it an exciting prospect, especially in the financial sector. This makes securing AI systems a priority for FinServ organisations, with increased GenAI integration reflecting developing organisational maturity and progress beyond experimentation.

The Risk

Yet the scale of opportunity is matched by the scale of challenge. AI systems require vast amounts of structured and unstructured data to conduct analysis and make recommendations.

For FinServ organisations, this often includes highly sensitive customer and transactional information, proprietary algorithms, and records bound by strict regulatory oversight. The risk is not only about whether AI systems themselves are secure, but whether the data they’re working from is accurate, as well as whether their adoption inadvertently creates new routes to data exposure and exfiltration.

Businesses need a clear strategy to fully understand how AI models are operating within their IT infrastructure, the applications they’re interacting with, and the data they’re accessing and pulling from.

The Response

Balancing AI’s opportunity and risk means embedding security at every stage, from design to deployment and ongoing monitoring. Newly allocated budgets for AI security, with nearly a quarter of FinServ firms making such investments, show how central AI has become to board-level strategy. These investments move firms beyond reactive fixes to proactive frameworks that evolve with the technology. AI security is no longer just an IT concern, it’s a strategic priority requiring collaboration between security, compliance, and business leaders. By factoring risk into early planning, organisations can align innovation with responsibility and build resilience for the long term.

Pioneering AI Security

Building on investment in AI-specific security is only the beginning. As scrutiny intensifies, the firms that will lead are those that treat AI security as integral to business strategy, not a bolt-on layer. Success will require visibility into how models behave, continuous validation against emerging risks, and adaptive controls that evolve with the threat landscape.

The financial services organisations that embed these safeguards into their core infrastructure will protect sensitive data as well as setting a benchmark for resilience and trust in an AI-driven economy. By making AI security a boardroom priority today, these firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation.

Thales: AI is the New Insider Threat 

Thales 2026 Data Threat Report Finds 70% of Organisations Rank AI as Top Data Security Risk

Data security has taken centre stage as the success of enterprise AI initiatives increasingly hinges on consistent, controlled access to proprietary organisational data sources. The 2026 Thales Data Threat Report examines the complex calculus that organizations must undertake to enable innovation while securing their most valuable asset – their data.

This research was based on a global survey of 3,120 respondents fielded via web survey with targeted populations for each country, aimed at professionals in security and IT management. 

Read the Report

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech

Jamil Jiva, Global Head of Asset Management at Linedata, on why the next chapter of AI-driven finance will be shaped not just by technology, but by creativity

Beyond Data: Where AI Finds Unexpected Inspiration

The discussion about training AI largely focuses on concerns that accessible, human-generated data is limited and may soon run out completely. If this is the case, how can technology that depends on a seemingly endless stream of inputs to iterate, test, and adapt deliver the results we expect? AI relies on structured, high-quality data to thrive, but what happens when we run out of spreadsheets and financial models to train AI? We need new data sources to ensure it continues to learn, adapt, and deliver accurate insights. Video games stand out as offering some of the richest, most expansive, and complex environments for AI training.

At first glance, video games and financial operations seem to belong to entirely separate worlds. However, AI connects these domains, with models leveraging virtual-world training to tackle real-world financial tasks. Financial documents such as credit agreements and tax returns are often convoluted, unstructured, and labour-intensive to process. Therefore, AI designed to interpret such data must possess strategic reasoning, real-time adaptability, and advanced pattern recognition. So, could video games be the ideal training ground?

Contrary to popular belief, gameplay can significantly improve how people think, learn, and solve problems. The abilities required to excel at video games closely reflect the skills AI systems must acquire today.

Levelling Up: What Virtual Worlds Teach Machines

Practice leads to proficiency, a principle that applies to both humans and AI. Interestingly, many of the most significant advances in AI development have emerged not from conventional data training, but from taking creative approaches. Games push AI to emulate human thinking and sharpen its statistical intuition.

These game-trained models are neither expensive nor heavily reliant on resources, and they sidestep the issue of data scarcity. As a result, they are actively shaping the future of financial intelligence. The examples below offer a clear demonstration of the potential of gameplay.

Virtual Economies: Lessons from World of Warcraft

World of Warcraft, with millions of players interacting in an immersive and dynamic world, features an economy that closely mirrors real-world financial systems, complete with inflation, supply and demand cycles, and fraud risks. The game even inspired one of the most renowned epidemiological studies: when the in-game ‘Corrupted Blood’ plague spread unpredictably, scientists used it as a model for real-world pandemic simulations.

Financial models depend on vast, interconnected data networks, much like the economy in World of Warcraft. Organisations employ AI to continuously monitor patterns, detect anomalies such as fraud or misstatements, and optimise data extraction for financial reporting, mirroring the way AI analyses virtual economies.

Urban Chaos: GTA V and Real-World Simulation

While Grand Theft Auto (GTA) V is famous for its open-world chaos, researchers have leveraged its traffic systems and non-player character behaviours to train AI for applications such as self-driving cars, crime pattern recognition, and urban planning. At its heart, GTA provides a platform for AI to process vast amounts of unstructured data in real time.

Similarly, financial institutions manage millions of data points from a wide range of sources. Their AI tools must automatically extract insights, classify information, and normalise complex formats. GTA serves as a controlled yet intricate environment for simulating scenarios, enabling AI to optimise for real-world tasks through ongoing feedback loops.

Sandbox Creativity: Minecraft and Adaptive Thinking

Minecraft provides a sandbox environment where AI learns through exploration. OpenAI even trained an AI to play Minecraft by watching YouTube tutorials, closely mimicking the way humans learn. Similarly, any AI used by financial institutions must be able to self-learn from new document types and structures, adapting just as a Minecraft AI learns to survive.

Reinforcement learning, where AI improves based on feedback, is a key element of intelligent document processing. Thanks to its vast scalability and dynamic, hierarchical environments, Minecraft serves as an ideal setting for navigation and repeated feedback loops, helping models develop domain-flexible reasoning.

Multiplayer Mayhem: Dota 2 and the Art of Teamwork

Dota 2 stands out as one of the most complex competitive games ever created, presenting AI with challenges in real-time decision-making, strategic coordination, and adaptability. OpenAI Five, trained on the equivalent of 45,000 years of gameplay within just 10 months, managed to defeat renowned, professional human teams. As anyone who has mastered StarCraft knows, tactical adaptability is essential for gaining the upper hand.

Financial institutions operate in environments that are just as dynamic as the shifting levels of a video game. Market conditions, regulations, and data formats are in constant flux. AI must be able to adjust to new document structures, handle missing information, and navigate edge cases, much like AlphaStar adapts to an opponent’s unpredictable strategies.

From Pixels to Profits: Bringing Game Logic to Finance

Whether to streamline operations, mitigate risks, or make informed decisions in today’s data-intensive financial landscape, AI has the potential to fundamentally transform financial offerings, delivering personalised and evolving experiences that foster understanding and combine seamlessness with regulatory compliance.

Yet AI does not simply require more data from which to learn; it needs better data. Video games offer near limitless, pre-built, highly complex digital worlds where AI can test hypotheses, simulate scenarios, and refine decision-making models. By utilising these unique environments, AI is challenged to enhance its speed, accuracy, and efficiency. 

The world of video games has many lessons we can learn when building AI, and given AI’s remarkable ability for transferable learning, it makes sense to leverage these pre-trained models to power essential financial workflows. It is more than just document processing; it is thinking, and the same intelligence that enables AI to defeat world champions in Dota 2 is now driving the next generation of financial AI solutions.

The next chapter of AI-driven finance will be shaped not just by technology, but by creativity. By embracing unconventional data sources such as the immersive complexity of video games, industry leaders will unlock new possibilities for personalisation, security, and customer engagement.

Learn more at linedata.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Neobanking

Richard Doherty, Head of Wealth & Asset Management, Publicis Sapient, on how asset managers must redesign their enterprise for AI-driven decision intelligence

The asset management industry is entering a structural inflexion point. The first wave of AI focused on improving productivity through copilots and automation. The next wave will fundamentally reshape how decisions are made, executed, and governed across the enterprise. This is not a technology upgrade. It is an operating model shift.

Despite significant investment, many firms remain trapped in fragmented AI experimentation. A majority are yet to realise meaningful economic returns from AI, not due to lack of capability, but due to a failure to redesign how intelligence is applied across the organisation. The gap between ambition and outcome is not a technology problem. It is a structural one.

From Automation to Decision Intelligence

The industry conversation has evolved. The question is no longer whether to adopt AI, but how to scale it across the enterprise. However, most firms are still approaching this challenge through the lens of automation, identifying tasks that can be executed faster or at lower cost. This delivers incremental value, but does not address the underlying constraint: the structure of decision-making within the organisation.

Traditional operating models are built around sequential workflows. Work moves from function to function: research, compliance, operations, and distribution, each dependent on the previous stage. This creates latency, duplication, and fragmentation. Agentic operating models shift the focus from tasks to decisions.

Instead of asking “Which processes can we automate?”, leading firms are asking: “Which decisions can be augmented or owned by intelligent systems?”

This shift enables organisations to move from sequential workflows to parallel decision systems; from human-led analysis to AI-assisted reasoning; from periodic insight to continuous intelligence. The result is not a marginal improvement. It is a step-change in how the enterprise operates.

The Pressures Driving Change

This transformation is not happening in a vacuum. Asset managers face mounting structural pressures: margin compression driven by fee pressure and passive competition; rising operational complexity from regulation and product proliferation; and advisor capacity constraints that limit scalable growth. Agentic operating models directly address all three.

By automating complex workflows, rather than individual tasks, firms can significantly increase advisor and analyst capacity without proportional cost increases. Parallel decision systems reduce the time required to launch products, respond to market events, and deliver client insights. This compresses cycles from months to days. Continuous monitoring of guidelines, portfolios, and operational processes reduces exposure to regulatory breaches and operational failures.

These are not theoretical benefits. They represent measurable improvements in cost-to-serve, time-to-market, and operational resilience.

Not all Intelligence is the Same

To scale AI effectively, organisations must recognise that not all problems require the same type of intelligence. Enterprise AI operates across three distinct layers, and conflating them is one of the primary reasons AI initiatives fail to scale.

Deterministic systems execute predefined rules with complete consistency. They are essential for functions where there is zero tolerance for error, trade validation, settlement processing, and regulatory reporting. If a business outcome must be identical every time, deterministic logic remains the correct approach.

Predictive systems use historical data to forecast outcomes. Applied in areas such as portfolio risk modelling, fraud detection, and client churn prediction, they generate probabilities and insights, but they do not interpret context or make decisions independently.

Agentic systems operate where problems require interpretation, judgment, and contextual understanding, investment guideline interpretation, regulatory document analysis, portfolio insights, and client communication. These systems can reason across complex information, generate insights, and take action within defined boundaries.

The ‘Different but Valid’ Dilemma

A critical challenge in adopting agentic systems is understanding how they behave. Traditional software produces identical outputs. Agentic systems produce reasoned outputs.

This introduces what I call the ‘different but valid’ dilemma. An agent may take a different reasoning path from a human and arrive at a different, but still correct, conclusion. This variability is not an error. It is inherent to reasoning systems.

The real risk lies in hallucination, outputs that are not grounded in data or evidence. Managing this requires organisations to clearly define where variability is acceptable. All AI-driven processes sit on a spectrum: deterministic actions with no variability (trade execution), predictive actions with controlled variability (risk scoring), and agentic actions with higher variability (investment insights).

Leading firms design systems where agents perform reasoning, deterministic systems enforce execution, and humans retain oversight on high-consequence decisions. This balance enables both flexibility and control.

The Operating Model Shift

The most significant change is not technological; it is organisational. Traditional models are built on functional workflows. Agentic models are built on coordinated decision systems.

Consider what launching a new investment product looks like under each model. In a traditional model, it involves sequential handoffs between teams, compliance reviews the guidelines, operations configures the systems, and distribution drafts the client narrative. Each stage waits for the last.

In an agentic model, intelligent systems operate in parallel: compliance agents interpret guidelines, operations agents configure constraints, distribution agents generate client narratives, and governance agents validate outputs. This orchestration compresses timelines, reduces friction, and enables continuous decision-making. It represents a fundamental redesign of how work is performed.

Governance: the Foundation for Trust

Trust is the prerequisite for scaling AI. Without it, adoption stalls, not because the technology fails, but because the organisation cannot adequately explain or defend the decisions it makes.

Leading firms implement governance models built on three principles. First, explainability: every decision must be traceable and auditable. Second, authority boundaries: agents operate within clearly defined limits. Third, human oversight: high-consequence decisions remain under human control.

Regulatory expectations will continue to evolve, but one principle remains constant: organisations must be able to explain how decisions are made.

Scaling AI is a Leadership Challenge

Executives must take a deliberate approach across four areas:

  • Define the intelligence model: map business problems to deterministic, predictive, or agentic systems.
  • Build the foundation: invest in data, infrastructure, and orchestration capabilities.
  • Redesign the operating model: shift from workflows to decision systems.
  • Implement governance to ensure transparency, control, and compliance.

Start with high-value use cases and expand rapidly across the enterprise. The firms that act now will establish a structural advantage in cost, speed, and decision quality. Those that do not risk being constrained by legacy operating models that cannot scale with the demands of modern markets.

The Question is not if, it is Who

The industry is not simply adopting new technology. It is redefining how decisions are made. The firms that succeed will not be those that deploy AI tools in isolation. They will be those who design the right form of intelligence for each problem, redesign their operating models around intelligent systems, and scale agentic capabilities across the enterprise.

This shift is already underway. The question is no longer whether it will happen. The question is which firms will lead, and which will be forced to follow.

Learn more at publicissapient.com

  • Artificial Intelligence in FinTech
  • Blockchain & Crypto
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech

Martijn Gribnauis, Chief Customer Success Officer at Quant, on why Agentic AI will redefine financial services

A recent Google Cloud survey showed that only 13% of finance organisations are currently using agentic artificial intelligence. This number needs to, and will rise when you consider that 88% of financial leaders are seeing ROI from generative AI already. Agentic is the next and most advanced evolution of artificial intelligence the world has ever seen. 

Agentic AI is not on the way. It is here and already reshaping how forward-leaning financial institutions operate. In 2026, for IT and finance leaders to build an insurmountable competitive lead they must deploy agentic AI in every area where it can safely and effectively create value. The institutions that hesitate will find their business models under threat from familiar competitors and newcomers alike.

Reinvention of Core Processes

Agentic AI is poised to reinvent core financial processes. Bookkeeping, record maintenance, and period-end close are nearing complete automation. Month-end processes that once required late-night, stress-filled marathons will evolve into continuous, largely automated cycles. IT teams will no longer spend evenings on high alert waiting for failures. 

This shift also frees IT leaders, finance teams, and operations functions from monotonous repetitive tasks. Instead of focusing on system uptime and manual reconciliation, they will collaborate with the C-suite on strategic initiatives that drive growth and revenue. 

Understanding Why Adoption Is So Low

Despite the promise of Agentic AI, there is understandable caution. Some 80% of organisations have reported ‘risky behaviour’ from AI agents, and in the world of finance that is an alarming number. Finance is one of the most regulated, risk-averse sectors in the world. The fear of losing control remains the primary reason so few in the industry have embraced Agentic AI.

Loss of control and fear of catastrophic error

Financial leaders fear that an autonomous system could go ‘off script’, mis-route payments, misinterpret rules, or inadvertently cause compliance breaches. In finance, even small errors can trigger major financial or regulatory consequences.

Security and data privacy concerns

Large AI models require huge quantities of sensitive data. Organisations worry about breaches, cyber-attacks, or manipulation. An AI agent with improperly configured permissions could, in theory, execute fraudulent transactions or expose confidential customer information.

Bias and fairness risks

If AI agents make decisions using incomplete or fragmented data, they risk perpetuating or amplifying bias. At scale, biased decision-making can undermine customer trust and expose firms to legal and regulatory challenges.

Regulatory ambiguity and audit difficulty

Regulators are still determining how to govern agentic AI. Some organisations fear that early adoption could unintentionally violate rules or create future audit vulnerabilities.

These fears are legitimate, but not insurmountable.

Tackling the Adoption Barriers: A Practical Blueprint for Finance Leaders

To capitalise on Agentic AI’s immense potential, leaders must take a structured approach grounded in business value, security, and trust.

1. Start With Clear, Measurable ROI and Efficiency Gains

In finance, adoption accelerates when decision-makers see proof of value.

Start by automating repetitive processes. Agentic AI can handle tasks like data entry, reconciliation, invoice matching, and initial fraud checks faster and more accurately than humans. This leads to reduced operational overhead as automation lowers labour costs, shortens processing times, and reduces error rates. Demonstrating these savings through case studies or internal pilots is critical to changing minds. 

AI agents can enable revenue growth by analysing huge data sets to identify new investment opportunities, optimise trading strategies, and generate personalised product recommendations. Each of these capabilities directly impacts top-line growth.

2. Strengthen Risk Management and Compliance Through AI

Agentic AI will improve risk management when deployed responsibly. This starts with real-time fraud detection. AI agents can monitor transactions continuously, identifying patterns that suggest fraud long before traditional systems would detect an anomaly.

Continuous monitoring is also incredibly helpful when it comes to compliance. AI agents excel at ensuring adherence to KYC and AML regulations. They can automatically maintain audit trails, identify missing documentation, flag anomalies, and escalate issues instantly.

Enhanced stress testing and scenario modelling can both be completed via Agentic AI. It can simulate complex market environments more dynamically than legacy tools, providing deeper insights into vulnerabilities and improving resilience. When showcased and presented in this context, agentic AI becomes a risk-reduction tool in the eyes of decision makers. 

3. Directly Address Security and Trust Concerns

Trust is the cornerstone of adoption. Implement enterprise-grade security architecture that includes encryption, secure APIs, strict access controls, and continuous monitoring of agent behaviour. And, use explainable and transparent AI systems (XAI) so your finance teams understand the reasoning behind decisions. XAI helps provide interpretable outputs that support auditability and regulatory compliance.

Start small with a controlled, low-risk pilot. A proof-of-concept in a non-critical workflow helps teams understand the technology, gather evidence, and build internal support before scaling. Produce numbers based reporting that speaks the language of the people who make the decisions. Show, don’t just tell them how agentic will move the business forward.

4. Highlight the Competitive Advantage

Agentic AI adoption is not just an efficiency upgrade. It is a competitive imperative. AI agents create faster innovation cycles by accelerating product development, service delivery, and operational improvements.

They also provide superior customer experience. From instant account servicing to personalised financial recommendations, Agentic AI delivers the speed, personalisation, and convenience customers expect. Plus, it scales exponentially. No matter how many people call in at the same time, an agentic agent will answer immediately. Agentic AI reduces up to 86% of time spent in complex workflows that were traditionally handled only by people. This will be huge in getting ahead of your competition. 

5. Build Momentum Through Internal Champions

Adoption increases when respected leaders advocate from within. Mid-level managers, AI-literate staff, or members of the C-suite who understand the technology can serve as champions. Use them and their beliefs to drive alignment, communicate benefits, and counter misconceptions. The more people from different departments and levels of the organisation that talk up the technology, the more likely you are to get buy-in. 

Your Time is Now

Agentic AI will redefine financial services. The organisations that act today will build capabilities, insights, and competitive advantages that late adopters will not be able to replicate. Finance leaders must begin asking where agentic AI can support their business, where it can remove friction, where it can unlock growth, and where it can transform operations. The firms that act now will lead the industry. Those that hesitate will not get the chance to catch up.

The only remaining question for finance organisations is not whether agentic AI will change the industry, but how quickly they choose to deploy it.

Learn more at quant.ai

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy

Dr. Yvonne Bernard, CTO at Hornetsecurity, on meeting the challenge of managing the speed of AI adoption and harnessing its defensive capabilities while mitigating the risk of uncontrolled adoption

The past year has been defined by acceleration. Threat actors rapidly embraced automation, AI, and social engineering. Scaling their tactics at unprecedented speed, while defenders raced to keep pace. Historically, defensive resilience evolves in step with attacker innovation, but in 2025 that balance began to falter.

In an analysis of over 6 billion monthly emails, Hornetsecurity’s Security Labs found that the volume of sophisticated threats grew faster than most security teams could adapt to. Malware-infected emails soared by 131%, scams increased by nearly 35%, and phishing attempts – powered by access to advanced AI – rose by 21% from the previous year.

Typically, attacks, even at volume, are easily filtered by good firewalls and secure email gateways. But the sophistication and AI-led nature of 2025’s boom made it even harder for organisations to defend themselves. The question now is: can security teams and businesses wrestle back control?

Evolving Cyberattack Landscape

​​AI enhances efficiency and precision. As such, cybercriminals use it to launch faster, more convincing and adaptive attacks, ranging from deepfakes to credential stuffing. As an example, there is a concerning trend of attackers increasingly using ‘MFA bypass kits’ to create deceptive login pages. These pages capture not only the user’s credentials but also have logic built in to handle MFA prompts as well. ​​The unsuspecting user is then passed to the real login page for the target service and meanwhile the ‘kit’ grabs a copy of the user’s session token. This allows the attacker to impersonate the person and access their data. ​​​​​

Examples of such kits include Evilginx (open source) and the W3LL panel. Protecting against these attacks can be challenging, as they are adept at bypassing MFA safeguards. Threat actors often use compromised LinkedIn accounts, for example, to gain access to substantial information and connections. This enables them to impersonate trusted business connections. Paired with the weaponisation of Agentic AI, this will magnify existing vulnerabilities within an organisation, while introducing new ones that defy traditional containment models.

As it stands, the lack of oversight within organisations on the extent of AI’s adoption by cybercriminals has enabled the emergence of ‘Ransomware 3.0.’ Ransomware has evolved past simple encryption and exfiltration, with this next phase focusing on LLM-driven orchestration and a shift to data integrity manipulation.

To counter AI-accelerated compromises and ‘Ransomware 3.0’ in 2026, organisations must adopt a Zero Trust-based cyber resiliency strategy. This requires businesses to implement strong, non-phishable machine authentication, strict least-privilege access, and constant monitoring to protect the integrity of the data that users and AI agents can access. It should become the baseline expectations rather than aspirational goals for this year.

The Secret Value of ‘Least Privilege’ Access

Another strategy to proactively improve cybersecurity defences in 2026 is to enforce the principle of ‘least privilege’ access. This tactic grants users access only to the data that’s needed for their role. Limiting excessive access is important for preventing the potential for widespread data exposure and damage in the case of an account compromise.

Businesses, however, must strike a balance over access; if it’s too strict, it can hinder productivity and lead to shadow IT issues. Getting this balance right when it comes to privileged access is where sophisticated permission managers are invaluable tools to work with. They streamline the process and remove the guessing game of who and what to grant access to, thereby ensuring, in the case of an attack, that the entire organisation won’t be brought to its knees.

How CISOs are Adopting ‘Resilience, not Perfection’

The rate at which AI is advancing means not every organisation will be equipped with the tools or the know-how to tackle every AI-inspired attack. But as the saying goes, ‘prevention is better than cure’. It’s better to create a strong security culture than to continually chase after the next best tool. 

Organisations can’t strengthen their resilience without involving every single person under their umbrella. That’s why CISOs must continue to invest in cybersecurity awareness programs.

These should include simulated AI-phishing attacks (phishing remains the number one attack vector) to test users and enable them to apply learnings from the modules.

If any user clicks on a phishing email, they should receive additional training at that very moment, to cement the learning. Over time, a good training system should automatically identify users who rarely fall for such attacks and reduce the training they receive while making the simulations they do receive more difficult. Conversely, giving persistent offenders additional bite-sized training and simulations can help improve security outcomes over time.

The key challenge for 2026 is managing the speed of AI adoption and harnessing its defensive capabilities while mitigating the risk of uncontrolled adoption. But with excellent training, cyberattack practice runs, and the adoption of Zero Trust principles, organisations will find themselves in a strong position.

About Dr. Yvonne Bernard

Dr. Yvonne Bernard is the CTO of Hornetsecurity by Proofpoint, Proofpoint’s business unit leveraging the Hornetsecurity product suite dedicated to managed service providers (MSPs) and small to mid-sized businesses (SMBs), providing next-generation cloud-based security, compliance, backup, and security awareness solutions that help companies and organisations of all sizes around the world.

Learn more at hornetsecurity.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy

Dr Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, on whether our risk and regulatory frameworks and institutional cultures can keep pace with Agentic AI

Within the next couple of years, Agentic AI is likely to progress from early stages of operation to be fully embedded within systems. Its expansion will be subtle rather than spectacular. It will integrate steadily into enterprise platforms, logistics networks, compliance workflows, cybersecurity operations centres and executive decision-support tools. Processes will move faster, operating expenses will decline and performance indicators will trend upward.

Yet these visible improvements mask a deeper challenge. The regulatory exposure, data governance pressures and erosion-of-trust risks associated with Agentic AI are being misjudged.

Unlike earlier AI applications designed primarily to generate outputs – whether text, imagery, or predictive insights – agentic systems are built to act. They sequence decisions, draw from multiple data environments, initiate consequential processes and function at scale with differing levels of human supervision. In sandbox environments this can seem contained and controllable. Over extended periods in live environments, however, sustained oversight, traceability and effective governance become significantly more complex.

Evolving Operational Complexity

There are two key challenges that businesses must address.

First, how do organisations monitor what agentic systems are doing once deployed? These systems evolve through updates, integrations and retraining and they interact with new data environments.

Second, how do you ensure responsible behaviour throughout the lifecycle? Regulators, policymakers and customers will likely expect firms to shift from compliance assurance to risk assurance and demonstrable evidence of trust and transparency.

The prevailing assumption is that human oversight will mitigate these risks. Human in the loop or human over the loop has become the default reassurance. In practice, however, that assumption breaks down far faster than many anticipate.

When a system works 95 per cent of the time, human reviewers limit their scrutiny. Behavioural science tells us that automation bias and complacency occur when automated systems are high-performing. Employees often become validators of AI outputs rather than critical examiners. The diligence gap widens gradually and then suddenly.

Facing Up to Difficult Questions

How do you incentivise employees to remain diligent checkers when the system mostly ‘works’?  And how much time does effective oversight actually require? True review is not a cursory glance at a dashboard. It involves interrogating assumptions, validating inputs, checking context and assessing downstream consequences. In many cases, meaningful oversight may take nearly as long as performing the original task manually. When checking becomes more costly than doing the job yourself, pressure to ‘trust the system’ intensifies.

And what happens to accountability when oversight exists on paper but not in practice? Governance documentation may show layered review structures, escalation pathways and audit processes. Yet if humans are functionally disengaged, responsibility becomes dispersed. When errors surface, organisations may struggle to attribute fault – was it the model design, the data, the integrator, the operator or the reviewer who signed off without fully scrutinising?

Regulators are only beginning to grapple with these realities. In jurisdictions such as the European Union, the EU AI Act introduces risk-based obligations, documentation requirements and human oversight provisions. These are important steps, however, the operationalisation of those requirements in dynamic, agentic environments remain untested at scale. Compliance on paper will not automatically translate into resilient governance in practice.

Addressing the Trust Challenge

Beyond regulatory exposure, there is a broader trust challenge emerging.

As Agentic AI systems scale across industries, they will generate vast volumes of automated outputs – reports, communications, risk assessments, content, decisions and transactions. If errors or manipulations spread through interconnected systems, confidence in digital outputs may erode.

In geopolitically sensitive contexts, this has profound implications. Agentic systems interacting with external data sources could amplify disinformation, introduce biased datasets or make decisions based on manipulated inputs. The speed of automation may outpace the speed of verification. Trust, once diluted, is difficult to restore.

Data protection risks will also intensify. Agentic systems frequently require broad access privileges to perform tasks effectively. They may access internal databases and personal data and interact with third-party platforms. Each interaction creates potential exposure points. A single misconfiguration or prompt injection attack could trigger cascading consequences across systems.

The next phase of AI adoption will not simply amplify productivity: it will amplify regulatory, legal and reputational risk. This moment therefore demands serious scrutiny before agentic AI becomes deeply embedded in business infrastructure.

The Moment for Action has Arrived

So, what should organisations be doing now?

To begin with, organisations need to look past superficial, tick-box compliance. Effective governance cannot live solely in policy documents – it must function in day-to-day operations. This means investing in continuous monitoring capabilities, robust audit trails and real-time anomaly detection tailored specifically to Agentic AI behaviours.

In parallel, incentive structures should be redesigned. Meaningful human oversight will not happen if it is treated as secondary to speed or output. If employees are expected to provide meaningful review, organisations must allocate time, training and authority accordingly. Performance metrics should reflect risk management responsibilities, not just output rate.

Clear lines of accountability are equally important. Senior leadership and boards should determine who carries ultimate responsibility for outcomes produced by agents. Where third-party vendors are involved, responsibilities must be contractually and operationally defined. Incident response mechanisms should be rehearsed in advance, rather than presumed to work when pressure is high.

Expertise must also be integrated across functions. Legal, risk, compliance, cybersecurity, data protection and operational teams should be engaged from the outset. Deploying Agentic AI is not simply a technical upgrade – it reshapes the organisation’s risk profile.

Finally, resilience demands deliberate stress-testing. Leaders should examine not only pathways to success but how models fail at scale. How would the organisation respond if a system update embedded systemic bias, if an integration vulnerability enabled unauthorised activity or if automated actions eroded customer confidence? Rigorous scenario exercises, however uncomfortable, are essential to building genuine preparedness.

As Agentic AI advances, Risk Management Should Match its Pace

None of this is an argument against adoption. Agentic AI presents meaningful productivity improvements and the potential for sustained competitive differentiation. Organisations that deploy it with discipline and foresight may secure a measurable advantage. The danger lies not in adoption itself, but in pursuing acceleration without knowing the risks and putting the right guardrails in place.

The coming two years are critical for businesses. Before these systems become deeply embedded in core processes, organisations have an opportunity to shape the control environment around them.  However, once agentic systems are fully embedded, retrofitting controls will be far more difficult and costly. Leaders must therefore treat this period as a design phase for oversight, not merely a race for competitive advantage.

Agentic AI is advancing rapidly. The defining question is whether our risk and regulatory frameworks and institutional cultures can evolve just as quickly.

Learn more at cyxcel.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

As companies pour billions into developing their own AI tools, Fayola-Maria Jack, Founder and CEO of Resolutiion, argues that many are forgetting what worked well in the early tech era, confusing ownership with innovation

Back in the very early days of computing, organisations rarely hesitated to buy the hardware and software they needed to modernise. Now we’re deep into the AI age. Many organisations are deciding the best approach to adopting the technology is to take building it into their own hands. 

Many of the more traditional companies, like big banks, have publicly stated that they’re developing their own AI tools in house. Meanwhile, corporate investment in AI reached £191 billion ($252.3 billion) in 2024 and is only likely to have risen since.. 

Yet, the challenges of internal AI development are becoming abundantly clear. A recent report from MIT found that 95% of AI pilot projects failed to deliver any discernible financial savings or uplift in profits. It also found companies purchasing AI tools succeed about 67% of the time. Meanwhile, internal builds succeed only one-third as often.

Why do companies feel they need to build their own AI tools?

Those statistics alone show buying AI from specialised vendors and building partnerships is often the wiser choice. But, with a handful of traditional businesses deciding to lean the other way, it begs the question: why are these companies not only initially choosing the in-house route, but also persisting with it despite low success rates? 

The instinct to ‘build’ is rooted in legacy thinking – and to some extent, a naivety around what makes AI solutions special. Traditional enterprises have long equated ownership with control: control over systems, data, and perceived competitive advantage. 

When AI entered the scene, many executives applied that same logic, assuming that building in-house equated to ownership, at the heart of innovation. But this overlooks a fundamental truth that is unique to AI – AI isn’t another IT system you can own and stabilise. It evolves exponentially, not linearly. It demands constant retraining, rapid iteration, and deep specialisation – all at odds with the traditional corporate IT environment, which is built for stability and compliance, not experimentation and speed. 

Are companies really investing in innovation?

Another common belief is that buying is seen as conceding leadership to outsiders. While building feels safer politically, signalling ‘we’re investing in innovation’. Ironically, though, that safety is often an illusion that leads to slower progress and higher long-term cost. But again, there is deep irony if talent is outsourced to India, or another foreign jurisdiction, on the basis of cheap labour.

The exact same dynamic plays out internally, too. AI initiatives are career-defining projects for senior technology leaders and they attract budget, visibility, and prestige. Once a build programme is launched, it’s politically difficult to pivot, even in the face of poor performance. As a result, the build strategy often survives by narrative rather than by evidence.

Underpinning all of this is the institutional belief that ‘our data is unique’ – that their data will deliver proprietary insight and competitive advantage. In reality, most internal data is messy, siloed, and outdated. It reflects years of practices that are often misaligned with best practice, and therefore should never be used to train AI. Instead of building capability, many organisations end up building complexity. 

Increased Caution in Regulated Sectors

Alongside these misbeliefs, regulatory caution and data residency also play into the decision to build in-house; especially in regulated sectors like finance, healthcare, and government. Here, enterprises typically believe that adopting third-party AI tools may expose sensitive data to external environments they cannot fully control. Perhaps this is because data protection laws have created a heightened sensitivity to where data is processed and how it’s used to train models. 

Take banks as an example – historically they have viewed data as a fortress, a core asset to be guarded. Their culture of confidentiality and regulation makes them instinctively cautious about sharing information externally. Add to this the fact that large banks already have substantial internal technology infrastructures and budgets, and building seems logical on paper. The truth, however, is that building internally doesn’t eliminate compliance risk, but often amplifies it. This is because companies take on the burden of securing systems, updating controls, and managing ethical frameworks themselves.

On the other hand, buying from specialist providers means adopting a system that’s been engineered for compliance at scale. Purchasing doesn’t dilute compliance, it accelerates it, because you inherit the expertise and validation of teams who do this. In fact, most reputable AI vendors now far exceed enterprise compliance standards, designing privacy-preserving architectures that mitigate these risks far more effectively than in-house teams can, full-time.

Competitive Edge

The financial sector’s competitive edge increasingly lies not in owning the algorithms, but in applying them better and faster. Challenger banks and fintechs have embraced this: they buy tools (whereby anti-money-laundering and fraud detection platforms are incorporated into model-risk management protocols aligned with regulatory expectations), they integrate, and they move rapidly. Traditional banks, by contrast, are still in a transitional mindset, modernising legacy systems while trying to preserve control. That’s why their build programmes are often more about transformation theatre than tangible AI capability, and will ultimately see them fall further behind.

Underestimation of AI’s Lifecycle Cost 

Beyond the issues of legacy thinking, poor data quality and compliance risk, companies attempting to build in-house also face a number of additional challenges when it comes to the talent, time, and technical debt needed. 

  • Talent: True AI expertise is scarce and expensive. Competing with the open market for top data scientists and ML engineers is unsustainable for most enterprises. 
  • Time: AI doesn’t stop evolving while your internal team builds. By the time a prototype is ready, the underlying technology stack may have already advanced. 
  • Technical debt: Maintaining models, retraining on new data, and ensuring explainability and auditability over time all demand continuous investment. 

Most companies underestimate this lifecycle cost by an order of magnitude. Add to that the reputational risk of bias or error (especially when deploying AI in customer-facing contexts) and the true cost of internal builds can spiral quickly.

A Change in Mindset is Needed 

As more of these challenges surface, we should see an uptick in companies moving towards buying AI rather than building it – and it’s a pattern that’s thankfully already emerging. As AI becomes infrastructure, not novelty, enterprises will mirror the software evolution of the 1990s and 2000s: moving from bespoke builds to modular adoption. 

The early adopters that buy today will pull ahead dramatically because they can focus on application and differentiation, not on maintenance. In time, the ‘build’ approach will be seen much like writing your own word processor in 1995: a costly distraction from real innovation. 

Organisations need to shift from ownership to orchestration. This requires humility, recognising that innovation now happens outside corporate walls, and confidence – trusting that your value lies in how intelligently you deploy technology, not in whether you wrote its source code. Culturally, companies need to redefine ‘strategic advantage’ as agility plus insight, not possession plus control. AI isn’t an asset you own; it’s a capability you cultivate.

In simpler terms, the companies that thrive in the AI age will be those that treat AI as an ecosystem, not an ‘ego system’. 

Learn more at resolutiion.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Visa is leading the AI race in payments, according to Evident’s AI Index for Payments, a major new ranking of…

Visa is leading the AI race in payments, according to Evident’s AI Index for Payments, a major new ranking of AI adoption within the industry. 

The Index shows industry stalwarts Visa and Mastercard outpacing their peers and delivering tangible AI outcomes thanks to early investments in talent and innovation.

Behind them, PayPal (3rd), American Express (4th), Stripe (5th) and Block (6th) emerge as the challengers. They outperformed the Index average, but are yet to match the leaders’ scale of deployment and outcome disclosure.

AI Moving from Experimentation to Deployment

Over the past two years, the 12 payments companies in the Index have publicly documented nearly 100 AI use cases. Underscoring how rapidly AI has moved from experimentation to deployment across core payment workflows. It’s a landscape defined by constantly evolving fraud threats and rising customer expectations for faultless, high-speed processing. Evident notes that nearly a third of these use cases disclose measurable outcomes, including efficiency gains, risk reduction and revenue uplift.

“Payments firms adopted AI out of necessity long before many other industries – their business models demanded it. Companies who invested early – like Visa and Mastercard – have gained a clear advantage over their peers, both in AI capabilities and the value their deployments are realising.” Alexandra Mousavizadeh, Co-Founder and Co-CEO of Evident.

Talent, Innovation, Leadership and Transparency

The Evident AI Index for Payments provides the most comprehensive independent benchmark of AI maturity across the industry. It is based on publicly available data around four pillars critical to successful AI deployment: Talent, Innovation, Leadership and Transparency.

According to Evident, Visa’s lead is based on consistent performance across the four pillars. And because it demonstrates the clearest evidence that AI is institutionalised across its core transaction network. Visa and Mastercard show maturity in areas such as fraud detection, cybersecurity and network-level risk reduction. Visa stands out for the scale and measurable impact of a handful of large, multi-year deployments focused on the integrity and security of its entire ecosystem.

“Mastercard shows strong evidence of scaled deployment and quantified performance improvements. Particularly in areas like fraud detection and AML tracing,” continued Mousavizadeh. “But what sets Visa apart is the degree to which the company is demonstrating impact at scale over multiple years. From applications of AI across its operations and network. It signals a shift from individual use cases to AI as institutional capability.

“What the Index also reveals is the importance of consistent innovation to maintain competitive advantage. With relatively nascent industry players like Stripe and Block performing well – and showing their AI potential reflected in their valuations – the Index leaders cannot afford to drop off the pace.”

AI Impact on Show, but ROI Reporting Scarce 

Firms in the top half of the Index account for nearly 80% of use case disclosures (with the top three providing a significant 54%). Highlighting the link between AI maturity and the ability to scale deployment.

Visa performed strongly in this regard. For instance, its latest threat report disclosed advanced AI/ML blocked nearly 85% more fraud compared to one year prior. Similarly, when Mastercard incorporated Gen AI technology into its Decision Intelligence solution, initial modelling showed AI enhancements improved fraud detection rates from an average of 20% to as high as 300% in some instances.

However, Evident notes that no payments company has disclosed realised or projected ROI across all enterprise or group-wide AI activities. 

“The Index leaders are locked in a tight race at a point when the thinking around corporate AI adoption is shifting – away from chasing the biggest models to building technologies that solve real operational problems efficiently,” commented Annabel Ayles, Co-Founder and Co-CEO of Evident. “Against this backdrop, the absence of ROI disclosure – or any group targets for AI ROI – is increasingly conspicuous. Currently, 1-in-5 banks now report on group-level AI returns. However, payments firms have yet to quantify the aggregate impact of their AI investments. To keep justifying this expenditure, the market will sooner or later demand clearer evidence of value.”

A Hotbed of AI Talent

The Index also reveals that the average payments company has over 30% more AI-focused workers than other financial institutions, despite substantially smaller employee numbers. 

The three major card networks – Visa, Mastercard and American Express – account for nearly half (48%) of the payments industry’s AI talent stack. PayPal is currently the biggest employer, accounting for nearly a fifth (18%) of that AI talent.

PayPal’s AI talent has allowed it to build proprietary models tightly integrated with its data and workflows. Consequently, it accounts for nearly a quarter (24%) of the 98 AI use cases documented by its peers over the past two years – 1.7x as many AI applications as detailed by Visa or Mastercard.

“AI maturity is no longer defined by talent volume alone, and the Index leaders combine AI development, data engineering and product capabilities in ways that allow them to move rapidly from model experimentation to production deployment,” concluded Ayles.

The Evident AI Index Methodology

The Evident AI Payments Index ranks the AI maturity of 12 of the largest payment networks and processors across the globe. These 12 entities were chosen by aggregating the largest payment companies, with a minimum of $2B in annual revenue. 

It is an independent, ‘outside-in’ assessment based exclusively on publicly available information. Each company was assessed against 60+ individual indicators, organised into four pillars critical to successful AI deployment at scale: Talent (45% weighting), Innovation (30%), Leadership (15%) and Transparency of Responsible AI activity (10%).

Data is gathered through a combination of extensive manual research and proprietary machine learning tools that extract key data points from company reporting and public disclosures (including press releases, investor relations materials, group-level website pages, group-level social media accounts, and media interviews with senior leadership), as well as a range of third-party data platforms.

Further information on the methodology of the Index can be found at evidentinsights.com

  • Artificial Intelligence in FinTech
  • Digital Payments
  • Neobanking

Adam Spearing, VP of AI GTM EMEA at ServiceNow, on why those that invest in AI foundations now will shape their operating models on their own terms

Much of the debate around AI still centres on pilots: which tools to test, which use cases to prioritise, which risks to manage. Executive teams commission proofs of concept, establish governance forums and assess compliance exposure. Far less scrutiny is applied to the consequences of waiting.

Traditional technical debt is familiar territory for CIOs. It stems from shortcuts, ageing platforms and deferred upgrades. It builds over time and is eventually addressed through structured modernisation programmes. Visible in legacy code, brittle integrations and manual workarounds. It appears on risk registers and capital plans. Leaders know how to describe it and, in principle, how to resolve it.

Forward-looking technical debt is different. It arises when organisations postpone the foundational changes needed for new ways of working. It is not created by past expediency, but by present hesitation. And it accumulates faster.

AI Adoption

In the context of AI, the effects are already emerging. Each quarter spent debating readiness instead of building it increases the distance between legacy operating models and AI-enabled competitors. As models improve and user expectations shift, that distance widens, reshaping competitive baselines. What begins as a modest capability gap can harden into structural disadvantage.

While companies debate whether to adopt AI, the margin for strategic choice narrows. Many organisations frame AI adoption as a binary decision: adopt now or wait until the technology matures further. In practice, the room for discretion is smaller than it appears. Time spent stalled in pilots or governance loops increases the gap between internal capability and market expectation.

More than 75% of organisations are expected to face moderate to severe AI-related technical debt in 2026, predicts Forrester. The issue will not simply be missed efficiency gains. It will be structural misalignment between how their systems operate and how work is increasingly done.

This misalignment often appears gradually. Teams rely on manual data preparation because underlying systems cannot support automation. AI tools are layered onto fragmented architectures and deliver inconsistent outputs. Employees experiment with external tools because internal platforms cannot provide the functionality they need. Each workaround creates further fragmentation.

Over time, these patterns compound. Integration backlogs expand. Security and risk teams struggle to enforce consistent controls across proliferating tools. Data governance becomes reactive rather than designed. What began as caution begins to constrain strategic options.

The AI Paradox

Here’s the paradox: organisations are either rushing into unsuccessful AI pilots that create immediate technical debt, or they’re avoiding AI entirely and creating forward-looking debt through inaction. Both paths lead to the same place – systems that can’t support the future of work.

AI isn’t just another technology layer to bolt onto existing infrastructure. It’s fundamentally changing how people interact with systems and how work gets done. Increasingly, AI becomes an interface through which employees access information, execute tasks and navigate processes. When AI becomes the interface – not just for customers but for employees navigating their daily tasks – organisations without AI-ready foundations will find themselves unable to compete on speed, efficiency, or experience.

The companies that hesitate aren’t just missing out on automation benefits today. They’re building a deficit that grows exponentially as AI capabilities advance. Each new model release, each competitor’s successful implementation, each customer expectation shift adds to the debt. Each significant model improvement raises the performance benchmark across the market. Unlike legacy systems that degrade slowly, this gap accelerates.

From Avoidance to Advantage

Breaking free from forward-looking technical debt requires a fundamental mindset shift. This isn’t about buying more technology or launching more AI pilots. It’s about creating the conditions for sustainable AI adoption that builds capability rather than complexity.

The organisations succeeding with AI aren’t the ones with the biggest budgets or the most aggressive rollouts. They’re the ones that took a deliberate, phased approach to ensuring their data, systems, and culture could support AI at scale. They treated readiness as an operational discipline rather than an innovation side project. They understood that AI adoption isn’t a destination, it’s a continuous capability that requires solid foundations.

This starts with honest visibility into current technology estates. Leaders must understand what systems can realistically support AI workloads, where data quality creates barriers, and which processes are ready for automation. Only then can organisations introduce AI incrementally, modernising systems where necessary rather than forcing new capabilities onto brittle foundations. Without that clarity, AI risks being layered onto structural weaknesses.

Modernisation therefore becomes targeted. Consolidating fragmented workflows, standardising data models and reducing unnecessary integration points increase the feasibility of scaling AI across multiple use cases. Early deployments focused on well-defined processes with clear data lineage can build internal confidence while strengthening governance practices.

Clear Debt to Stay Competitive

Forward-looking technical debt does not appear on a balance sheet. It shows up in slower product cycles, manual workarounds, integration backlogs and frustrated employees. It surfaces when competitors deliver AI-assisted services as standard and customers begin to expect the same everywhere. By the time these symptoms are visible, the underlying gap has already widened.

Timing therefore becomes a strategic variable. AI capability builds cumulatively: early investment in clean data, modern workflows and interoperable systems creates a base for continuous improvement. Each iteration becomes easier, faster and more reliable. Those that delay face the opposite trajectory: increasing complexity, rising retrofit costs and shrinking room for strategic choice.

The real issue is not adoption in principle. It is whether leadership teams are prepared to treat readiness as urgent rather than optional.

Reducing forward-looking technical debt requires acting before competitive pressure dictates terms, aligning technology modernisation with operating model reform, and accepting that disciplined progress now is less risky than accelerated catch-up later.

AI adoption will continue irrespective of individual organisational hesitation. Vendors will continue to refine their offerings. Regulators will clarify expectations. Customers and employees will adjust their behaviours. Those that invest in foundations now will shape their operating models on their own terms. Those that delay risk reacting to a competitive gap that is already commercially significant.

Learn more at servicenow.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Adonis Celestine, Senior Director – Global Automation Practice Lead at Applause, on the rise of AI and why In a world of autonomous systems, trust is the ultimate competitive advantage

Every generation of technology has its defining disruptor – the force that rises above the rest and reshapes its environment. In the mid-2000s, Marc Andreessen captured the moment when digital systems began transforming entire industries with his famous line: “software is eating the world”. At the time, software was the apex predator of technology, defining how value was created and delivered. Today, that hierarchy has shifted. Artificial Intelligence (AI) has reached the top of the technology food chain. Not just accelerating software, but fundamentally reimagining how it’s created, tested, and deployed.

AI is no longer just a tool; it is a co-creator. Developers now rely on AI daily to translate high-level intentions into working code. A practice sometimes known as ‘vibe coding’. Tasks that once took months can now be delivered in weeks, days, or even minutes. The pace is exhilarating, but it introduces challenges that traditional quality assurance (QA) practices were never designed to meet. And if QA cannot keep up, speed will come at the cost of reliability and trust.

When AI Outpaces QA

Conventional QA depends on predictability. Features are defined, code is written, and test cases verify the expected behaviour. However, AI disrupts this traditional model. Generative and Agentic AI systems don’t simply follow instructions; they interpret them. These systems adapt to context, learn from data, and can produce different outputs from the same prompt, influenced by factors such as training, temperature settings, and the model’s probabilistic nature. With development cycles now measured in minutes, traditional QA handoffs are often impossible.

This has led to a growing gap between speed and certainty. Teams can ship products faster than ever, yet it’s becoming much more difficult to ensure consistent, ethical, or safe behaviour in real-world conditions. Enterprises are already experiencing AI-powered features that fail in ways conventional testing could not anticipate, undermining trust and creating new risks.

Hidden Risks in Autonomous AI Workflows

AI-driven development introduces blind spots that traditional QA often struggles to detect. One key issue is context drift. This occurs when AI performs well in controlled testing environments but behaves unpredictably when faced with edge cases, cultural differences, or ambiguous inputs. For example, a customer-facing chatbot might pass functional tests but produce biased or misleading responses when deployed on a global scale.

Another challenge is compound autonomy. When multiple AI agents are involved in code generation, testing, and deployment, the system may begin to validate its own processes. Without human oversight, errors can propagate unnoticed. An AI agent might ‘approve’ certain behaviours because they statistically align with previous outputs. Rather than meeting user or business expectations.

Invisible change also complicates QA efforts. AI models continuously evolve through processes like retraining, prompt tuning, or data updates. A feature that worked flawlessly last week may function differently today. Traditional regression testing often fails to capture these subtle but significant shifts.

Most critically, AI workflows blur the lines of accountability. When failures occur, it can be unclear whether the issue lies with the model, the data, the prompt, the integration, or the deployment pipeline. QA teams must continuously validate not only the outputs but also the decision-making processes behind them.

Redefining Quality and Trust in an AI World

Slowing AI development is neither practical nor beneficial. Organisations must redefine quality in a probabilistic, AI-driven environment. Quality now extends beyond just correctness. It involves ensuring that systems operate reliably in real-world scenarios. This shift requires moving from static test cases to continuous, adaptive validation.

QA teams must evolve into ‘quality intelligence’ teams, broadening their responsibilities from simply detecting defects to actively fostering trust in AI systems. AI-assisted testing is crucial in this process. It can automatically generate extensive test cases by analysing requirements and code patterns. It can predict defects using machine learning. Detect visual inconsistencies across devices, and produce realistic, privacy-compliant synthetic test data. Additionally, Agentic AI can autonomously maintain and self-heal test scripts, adjusting their logic as underlying code or user interfaces change.

Furthermore, AI systems themselves need rigorous evaluation. Techniques such as red teaming, rainbow teaming, benchmarking, bias and ethics checks, and drift monitoring are essential to help promote AI’s reliability, fairness, and alignment with business objectives.

Human oversight is critical. While AI can scale testing and automate numerous tasks, critical thinking, risk assessment, and judgment cannot be fully delegated. Humans must guide, validate, and refine AI outputs to maintain both quality and trust.

Emerging Roles and Responsibilities

AI is reshaping professional roles. Developers are increasingly using AI by instructing machines through natural language rather than traditional programming methods. This shift has led to the emergence of new roles such as AI agent orchestrators, prompt engineers, QA specialists for autonomous systems, and governance leads who ensure ethical and auditable AI practices.

These roles are essential for maintaining human oversight. Developers and testers must experiment, validate, and continuously refine AI outputs while being cautious not to rely too heavily on AI.

Trust in the Age of the Apex Predator

As with any apex predator, AI has changed the rules of the game. Software once “ate the world” by making systems programmable. Today, AI “eats software” by making it autonomous, capable of creating, modifying, and deploying autonomously. In this new environment, speed is no longer the ultimate measure of success; trust is. Systems may move fast, but without rigorous QA, ethical oversight, and human judgment, they may not be reliable, accurate or ethical.

The new apex predator demands adaptation. Organisations navigating this AI-driven era must embrace automation and innovation, but pair it with strong quality practices, governance, and continual human oversight. Only by combining these elements can companies ensure their AI systems are not only fast and efficient but also dependable and aligned with business objectives. In a world of autonomous systems, trust is the ultimate competitive advantage.

Learn more at applause.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Tom Lanaway is Head of Innovation at Connective3, a global brand & performance marketing agency. He leads a team building AI-powered marketing measurement and marketing intelligence tools.

Most businesses are asking the wrong question about AI. They’re asking, ‘Which AI tool should we use?’ They should be asking: ‘Can our people actually think with AI?’ 

I run an innovation team at a marketing agency. We’ve spent the last two years building AI into everything we do, including measurement, content, strategy, and automation. We’ve got lots of tools, 18 different products to be precise. 

Below is what I’ve learned. But the tools aren’t always the bottleneck; sometimes the skills are. 

The Tennis Racket Problem 

A colleague put it perfectly recently: “AI is a tool. Think of it as if you’ve got a smart assistant sat there. But it’s saying, I’m going to give you the best tennis racket, now go and play in a Grand Slam.” 

That metaphor stuck with me because it captures something the artificial intelligence hype cycle keeps missing. We’ve convinced ourselves it democratises everything. That anyone can now do anything. That the barrier to entry has collapsed. And there’s truth in that, but it’s incomplete. The barrier to access has collapsed, but the barrier to effectiveness hasn’t. Give someone GPT-4, and they can generate text. Give them the best tennis racket, and they can hit a ball. But the gap between hitting a ball and playing at Wimbledon is still vast. Most organisations are stuck in that gap, wondering why their AI investments aren’t transforming anything. 

Three Skills That Aren’t Always Present 

When I look at where teams struggle and where I see the same patterns across other businesses, three specific competencies keep showing up as gaps: 

1. Problem Decomposition 

Not everyone knows how to break down complex work into chunks that AI can help with. This sounds simple, but it isn’t. Most people approach AI with whole tasks such as ‘Write me a marketing strategy’, ‘Analyse this data’ Or ‘Create a campaign’. AI will then produce something, but it’s usually mediocre, because the person hasn’t done the harder work of understanding which specific parts of that task AI is good at, and which parts need human judgment. The skill isn’t using AI; it’s knowing what to give it. Someone who is brilliant at their job but can’t decompose problems will get worse results from AI than someone more junior who understands how to break work into the right pieces.  

2. Output Assessment 

How do you know if what AI gives you is good? This is where intuition becomes essential and it’s also where the ‘AI replaces expertise’ narrative falls apart. You need domain knowledge to evaluate AI output. You need enough experience to feel when something’s off, even if you can’t immediately articulate why. You need the pattern recognition that comes from years of doing the actual work. Artificial Intelligence doesn’t replace that intuition; it requires it. The best AI users I’ve observed aren’t the most technical; they’re the ones who’ve built up enough expertise in their field to quickly assess whether AI output is useful, directionally correct, or completely off base. They know what good looks like, so they can recognise it when they see it, or notice when it’s missing.

3. Articulation 

Can you clearly express what you really want? This is the unglamorous core of the whole thing. Some people struggle to articulate their requirements to other humans, let alone to AI. We’ve all sat in meetings where someone spends 20 minutes explaining what they need, and you’re still not sure what they want. AI makes that problem worse. The skill isn’t ‘prompt engineering’ in the technical sense; it’s the much older skill of clear thinking and clear communication. If you can’t articulate what you want specifically, precisely, with the right context and constraints, you won’t get useful output from AI or from anyone else. 

The Uncomfortable Implication 

Here’s what this means for how businesses should think about AI investment

Stop leading with tools: Most organisations have tool fatigue already. Another platform, another integration, another training session on which buttons to click. It’s not working. 

Start with the human work: Before asking ‘What AI should we use?’, ask ‘Can our people break down problems, assess output, and articulate requirements?’ If they can’t do those things well without AI, they won’t do them well with AI either. 

Invest in the skills, not just the access: This doesn’t mean AI prompt engineering courses; it means developing clearer thinking, better problem decomposition, and sharper articulation. These are old skills, applied to new tools. 

Accept that expertise still matters: The people who’ll use AI best are the ones who already know their domain deeply. AI amplifies competence; it doesn’t create it.

Connected Intelligence Isn’t About Connected Systems 

I’ve spent a lot of time thinking about how different marketing channels and data sources connect and how you build intelligence across systems rather than in silos.

But I’ve come to think the more important connection isn’t between systems, it’s between human judgment and AI capability. The integration layer that matters most is the one between the person and the tool. 

Get that wrong, and it doesn’t matter how sophisticated your AI stack is. Get it right, and even basic tools become powerful. 

Learn more at connective3.com

  • AI in Procurement
  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • People & Culture

Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams…

Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams to identify and respond quickly, whilst also meeting regulatory timeframes for handling complaints and supporting vulnerable customers.

Netcall: AI-Powered Sentiment

The specialist bank has worked with Netcall to deploy AI-powered sentiment analysis using Netcall’s Liberty Create platform. The solution reduces manual effort and improves operational efficiency by bringing customer emails from multiple mailboxes into a single interface. Incoming messages are automatically analysed to identify dissatisfaction, highlighting cases that may require faster intervention. This allows urgent cases to be prioritised, helping HTB to resolve issues before they escalate and improve the customer experience.

“Our AI-powered sentiment analysis solution rapidly processes vast amounts of email data. Its efficiency allows our team to focus on resolving customer enquiries and issues rather than sorting priorities. The streamlined process ensures swifter responses and better customer outcomes, upholding our reputation for exceptional customer service.” Ed Eames, Head of Customer Savings Operations at Hampshire Trust Bank.

The application was built by the Hampshire Trust Bank development team using Liberty Create. It worked closely with Netcall to integrate AI sentiment analysis into existing processes. Customer-facing teams were involved throughout to ensure the solution aligned with established workflows and regulatory requirements.

Customer Service Control

A key benefit of the approach is the level of control it gives internal teams. Keywords, sentiment thresholds, and classifications can be adjusted directly. This allows rapid refinement as customer behaviour changes or new regulatory considerations emerge, without waiting for development cycles.

“Liberty Create has enabled my development team to work with remarkable agility. The ability to rapidly create and refine applications to meet ever-evolving business needs has significantly enhanced our efficiency. This allows us to deliver a wealth of new features to end users and customers with speed. With the integration of AI, we’ve been able to advance our processes while ensuring exceptional customer service. Our Sentiment Analysis application launch is a prime example of this.” Trina Burnett, Head of Engineering at Hampshire Trust Bank.

The sentiment analysis system also supports automated and ad-hoc reporting. This provides a single source of insight into customer interactions and actions taken. This helps reduce manual effort, supports audit and compliance activity, and enables teams to continuously improve customer service operations.

“As scrutiny around customer experience and accountability increases across UK financial services, the ability to listen, adapt and respond at pace is becoming a defining capability for banks seeking to maintain trust and service standards,” said Alex Ballingall, Key Account Manager at Netcall.

“HTB’s approach shows how banks can use AI-driven insight practically. Turning customer communications into faster action without adding operational complexity,” Ballingall concluded.

About Netcall

Netcall is a leading provider of low-code and customer engagement solutions. A UK company quoted on the AIM market of the London Stock Exchange. By enabling customer-facing and IT talent to collaborate, Netcall takes the pain out of big change projects. It helps businesses dramatically improve the customer experience, while lowering costs. Over 600 organisations in financial services, insurance, local government and healthcare use the Netcall Liberty platform to make life easier for the people they serve. Netcall aims to help organisations radically improve customer experience through collaborative CX.

Learn more at netcall.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Fintech & Insurtech
  • InsurTech

Gregory Mostyn, CEO and co-founder of Wexler, on why the era of generalist AI tools is over, and how the future will focus on high-precision AI designed for specific industries

For decades, the UK’s professional services sector, including areas such as Law, Insurance, and Wealth Management, has argued that its business value is locked in its access to proprietary data and the specialised labour required to navigate it. Investors, lured by the moat of institutional knowledge, priced these companies accordingly. However, the first quarter of 2026 has seen significant AI disruption within the professional services market. The catalyst wasn’t a single event, but rather a move by foundational model providers that turned the industry’s most defensible assets into commodities. 

When Anthropic launched its specialised legal AI plugin, OpenAI integrated a real-time insurance underwriting engine directly into its interface, and Alturist Corp automated bespoke tax strategies, the market reacted harshly. As professional services titans such as RELX, MoneySuperMarket, and St James’s Place saw their share prices decline by more than 10% in a matter of hours, the message became clear: the era of treating AI as a ‘future risk’ is over. 

The market has been awoken to the fact that foundational AI models are no longer just plugins or nice ‘add-on’ tools; they are competitors. The move by foundation-model providers into professional services – like the legal sector – is not a one-off shock, but rather an inevitability. 

The Proliferation of Information 

Historically, a law firm’s competitive advantage was its access to information – repositories of case law, proprietary research, and historical contracts. Investors and clients valued these companies on the assumption that this data constituted an impenetrable barrier to competitors. Before AI entered the mainstream, the cost of extracting actionable information from thousands of pages of data required a small army of junior associates and hundreds of billable hours. 

In 2026, that moat has mostly evaporated. Recent benchmarks show that frontier models now achieve 80% accuracy on complex documents, compared with the 71% average of a human associate. More importantly, they do it at a fraction of the cost. It is now estimated that the inference cost for a system at the level of GPT-3.5 dropped by more than 280-fold between November 2022 and October 2024. It’s predicted that UK law firms will reduce their chargeable hours by 16% through the implementation of AI. 

The narrative that AI would be able to handle only ‘low-level’ tasks, such as NDAs or simple contract summaries, has all but evaporated. Anthropic’s move into high-stakes litigation support validates this trend. 

AI – From Swiss Army Knives to Scalpels 

An error made by many law firms when AI became entrenched within the market was to treat it as a ‘plug-in’, a nice-to-have built onto existing internal software. Many adopted general-purpose tools, often referred to as ‘Swiss Army knife’ solutions, that covered the breadth of legal work but lacked the precision, jurisdictional nuance, and risk-weighted requirements for high-stakes professional services. 

The 2026 market reaction highlighted the needs of a ‘scalpel’ approach – those that go deep in a specialised vertical within a legal workflow. For example, instead of a junior associate spending billable hours searching through case files to establish the facts of a case, they could use a ‘fact intelligence’ platform that can automate that process into minutes, whilst increasing accuracy by 95% versus 78% for human reviewers and up to 90% savings in large-scale litigation. The market is no longer rewarding firms for having information. Rather, it rewards those who can apply it at the lowest possible cost and friction. 

Reallocating Capital Across Professional Services

We’re already seeing investors withdrawing from the traditional software market and reallocating that capital into specialised AI firms. However, the risk for legacy players is that they are being disrupted from both ends. From the bottom, they are losing the efficiency game to generalist foundation models from companies such as OpenAI and Google, which are commoditising the ‘knowledge’ aspect of professional services, including basic advice and contract drafting. At the top, they are losing the expertise game to specialised firms that use AI as a precision instrument; their overhead would be lower than that of a traditional Magic Circle firm, allowing them to undercut prices while maintaining profit margins. 

The result is a massive reallocation of capital. Investments into vertical AI (AI built for one specific industry) are expected to surge to $115 billion by 2034. The market no longer bets on labour with tools, but on autonomous workflows. Investors have realised that the value lies in the middle layer – the software that sits between a general foundation model and a specific industry’s needs. 

Innovation or Obsolescence 

So far, the first market fluctuation of 2026 has taught us that you cannot outrun new technologies. To survive, firms must stop treating AI as an add-on and treat it as a foundation for their core business infrastructure. 

For UK professional services, the choice is no longer whether to adopt AI, but whether they can evolve quickly enough to avoid becoming the training data for companies building foundational models. The firms that remain in 2030 will recognise that the competitive landscape has changed. You’re not just competing with your peers, but with the compute cycles of the world’s most powerful AI labs. 

The era of generalist AI tools is over, and the future will focus on high-precision AI designed for specific industries. 

Learn more at wexler.ai

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech

Adrian Wood, Strategic Business Development & Offer Marketing Director at DELMIA

The era of trial-and-error manufacturing is over. By integrating NVIDIA’s Physical AI into DELMIA’s Virtual Twin technology, Dassault Systèmes is moving the industry from static automation to autonomous software-defined systems that “learn” the laws of physics before the first part is made.

Revolutionising Manufacturing with Agile AI-Driven Production

Manufacturing is reaching a breaking point. Rigid production and logistics systems slow setup, ramp-up and scaling. Meanwhile deterministic automation struggles with real-world change, from new variants to unplanned constraints. The future is agile, software-defined production built on modular autonomous equipment, proven virtually and deployed with confidence.

Dassault Systèmes and NVIDIA are building the industrial AI foundation to make that future real. DELMIA contributes the virtual twin of production systems. A semantically rich model of production that connects design intent to real-world execution across engineering, manufacturing and supply chain. NVIDIA contributes physical AI and accelerated computing to simulate robotics-grade physics and perception at scale. Together, we can virtualise and orchestrate autonomous production systems. Then manufacturers can prove changes virtually and make them real faster, with less risk and rework.

This collaboration establishes a shared industrial AI architecture. This grounds artificial intelligence in the laws of physics and validated scientific knowledge. The integration of NVIDIA Omniverse physical AI libraries into the DELMIA Virtual Twin of global production systems represents a major step forward. It allows manufacturers to design, simulate and operate complex systems with a new level of confidence and precision. Not just incremental improvements; this partnership establishes a mission-critical system of record for industrial AI that powers a new way of working.

Virtual Twins: The Cornerstone of Modern Manufacturing

For years, manufacturers have optimised production lines in the physical world. While effective, this approach is often slow, resource-intensive and constrained by the cost of experimentation in live operations. Virtual twin technology changes this dynamic. A virtual twin is a science-based model of a system that goes beyond visualisation, enabling realistic validation of how operations should run before changes are made in the real world.

DELMIA empowers companies to create comprehensive virtual twins of their entire operational ecosystem. This includes everything from individual machines and robotic workcells to full factory floor layouts and global supply chains. Within this virtual environment, manufacturers can:

  • Simulate and validate production processes before a single piece of equipment is installed.
  • Optimise workflows for maximum throughput and efficiency.
  • Identify potential bottlenecks and safety hazards without disrupting ongoing operations.
  • Train operators and maintenance crews in a risk-free setting.

The virtual twin orchestrates design, engineering, production and supply chain in one environment so decisions can be tested, trusted and reused. This capability alone delivers significant value, but its impact grows when combined with physical AI.

Integrating AI for Autonomous Production

The partnership with NVIDIA brings physical AI into DELMIA virtual twins. NVIDIA Omniverse provides a platform for developing and operating 3D simulations and industrial digitalisation applications using OpenUSD-based interoperability. Combined with DELMIA’s production semantics, manufacturers can test autonomous behaviour in realistic conditions before deployment.

This is the shift from ‘mirroring reality’ to ‘proving change’. AI models accelerated by NVIDIA computing can evaluate scenarios across production constraints, resources and variability. They can help teams reduce commissioning surprises, improve flow and validate how production should respond to change, from new variants to disruptions.

The result is the emergence of software-defined production systems. These are factories and operations where decisions remain human-led, but are continuously supported by AI that recommends, tests and validates options in the virtual twin before changes are deployed. This creates a feedback loop where the virtual world is used to validate better outcomes for the real world.

A Practical Application: The OMRON Collaboration with DELMIA & NVIDIA Drive Real-World Success

To understand the real-world impact of this technology, consider the collaboration with OMRON, a global leader in industrial automation. OMRON recognizes that addressing the growing complexity of modern manufacturing requires a move toward fully autonomous and digitally validated production systems.

By combining DELMIA’s Virtual Twin of Production Systems, NVIDIA physical AI, and OMRON automation technologies, manufacturers can move from design to deployment with greater confidence. When a manufacturer introduces a new product variant or packaging change, automation often fails in small but costly ways, such as automation-grasping reliability, orientation on conveyors or downstream flow stability. Instead of trial-and-error changes on the line, teams can validate process logic, layout constraints and operating rules in the DELMIA virtual twin, then simulate realistic robot and material behaviour using NVIDIA’s AI before deployment. The result is faster adaptation and less physical rework.

The Top 3 Broader Impacts on Manufacturing

This fusion of virtual twin technology and industrial AI has far-reaching implications for the entire manufacturing sector including:

  1. Unlocking New Efficiencies: Software-defined production systems can continuously identify operational improvements that are difficult to see through manual oversight alone, improving throughput, uptime and overall performance while reducing avoidable losses.
  2. Advancing Sustainability Goals: By simulating processes in the virtual world, companies can minimize physical prototyping and reduce waste. AI-driven optimization within the DELMIA virtual twin helps manufacturers fine-tune their operations to consume less energy and use fewer raw materials, directly contributing to their sustainability commitments.
  3. Fostering Continuous Innovation: When the risk and cost associated with testing new ideas are lowered, innovation flourishes. Manufacturers can experiment with novel factory layouts, new automation strategies and different production workflows within the safety of the virtual twin. This agility allows them to adapt quickly to changing market demands and stay ahead of the competition.

The partnership between Dassault Systèmes and NVIDIA is about more than just combining two powerful technologies. It’s about establishing a new, scientifically validated foundation for industrial AI. By integrating NVIDIA’s physical AI libraries into DELMIA, we are empowering manufacturers to build the autonomous, efficient and sustainable factories of tomorrow, today.

  • Data & AI
  • Digital Strategy
  • Digital Supply Chain

Kevin Janzen, CEO of Gaming & EdTech AI Studio at Globant, on how AI will change the way games are made and expand the market

Every major games studio is now experimenting with artificial intelligence. From generating NPC dialogue to automating animation and video assets. AI is promising to speed up production and lower costs for developers.

According to Boston Consulting Group (BCG), the gaming industry finds itself at a crossroads…. Looking to gain the momentum it felt between 2017 and 2021, where revenue surged from $131 billion to $211 billion. And AI could be at the forefront of this pivotal moment. 

But as AI becomes central to how games are built, studios face a major challenge. Adopting automation without losing authenticity. For developers and retailers alike, this becomes a business concern that deserves close attention. Creativity sits at the heart of gaming, and the choices studios make today will influence what reaches players tomorrow. For the technology channel, this transformation means faster release cycles, broader product diversity, and a need for sharper forecasting.

A New Phase in Gaming’s Evolution

For most of gaming’s history, every era has been defined through visuals. Each generation has delivered stylistic, immersive worlds, such as the blocky charm of Minecraft to the cinematic realism of Red Dead Redemption 2. 

Now, the real change is happening behind the scenes. AI is reshaping how games are built and experienced. Development teams are using AI to handle time-consuming tasks such as vast world-building creation and animation. This frees artists to focus on what players remember – the design and storytelling.

Players are already seeing the benefits in their gameplay. AI lets games adapt or adjust difficulty based on players’ skill levels, or change dialogue based on a player’s choices. This makes gaming worlds feel realistic, responsive and more personal.

With budgets continuing to climb for gaming studios, these new features matter. AI gives studios breathing room to experiment. Smaller teams can take creative risks, and established developers can experiment and test new ideas without derailing production. However, efficiency and costs aren’t the only gains as AI is creating space for developers to be more ambitious than ever before.

Automation and Artistry

For all its promise, AI also brings creative risk. Gamers notice when a quest feels repetitive or when dialogue sounds mechanical. And if AI is used carelessly, developers risk losing authenticity.

That sense of care is what keeps players invested. Whether it’s hand drawn detail, or play-driven choices. Games like this show what happens when technology supports vision rather than replacing it.

That’s why the industry’s embrace of AI is such a gamble. Used well, AI can help developers create richer, more personalised worlds. But used carelessly, it risks stripping away the artistry that makes games memorable.

The Ripple Effect Across the Supply Chain

As AI becomes a standard tool, development processes are speeding up and opening new creative possibilities. Independent studios now have access to the kind of production power once limited to major developers. That shift means faster pipelines and ultimately, more games reaching the market.

For retailers and resellers, this brings both opportunity and pressure. A consistent stream of releases can guarantee sales across the year, while lower production costs encourage more niche or experimental games that appeal to new audiences. Greater variety and volume benefits the market, but it also makes it harder to predict which games will break through.

Players are becoming more aware of how games are made and AI’s role in development. They’re starting to ask not only how a game plays, but also how it was built. Understanding the intent behind a studio’s use of AI – one that uses AI as a genuine creative tool and those that rely on it as a shortcut – will help retailers anticipate demand and spot the games with long-term potential.

The Right Way to Play the AI Game

The studios using AI most effectively have a few things in common. They keep AI in the background, using it to manage routine work, such as generating textures and landscapes, so creative teams can focus on narrative and emotional tone.

They also use AI to make experiences more personal. Thoughtful application of adaptive systems allows games to respond to individual play styles, adjusting difficulty and pacing to keep players engaged. This level of design deepens engagement and gives players a sense that the world responds to them personally.

Another area where AI is also making an impact is making games more inclusive. More than 400 million people around the world play with a disability, and new tools are expanding access – from adaptive controls to real-time translation that lets players connect across languages. As gaming becomes more diverse, the audience grows for everyone, including retailers, who can reach a larger, more engaged customer base.

When automation complements gaming artistry, it strengthens the relationship and trust between the developer and the player. Creativity becomes the main focus again, and that’s what keeps players loyal.

Balancing Innovation and Trust

AI is fast becoming integral to how games are conceived, built, and experienced — and that shift will reshape the entire value chain. For developers, success will come from balancing automation with artistry, ensuring that AI enhances creativity rather than replaces it.

For retailers, distributors, and partners, this transformation offers both opportunity and responsibility. A faster, more diverse release pipeline will bring fresh sales potential, but also greater complexity in forecasting and curation. The winners in this new phase of gaming will be those who can spot titles where AI adds genuine depth, inclusivity, and player connection — not just production speed.

Handled thoughtfully, AI won’t just change how games are made, it will expand the market for everyone involved in bringing those experiences to players. That’s a game worth playing for the entire tech channel.

Learn more at globant.com/studio/games

  • Data & AI
  • Digital Strategy
  • People & Culture

JP Cavanna, Director of Cybersecurity at Six Degrees, on balancing the risks and benefits of AI in cyber defence strategies

Undeniably, AI is here to stay. Having become part of day-to-day life, it’s hard to remember what life was like without it. But when it comes to cybersecurity, is it causing more harm than good?

Recent research outlines that 73% of organisations have already integrated AI into their security posture. The technology is clearly becoming a cornerstone of modern cybersecurity. Organisations are turning to AI not just as a tool, but as a partner in security operations, leveraging its capabilities to identify malicious activity faster, guide investigations, and automate repetitive tasks.

For it to be truly effective, though, AI must be paired with human expertise – but this is where organisations are starting to become complacent. Given the growing sophistication of cyber-attacks, and even AI-powered attacks, many are removing the human element while expecting AI tools to do all the work for them, leaving them even more vulnerable to threats. This overreliance risks creating blind spots, where critical thinking, contextual understanding, and instinct are overlooked. Without the balance of human judgement, AI can amplify mistakes at scale, turning efficiency into exposure.

The Cybersecurity Paradox

This situation puts many organisations in a potentially difficult position. On the one hand, AI can significantly improve the efficiency of security operations. In the typical SOC, for example, AI technologies can process alerts in around 10-15 minutes. This represents a significant improvement over human analysts, who can easily require twice as long for the same task.

Aside from the obvious efficiency gains, applying AI to these repetitive, time-pressured processes can also significantly reduce the scope for human error. And in turn, take considerable pressure off security analysts. Going some way to battling alert fatigue, an increasingly well-documented and persistent problem. In these circumstances, valuable human experience and specialist expertise can instead be more effectively applied to complex investigations, strategic decision-making, and other higher-value priorities.

On the flipside, however, AI remains prone to generating inaccurate or misleading insights, and users may not realise they are applying the wrong information to potentially serious security issues. Similarly, habitual blind trust in AI outputs can easily erode performance levels and even introduce new vulnerabilities. There is also scope for sensitive data to enter public environments, with the potential to cause compliance issues. This kind of information can also reappear in future versions of the AI model in question, therefore resulting in further data exposure risks.

Parallels with IoT Adoption

The situation mirrors that seen in the early days of IoT adoption, where the rush to innovate would often override security considerations. In this current context, therefore, human oversight and vigilance are extremely important. Clear governance frameworks, defined accountability, and continuous monitoring must underpin any AI deployment. Therefore ensuring that innovation does not outpace risk management or compromise long-term resilience.

A Growing Arms Race

If that wasn’t challenging enough, threat actors are also in on the AI boom in what has already been described as an ‘arms race’. In practical terms, AI tools are already widely used to create more convincing phishing attacks free from some of the more obvious traditional tell-tale signs of criminal intent, such as imperfect grammar or a suspicious tone.

Deepfake technology has also raised the stakes. We’ve all seen how convincing AI-generated video has already become. This is now finding its way into real-world examples, with one fake video reportedly causing a CFO to authorise a large financial transfer as a result.

At the same time, technology infrastructure is constantly under attack by AI-powered tools. They can be used to analyse defensive systems and identify weaknesses faster than humans. The net result of these developments is that defenders constantly play catch-up, as they can only respond to new attack vectors once discovered. The underlying takeaway is that at present, AI cannot be trusted to operate autonomously. Instead, human intuition, scepticism and contextual understanding remain essential to spotting emerging tactics.

As attackers refine their methods at machine speed, organisations need to resist the temptation to match automation with automation alone. They must double down on strategic thinking and continuous skills development.

Balancing Benefits and Risk

So, where does this leave security leaders who are looking to balance the benefits and risks? Firstly, and to underline a fundamental point, while AI offers scale and speed, it cannot replace critical human oversight. Organisations should view AI as an enhancer, not a replacer. Success lies in promoting partnership, not substitution.

Strong governance is vital. This should start with clear AI usage policies that define what can and cannot be shared with AI tools, while proper data classification and access control ensure that sensitive information is protected. In addition, regular validation of AI outputs can help to prevent inaccurate or misleading results from being unnecessarily acted upon.

Then there are the perennial challenges associated with employee awareness training, which is vital for avoiding complacency and understanding the limitations of generative AI tools. Cyber leaders should also monitor how AI is being used inside and outside the corporate environment, as staff often experiment with tools on personal devices.

Get this all right, and security teams can put themselves in a very strong position to embrace AI, safe in the knowledge that they have the guardrails and processes in place to balance innovation and efficiency with effective human-led oversight. Ultimately, success will depend not on how much AI is deployed, but on how intelligently it is governed and refined alongside the people responsible for securing an organisation.

Learn more at Six Degrees

  • Artificial Intelligence in FinTech
  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy

A 2026 survey of nearly 1,000 C-suite executives found that 87% of companies now use AI in their core operations. However, AI errors and…

A 2026 survey of nearly 1,000 C-suite executives found that 87% of companies now use AI in their core operations. However, AI errors and rework continue to cost businesses over $67bn a year

Loopex Digital’s January 2026 analysis identified several common mistakes companies make when relying on AI.

1.  Giving AI Too Much Control in HR

AI-led hiring filters out 38% of top-level candidates before human review because it relies on keyword matching. Candidates respond by adjusting CVs to fit those words, often hiding real experience.

“When we started to use AI in our hiring process, we saw some strong candidates get rejected,” said Maria Harutyunyan, co-founder of Loopex Digital. “Out of 100 applicants, the 2 candidates that would’ve been hired didn’t make it because they used different wording instead of the exact keywords.”

How to fix this: “We simplified our job descriptions, removed buzzwords that didn’t matter, and limited AI to shortlisting. The quality of hires improved immediately,” said Maria.

2.  Trusting AI Notes Without Review

AI note-takers often struggle with background noise and poor audio, leading to inaccurate notes. In many cases, up to 70% of summaries focus on side comments rather than decisions.

“We tested 10+ AI note-takers across 50 of our regular meetings. Most of the main summaries ended up being jokes and half-finished sentences,” said Maria. “Key decisions were either unclear or missing entirely from the AI summary.”

How to fix this: “We limited AI notes to action points and decisions,” said Maria. “Everything else is filtered out or reviewed manually, cutting note clean-up from half an hour to minutes.”

3.  Letting Artificial Intelligence Replace Your Customer Support Team

When customers realise they’re speaking to AI, call abandonment jumps from 4% to 25%. Even when customers stay on the line, AI tools can get policy and pricing details wrong, leading to confusion, complaints, refunds, and extra clean-up work for support teams.

How to fix this: Use AI only for simple FAQs, not complex cases. Define clear escalation rules for cancellations, complaints, and legal issues and route those to a human immediately. Restrict your AI from creative responses in support, only letting it use approved templates.

  • Data & AI
  • Digital Strategy

Some Europe & Middle East CIOs anticipate up to 178% ROI on AI investments, with further efficiencies expected as Agentic AI scales

Enterprises have moved decisively from AI pilots to scaled implementations, driven by proven benefits and expectations of significant financial returns, according to the Lenovo Europe & Middle East CIO Playbook 2026 with research insights by IDC. Nearly half (46%) of AI proof-of-concepts have already progressed into production, with organisations projecting average returns of $2.78 for every dollar invested.

The 2026 Lenovo CIO Playbook: The Race for Enterprise AI, draws on insights from 800 IT and business decision makers in Europe and the Middle East. It captures a regional inflection point and reinforces the value proposition for enterprise AI as both real and immediate. It calls on CIOs to act now to avoid lagging competitors. The research marks a clear shift from AI experimentation to measurable value creation, with nearly all (93%) of those surveyed planning to increase AI investments in the next 12 months. At an average spending growth rate of 10%, and 94% anticipating positive returns.

Enterprise AI Adoption in Europe and the Middle East

AI is now recognised as a core engine of business reinvention and competitive advantage. However, AI adoption in the markets is progressing at different speeds. Reflecting varying levels of digital maturity, regulatory readiness, and investment capacity, and there is a clear overconfidence problem among CIOs. While 57% of organisations in Europe and the Middle East are approaching or already in late-stage AI adoption, only 27% have a comprehensive AI governance framework. Further limitations in data quality, in-house expertise, integration complexity, and organisational alignment are causing a mismatch between ambition and readiness.

With Agentic AI overtaking Generative AI as the top priority for CIOs in 2026, these factors will prevent many organisations from fully capitalising on AI’s potential, leaving significant returns unrealised. Moreover, 65% of organisations are focused on scaling Agentic AI across their operations within 12 months, but only 16% report significant usage today, with the majority still piloting or actively exploring use cases.

More advanced markets such as Scandinavia, Italy, and the UK are moving beyond pilots, with a majority of organisations already systematically adopting AI and increasing focus on hybrid and edge deployments to support scale. In contrast, parts of Southern and Eastern Europe remain earlier in their AI journeys, with a higher proportion of organisations still in planning or early development stages. Meanwhile, the Middle East is emerging as a fast-moving growth market, showing strong adoption momentum and a sharp year-on-year increase in interest in advanced and Agentic AI.

Across the region, hybrid deployment models dominate as organisations balance innovation with data sovereignty and operational control. While interest in Agentic AI is accelerating. This signals a broader shift from experimentation toward more autonomous, production-ready AI use cases, even as readiness levels continue to vary by market.

“We’re now seeing clear returns from the AI pilots and proof-of-concepts organizations have invested in, with AI delivering measurable impact across the region. But many are not fully equipped with the skills, governance and readiness needed to scale AI to its full potential. As priorities shift toward Agentic AI, and compliance with regulation such as the EU AI Act becomes imperative, trust and scale must be built in from the start. Those who don’t, risk leaving tangible returns on the table.”

Matt Dobrodziej, President of Europe, Lenovo

Hybrid AI Now Preferred Enterprise Architecture

The research shows that real-world business and financial considerations are accelerating the shift toward hybrid AI. Factors such as data privacy, advanced security requirements, and the need to customise and optimise infrastructure are driving adoption of this model, which blends public cloud, private cloud, and on-premises compute. Nearly three out of five (58%) organisations now prefer hybrid as their primary AI deployment model.

Scalable, high-performing AI infrastructure is a critical enabler of enterprise AI success. Respondents in the region highlighted the importance of compute that is both cost- and energy-efficient. This factor ranked second overall, with many identifying it as key to moving AI from pilots into reliable production.

With AI PCs and edge endpoints central to an effective Hybrid AI strategy and securely running AI workloads locally, deploying AI-capable devices has emerged as the top IT investment priority for 2026.

“CIOs across the region are entering a decisive phase of AI adoption where agentic AI and enterprise-scale inferencing are moving from experimentation to core business priorities,” said Dobrodziej. “To unlock real value, organisations need strong foundations, including secure, energy-efficient infrastructure, flexible hybrid architectures, and AI-capable devices and edge endpoints that bring inference closer to where data is created, and work happens. When combined with the right governance and services, this end-to-end approach enables enterprises to innovate confidently, responsibly, and at scale.” 

Lenovo recently introduced Lenovo Agentic AI, a full-lifecycle enterprise solution for creating, deploying, and managing AI agents, alongside Lenovo xIQ, a suite of AI-native platforms designed to simplify and operationalise AI across the enterprise. Built on the Lenovo Hybrid AI Advantage™, these offerings combine hybrid infrastructure, platforms, and services to address governance, integration, and performance from day one. Supported by the Lenovo AI Library of proven use cases, CIOs can reduce risk, accelerate time-to-value, and scale AI initiatives with greater confidence as they move beyond experimentation.

To further enable real-world deployment, Lenovo ThinkSystem and ThinkEdge inferencing servers help enterprises turn trained models into production-ready, low-latency AI applications across data center, cloud, and edge environments. By enabling faster, more efficient inference at scale, Lenovo helps CIOs bridge the gap between AI ambition and day-to-day business impact.

Building on this end-to-end AI foundation, Lenovo’s Smarter AI for All vision is focused on bringing AI to more people and businesses at scale, from enterprise infrastructure to AI PCs that deliver intelligent, personalised experiences directly to users. As outlined at Lenovo Tech World at CES 2026, Lenovo is advancing this vision across its AI PC and smartphone portfolio, with Lenovo and Motorola Qira representing one example of how personal AI can enhance productivity by understanding context across devices and helping users get things done.

Learn more about how enterprises can accelerate AI adoption with the right infrastructure, governance, and partnerships:Explore the full 2026 CIO Playbook report.

About the CIO Playbook Study

This is the third year of surveying CIOs in Europe and the Middle East, with Lenovo commissioning IDC which conducted research between 16th September 2025 and 17th October 2025. This year’s report draws on insights from 800 IT and business decision makers in Europe and the Middle East. Industries represented include: BFSI, Retail, Manufacturing, Telco/CSP, Healthcare, Government, Education and others.

About Lenovo

Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.

  • Data & AI
  • Digital Strategy

Ash Gawthorp, CTO and Co-founder of Ten10, on building the right foundations to shape the AI era in the UK

A recent study shows that UK businesses expect to increase their AI investment by an average of 40 percent over the next two years, following an average spend of £15.94 million this year. With investment surging, the UK is clearly in the fast lane, but the question is whether that momentum will convert into real, durable strength.

This rapid acceleration places the UK at a pivotal moment in its ambition to lead in artificial intelligence. Investment is rising, government focus is strengthening, and organisations across every sector are exploring AI at pace, creating a sense of real momentum. However, anyone who has experienced previous technology cycles will recognise the familiar tension that emerges during periods of rapid progress and optimism. Breakthroughs often attract significant attention and capital before entering a more grounded, sustainable phase.

The pressure today is not on AI as a whole. Instead, it is focused on a specific path, where belief in ever-larger transformer models delivering general intelligence continues to grow. This progress has been remarkable, but it represents only one path within a much broader AI landscape. As excitement reaches its peak, the market will inevitably stabilise. The long-term value will come through robust engineering, strong talent pipelines, and successful deployment in real-world environments.

The task now is to use this moment wisely. Long-term success depends on building deep capability at home, rather than relying on hype or outsourcing key foundations to external providers that sit outside our oversight and control.

The Limits of Scale as Strategy

A significant share of today’s investment is based on the assumption that increasing compute and model size will inevitably lead to artificial general intelligence (AGI). Transformer architectures have delivered extraordinary capability and accelerated progress in ways few predicted. They remain powerful systems for prediction and pattern recognition across language, images and other data.

However, scale is not a guarantee of general reasoning or broad intelligence. Many researchers believe that transformative progress may require developments beyond today’s dominant architecture. If that proves correct, the markets surrounding large closed models will experience a natural cooling. This would be an adjustment based on speculative expectation, not a failure of AI as a discipline. The industry would then shift toward approaches that prize clarity, modularity and measurable outcomes. Engineering discipline and architectural flexibility will matter far more than sheer size.

One Architecture Cannot Become a National Dependency

AI will continue to advance. The question for the UK is whether it builds capability that can evolve alongside that progress, or whether it locks itself to a narrow set of global platforms. A handful of model providers currently influence pricing, model behaviour and development cycles. When enterprises rely entirely on opaque APIs, they inherit changes without knowing why outputs shift, how models adapt or when pricing dynamics move. That introduces fragility that grows over time.

Some experimental use cases can tolerate opacity, but critical public services and regulated industries cannot. Lending, diagnostics, fraud detection and other high-stakes applications demand clarity over how decisions are formed and how logic stands up to scrutiny. In those environments, transparency and auditability shift from abstract ideals to essential operational requirements.

If the UK intends to embed AI deeply into essential systems, it must champion architectures that allow observability, explainability, control and replacement. Dependence on decisions made offshore is not a foundation for long-term strength.

Specialised Agents Reflect How Sustainable Systems Evolve

A practical and resilient approach to AI is already taking shape. Rather than depending on a single model to handle every task, organisations are assembling systems made up of specialised components. This mirrors the way effective teams work, where roles are defined, responsibilities are clear, and handovers are structured. One model transcribes speech, another classifies information, and a third retrieves or summarises content. Each performs a focused function that can be observed, validated and improved.

This modular design makes systems easier to maintain and evolve. New components can be adopted without rewriting entire frameworks. If performance changes or drift appears, individual parts can be evaluated or replaced without widespread disruption. This reflects long-standing engineering principles that value clarity, observability and the ability to substitute components when better options emerge.

Financial efficiency supports this approach as well. Running powerful frontier models for every interaction introduces cost and latency that scale quickly. Task-specific agents can often deliver the same outcome faster and more economically. Across thousands of interactions, the savings and performance gains become significant.

Engineering as the Anchor of Trustworthy AI

As AI becomes embedded in real systems, success relies on foundational engineering practices. Observability, continuous testing, performance monitoring and controlled deployment are essential. These are not new concepts created for AI, but long-established techniques that have been adapted to a new class of technology.

In early exploratory phases, it can be tempting to treat large models as something separate from traditional software systems. However, the moment AI begins to influence real decisions, the fundamentals return. Enterprises must be able to trace behaviour, explain recommendations and ensure consistent reliability, while regulators expect clarity and boards seek evidence-based decisions around technology choices, cost structures and risk.

Organisations that approach AI as engineered infrastructure, rather than a mysterious capability, will be far better equipped to scale safely and confidently.

Building Skills that Make Capability Real

The UK is fortunate to have strong research institutions, a sophisticated regulatory mindset and a robust software talent base. To convert these strengths into durable national advantage, investment in skills must expand beyond narrow data expertise. Data scientists remain crucial, but sustainable AI delivery depends equally on software engineers, cloud specialists, machine learning specialists, testers, governance experts and operational teams who run systems at scale.

Leading organisations recognise that AI delivery is a multidisciplinary effort. As architectures become more modular, value will flow from those who can integrate, monitor and guide AI systems responsibly. The UK must ensure that thousands of professionals have access to this training and experience. Real leadership emerges when capability is widely shared, not concentrated in a small group.

Governance that Accelerates Innovation

Strong governance does not slow innovation. It accelerates meaningful adoption by building confidence. When organisations can demonstrate transparency, control and reliability, AI can extend into more critical functions.

For national strategy, this becomes a competitive advantage. Industries that manage financial and clinical outcomes are not resistant to technology. They simply require evidence that systems behave consistently and transparently. If the UK excels in building AI that is observable, testable and replaceable, trust will grow and adoption will move faster.

Shaping a Resilient AI Future

Every technology cycle begins with excitement and eventually settles into maturity. Those who succeed through this transition are the ones who invest in capability while enthusiasm is high. When the current market resets, leadership will belong to those with engineering depth, system agility, responsible governance and the skills to integrate specialised intelligence across complex environments.

The UK has an opportunity to define this standard. Strength will come from transparency, interoperability and the ability to adapt to model and architecture changes without disruption. It is a quieter strategy than making declarations about imminent artificial general intelligence, yet it builds the resilience required to lead over the long term.

The future will reward systems that can evolve, remain auditable and operate securely at scale. With the right foundation, the UK can shape this era of AI not through scale alone, but through excellence in engineering, governance and talent. That foundation is the true measure of AI power, and now is the moment to build it.

Learn more at ten10.com

  • Data & AI
  • Digital Strategy

Joe Logan, CIO at iManage, on the need to avoid the hype, manage cybersecurity, focus on ROI and balance change management to get the best results with AI

Across the enterprise, AI promises transformational power – however, it’s not as simple as just plugging it into the organisation and instantly reaping the benefits. What are some of the top things CIOs need to focus on to avoid any pitfalls, unlock its value, and best position themselves for success with AI? 

1) Separate the Hype from Reality

Here’s what hype looks like: using AI to “radically transform the way you do business” or to “accelerate comprehensive digital transformation” or – heaven forbid – to “completely change our industry.” These are big statements – and absolutely dripping with hype.

Getting real with AI requires identifying specific use cases within the organisation where a particular type of AI can be deployed to achieve a specific goal. For example, maybe you want to reduce customer churn by 20% and have identified an opportunity to use chatbots powered by large language models to provide more effective customer service. That’s what reality looks like.

In separating the hype from reality, organisations gain the added benefit of clearing up any misconceptions – at any level of the organisation – about what AI can and can’t do, thus performing an important “level set” around expectations.

2) Understand the Implications for Cybersecurity

On one side, any AI tool you’re using has access to data, and that means that access needs to be controlled like any other system within your tech stack. The data needs to be secured and governed, and issues around privacy, sovereignty, and any other regulatory requirements need to be thoroughly addressed.

As part of this effort, organisations also need to be aware of the security measures required to protect the AI model itself from bad actors trying to manipulate that model. For example: prompt injection – inputs that prompt the model to perform unintended actions – can affect the model and its responses if not carefully guarded against.

Securing your AI system is one side of the coin; the other side is understanding how to apply AI to cybersecurity. There are a growing number of use cases here where AI can help identify risks or vulnerabilities by analysing large amounts of data, helping organisations to prioritise the areas they need to focus on for risk mitigation. 

In summary? While any usage of AI will require you to “play defence” on the security front, it will also enable you to “play offence” more effectively. In that sense, AI has multiple implications for cybersecurity.

3) Focus on the Right Kind of ROI

When it comes to ROI for any AI investments, don’t narrowly focus on absolute numbers when it comes to metrics like time savings or cost savings. While well-suited to industrial workplaces that are churning out widgets every day, absolute numbers can be an awkward fit when applied to a knowledge work setting.

The advice here for any knowledge-centric enterprise is: Don’t get hung up on the idea of actual dollars and cents or a specific number – instead, look for a relative improvement from a baseline. So, rather than saying “We’re going to reduce our customer acquisition costs by $100,000 this year”, it’d be more appropriate to focus on reducing existing customer acquisition costs by 10%. Likewise, don’t focus on each junior associate in the organisation completing five more due diligence projects per calendar year; look to complete due diligence projects in 30% less time.

4) Give Change Management its due

Change management has always mattered when it comes to introducing new technology into the enterprise. AI is no different: Successful adoption requires a focus on people, process, and technology – with a particular emphasis on those first two items.

A major challenge is educating the workforce with an eye towards improving their AI literacy – essentially, enabling them to understand what’s possible and how they can apply AI to their daily workflows. 

Know that a centralised model of control that dictates “this is how you can experiment with AI” is probably going to be ineffective. It will be too stifling for innovative individuals in the organisation. Far better to provide centres of excellence or educational resources to those people who are most inclined to take the initiative and move forward with AI experiments in their team or department. 

One caveat here: It’s essential to have guardrails in place as teams and individuals experiment with AI, to prevent misuse of the technology. That’s the tightrope that CIOs need to walk when introducing AI into the organisation. Striking the right balance between “total control” and “freedom to explore, but with appropriate oversight and guardrails”. 

The Future of AI Depends on what CIOs do next

The promise of AI is massive, but only if CIOs adopting the technology focus on the right areas. And that means filtering out the hype, keeping security implications top of mind, redefining ROI, and guiding change with a steady hand. By paying attention to these areas, CIOs can safely navigate a path forward with AI. And ensure that it isn’t just a technology with promise and potential, but one that delivers actual enterprise-wide impact.

Learn more at iManage

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly, it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit- (GPU) heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Jan Van Hoecke, VP AI Services at iManage and a highly experienced computer scientist with a passion for technology and problem-solving. on navigating the AI landscape for success in 2026

The AI landscape faces a number of big shifts in 2026. Agentic AI will undergo a reality check as enterprises discover the gap between marketing hype and actual capabilities, while organisations will go through a mindset change from treating AI hallucinations as crises to managing them, acknowledging the inherent limitations of the technology. There will also be a shift in how data will be structured in AI systems, to help the move from just finding facts (“what”) to understanding reasons (“why”).  Middleware application providers will face new challenges, as those vendors controlling both platforms and data will become more influential. Finally, standardised AI chat interfaces will evolve into smarter, dynamically generated, task-specific user experiences that adapt to immediate needs.  

Agentic AI Reality Check  

2026 is the year when agentic AI will get a reality check, as the gap between marketing promises made in 2025 and their actual competencies will become starkly visible. As enterprise adopters share the mixed successes of agentic AI, the market will begin to differentiate between true autonomous agents and the clever workflow wrappers.

Currently, many products promoted as AI agents are, in reality, rigidly programmed systems that simply follow predefined paths. They cannot independently plan or adapt in real-time to accomplish tasks. The current evolution of AI agents closely resembles the development of autonomous vehicles: early self-driving cars could only maintain lane position by relying strictly on preset instructions, and likewise, today’s AI agents are limited to executing narrowly defined tasks within established workflows. True autonomy, where AI agents can dynamically perform and solve complex problems better than humans and without human intervention, remains, for now, an aspirational goal.

AI Hallucination Goes from Crisis to Management

In 2026, the AI hallucination crisis will reach a critical juncture as organisations realise they must learn to coexist with the current fundamentally imperfect technology – until a new technology comes into play that can effectively address the issue. The focus will shift from AI hallucination ‘crisis’ to management.

As the industry deliberates who carries the liability for AI’s mistakes and inaccuracies – the tool makers or the users – enterprises will stop waiting for vendors to solve the problem and take matters into their own hands. They will adopt a variety of pragmatic risk mitigation strategies – from double and triple-checking work, and enforcing human oversight for high-stakes decisions, to taking hallucination insurance policies.

Major model builders acknowledge that current foundational LLM technology cannot eliminate hallucinations and ambiguity through incremental improvements alone. New technology is needed. Until then, and perhaps with the realisation that a technological breakthrough is years away, users will start driving the hallucination conversation – both by building systematic defenses within how they use AI, and forcing vendors to accept shared responsibility through better documentation and clearer model limitations.  

The Next Evolution in AI Data Architecture Lies in a Shift from “What” to “Why”

There will be a fundamental shift in how data is structured for AI systems, driven by the limitations of current approaches in answering complex questions. While Retrieval Augmented Generation (RAG) has proven effective at locating information and answering “what” questions, it struggles with the deeper “why” and “how” inquiries.

This limitation stems from RAG’s flat-file architecture, which excels at locating information but fails to capture the complex interconnections and relationships that underpin meaningful understanding and knowledge, especially in specialised domains like legal and professional services information.

The solution lies in AI-driven autonomous structuring of data. These systems will be better placed (than humans) to reveal critical relationships across multiple data points at scale, also highlighting the contextual dependencies essential for answering the “why” and “how” questions effectively.

Consequently, in 2026, with machines taking the lead, the method of structuring data will undergo a complete transformation, gradually eliminating the human role in creating structure, to reveal the business-critical interconnections across multiple data points.

Middleware AI Apps Squeeze

Given the essential link between data and AI, middleware companies that specialise in building custom applications layered on top of data platforms will begin to get pushed to the margins, forced to compete on niche features – while the core value of data and insight is captured by the platform owners. The true leaders will be those organisations that both own and manage their data, while also offering an AI-powered interface that enables users to interact with their data securely and efficiently, fully leveraging the capabilities of modern AI technology.

Shift to AI-generated, Task-Oriented User Interfaces

In 2026, the current traditional vendor-designed, standard AI chat-based user interfaces will transition to dynamically AI-generated task-specific user interfaces that adapt to users’ immediate needs. This represents a fundamental shift from standardised software – for example, where everyone uses identical Microsoft Word or SharePoint interfaces – to personalised, short-term user interfaces that exist only as long as the user requires them for a specific task.

This transformation will also address the critical pain point that users typically have – i.e, the crushing cognitive load of navigating bloated, feature-rich software. Instead of searching through endless menus in an overstuffed application like Excel, the user will simply state their goal – “Compare the Q3 and Q4 sales figures for our top 5 products and show me a chart” – and the AI will instantly generate a temporary, purpose-built interface – a “micro-app” – solely designed for that one single task.

In the context of dynamically generated user interfaces, both data storage and the creation of bespoke interfaces will be managed by AI. The AI organisations that will truly lead in providing such bespoke user interface-generating capability are those that possess and control their own data.

About iManage

iManage is dedicated to Making Knowledge Work™. Our cloud-native platform is at the centre of the knowledge economy, enabling every organisation to work more productively, collaboratively, and securely. Built on more than 20 years of industry experience, iManage helps leading organisations manage documents and emails more efficiently, protect vital information assets, and leverage knowledge to drive better business outcomes. As your strategic business partner, we employ our award-winning AI-enabled technology, an extensive partner ecosystem, and a customer-centric approach to provide support and guidance you can trust to make knowledge work for you. iManage is relied on by more than one million professionals at 4,000 organisations around the world.

Learn more at imanage.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Santo Orlando, Practice Director – App, Data and AI Services at Insight, on how your organisation can level up with Agentic AI

By now, most of us have heard of Generative AI. Many businesses have already adopted the technology for tasks like customer service, code generation and content creation. Generative AI, however, is only the start; we’re only scratching the surface of the potential that AI has to offer

Enter Agentic AI

Unlike Generative AI, which relies on human input and prompts, Agentic AI can act autonomously to fulfil complex tasks without human intervention. As a result, nearly 45% of business leaders think Agentic AI will outpace Generative AI in terms of impact, and more than 90% expect to adopt it even faster than they did with generative AI. However, despite its promise, our joint understanding of Agentic AI – and how to implement it – is still very much in its infancy.

So, where do you start? To kickstart your Agentic AI journey here are five fundamental steps to consider. 

Generative AI vs Agentic AI

If Generative AI is like having a personal assistant, supporting you one-on-one to speed up your tasks, then Agentic AI is more like having a dedicated team of smart, individual coworkers who can take initiative and get things done across your business – without needing constant oversight. 

One powerful example of this in action is in sales. With Agentic AI, organisations are able to receive real-time insights during discovery calls. The AI ‘agents’ allow sales reps to respond with timely, relevant information, helping them build trust, operate faster and close deals more effectively. 

By collecting and analysing data from across teams, agents can uncover patterns, translate complex metrics into actionable strategies and even highlight opportunities that might otherwise be unintentionally overlooked. In some early implementations, sales teams have reported saving five to ten hours per rep each month – adding up to thousands of hours redirected toward deeper customer engagement.

The one-to-one relationship we’ve grown accustomed to with Generative AI has evolved into the one-to-many dynamic of Agentic AI, which is capable of handling tasks for multiple users and automating entire business processes. Even more impressively, agents can make decisions, control data and take actions on their own. A capability that can seem daunting without a clear understanding of how it works.

That’s why businesses need to start small, and here are a few practical steps to get going quicklyand wisely with agentic AI. 

Step 1: Getting your data ready

Agentic AI is the logical progression for organisations already exploring generative tools. However, the data needs to be in an optimal condition – clean, organised and secure – before autonomous agents can be deployed effectively.

As such, eliminating redundant, outdated and trivial (ROT) data is vital. Without removing ROT, agents may rely on obsolete information, leading to inaccurate or misleading outputs. For example, this could happen if a company deploys an HR chatbot that’s connected to outdated data sources. If an employee were to ask about their 2025 benefits, the chatbot might pull information from as far back as 2017, resulting in confusion and misinformation.

Proper file labelling, standardised document practices and use of version histories in place of multiple saved versions helps to ensure agents access only the most relevant and accurate information.

Step 2: Start with low-risk cases 

Agents work on a transactional basis, charging for each operation, which can quickly add up. As such, it’s wise to experiment with simple, low-stakes applications first. This approach allows for quicker deployment and demonstrates immediate value to the business without significant costs or risks.

One example could be using an agent to assess sentiment in social media responses following a product launch. This can offer real-time feedback on public perception and inform messaging strategies. Other low-risk use cases include generating reactive press releases and monitoring competitor websites. Additionally, prioritising automation of routine tasks, especially those involving platforms like Salesforce, SharePoint, or Microsoft 365, allows teams to maximise impact without costly system overhauls. 

Overall, organisations need to be willing to fail fast and expect failure. It won’t be perfect from the start. However, an experimental pilot approach helps to efficiently refine AI agents, reducing the risk of costly mistakes and making sure that only effective solutions are scaled up.

Step 3: Create a single source of truth

Establishing a dedicated, cross-functional team to explore agentic AI use cases helps prevent siloed adoption and supports enterprise-wide visibility. This team should span as much of the organisation as possible and include representatives from departments such as marketing, finance and technical solutions.

Collaborative workshops can then act as a forum to identify key processes that would benefit from autonomous capabilities and help businesses align potential applications with specific departmental objectives and broader business goals.

Step 4: Learn, learn and learn

Many companies underestimated the importance of training and governance with Generative AI – and Agentic AI is no different. Organisations need to establish clear governance to define how AI agents should and shouldn’t be used, covering not just technical implications, but HR, compliance and risk concerns as well.

Equally, businesses and those employed must understand Agentic AI’s full functionality to get the most out of it. Like with almost all technical training, AI education cannot be viewed as a one-time ‘tick-box’ exercise. Ongoing learning is necessary to keep pace with new capabilities and best practices.

For example, consider what’s already emerging, like security agents that automate high-volume threat protection and identity management tasks; sales agents that find leads, reach out to customers and set up meetings; and reasoning agents that transform vast amounts of data into strategic business insights.   

Step 5: Reviewing ROI

Enthusiasm around Agentic AI is high. But before organisations dive in headfirst, it’s important they first define success. Technology can’t be the solution if there is uncertainty surrounding the goal. Successful deployment requires a clear definition of the problem organisations are looking to solve and knowledge of how to align the solution with measurable business value. Without this, initiatives risk stalling at the experimental stage.

Key performance indicators should also be identified early. These may include increased productivity, time savings, cost reduction or improved decision-making. Establishing these benchmarks and taking a data-driven approach ensures that AI initiatives align with business goals and demonstrate tangible benefits to stakeholders.

Moving forward

The process of switching to Agentic AI is about changing how businesses handle everyday problems with wide ranging effects, not just about using cutting edge technology. Iteration and learning along the way, as well as deliberate, measured adoption are the keys to increasing value. It’s simple. Success with AI starts with small, straightforward actions and use cases.

Learn more at insight.com

  • Data & AI
  • Digital Strategy

Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2025, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation

Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing invest significantly. According to Gartner, generative AI has produced a significant increase in infrastructure spending from organisations across the last few months, which prompted it to add approximately $63 billion to its January 2024 IT spending forecast. 

Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

Game-Changing Innovation 

Most SMBs don’t have the same ability for taking spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume it to be an elite tool reserved for the major players.

To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

Enhancing customer experience

Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 

    Powering day-to-day procedures

    One of the most common and inclusive uses of AI across organisations is for assisting and automating everyday tasks including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also saves the need for these organisations to outsource these capabilities where they might not have them otherwise. 

      Minimising waste 

      AI is also helping businesses to drive profit, minimising wasted resources, and identifying potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or a heightened demand for a particular product. More impressively, however, they are also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions which costs valuable time, money, and resources. 

        According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting edge-competitors, AI tools are quickly becoming a must-have for their inventory. 

        For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial. 

        Laying down the foundations

        Adopting AI isn’t as straightforward as ‘plug and play’ and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out. 

        Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts. 

        SMB leaders looking to implement AI first need to ask the following:

        What can AI do for me? 

        Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no-other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout. 

        Can my business manage its data? 

        AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way. 

        What about regulation?

        The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible. 

        Embracing Innovation

        This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind. 

        The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget. 

        By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape. 

        About ANS

        ANS is a digital transformation provider and Microsoft’s UK Services Partner of the Year 2025. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations. With a strong commitment to community, diversity, and inclusion, ANS aims to empower local talent and contribute to the growth of the Northwest tech ecosystem. Understanding customers’ needs is at the heart of ANS’s approach, setting them apart from any other company in the industry. 

        The ANS Academy is rated outstanding by Ofsted and offers in-house apprenticeships across a range of technology disciplines. ANS has supported more than 250 apprentices to gain qualifications in the last decade via apprenticeships across technology, commercial, finance, business administration and marketing. 

        ANS owns and operates five IL3‐accredited data centres in Manchester and has an ecosystem of tech partners including Microsoft (Gold Partner), AWS, VMWare, Citrix, HPE, Dell, Commvault and Cisco. It is one of the very few organisations to have received all six of Microsoft’s Solutions Partner Designations. 

        Find out more at ans.co.uk

        • Artificial Intelligence in FinTech
        • Data & AI
        • Digital Strategy

        Cathal McCarthy, Chief Strategy Officer at Kore.ai, on why now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference

        The generative AI boom has triggered a wave of enterprise experimentation. From proof-of-concepts to customer-facing AI Agents, which can be launched at pace but too often in isolation. This comes as MIT’s latest report finds that only 5% of Generative AI pilots are successful, with the majority failing due to poor integration with enterprise systems and in-house implementations without engagement with expert vendors.

        As adoption grows, so does the call for accountability. Control and centralisation is more important than ever. Siloed operations and experimentation pilots have meant that there are a trail of disconnected tools, incomplete experiments and sometimes confusion within enterprises of where AI is being used and who is using it, meaning it can’t be governed effectively.

        Now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference. The state of play today shows where clear changes are needed.

        AI Islands

        In a recent report from Boston Consulting Group and Kore.ai, 80% of AI leaders say they now favour platform-based strategies over scattered deployments. These platforms are not just about efficiency; they’re quickly becoming the only viable model for visibility, scalability and governance.

        The consequences of fragmentation are starting to show. CIOs and CTOs are sounding the alarm on siloed AI solutions that make it harder to measure impact, manage risk, or move quickly. This is often the case when AI tools and solutions are implemented in-house and without proven expertise.

        These ‘AI islands’ are hard to govern, expensive to integrate and nearly impossible to scale responsibly. More than half surveyed in the report say current AI solutions are slowing them down and nearly three-quarters highlight explainability and compliance as top concerns. Clearly, connecting these AI islands together via a common platform can offer more long-term benefits such as better governance, faster time to market, and cost consolidation.

        Regulation Demands New Architecture

        Where governance could have been considered a final step by some, it now has to be a design principle from the outset. Transparency, auditability, and oversight must be built into the very fabric of how AI is developed, deployed and monitored.

        Take the EU AI Act for example, the world’s first broad AI law, now applying to general-purpose AI models from August 2nd, 2025. The rules aim to boost transparency, safety and accountability across the AI value chain while preserving innovation.

        According to the BCG report, 74% of leaders believe new regulations will significantly influence how they roll out AI across their organisations. And for good reason. Fragmented systems don’t just introduce inefficiency, they create gaps that regulators, stakeholders and customers are not ready to accept.

        For all the talk of regulation as a constraint, it’s also an opportunity. Regulations should be seen as catalysts, rather than roadblocks. Companies that ensure governance is hard-wired into their AI projects don’t just avoid risk, they create greater trust. And this means greater adoption. This is what leaders need to see, as increased adoption of AI products ensures sustainable, long-term growth.

        Enterprises in industries holding sensitive and personal data like BFSI, healthcare and retail, are already adopting a platform-based approach. Not only does this ensure integration across the business but also means it future proofs compliance, meeting industry and government regulated standards today but also building in parameters for upcoming regulations.

        Gaining Control

        Adopting a platform model doesn’t limit creativity. And it doesn’t mean sacrificing flexibility. Instead of juggling multiple tools, you get one place to plug in what you’ve built and get the best of what’s out there. By running all of your AI capabilities under one unified platform and set of guardrails, your teams across the organisation move forward with one framework, which means, they move faster, make quicker decisions and have a clear understanding of what is – and isn’t – working.

        Most importantly, a platform turns compliance into a competitive and operational advantage. You can swap models, scale pilots and grow without silos tripping you up, and bring centralised control. This momentum is crucial for scaling and growing an organisation. Platforms create the foundation to scale AI responsibly and effectively and that’s key for future-proofing AI projects and creating impact that matters.

        • Data & AI
        • Digital Strategy

        Interface hears from Emergn CTO Fredrik Hagstroem on approaches to AI best practice that can drive positive business transformations

        What does it actually mean for an organisation to be AI-ready, beyond having the right tools and data

        “Being AI-ready is fundamentally about openness to learning and the ability to react quickly. While having the right tools and well-managed data is essential, true readiness is defined by an organisation’s capacity to operate, monitor, and measure the effectiveness of AI solutions.

        We often see organisations invest heavily in implementation and tooling, only to realise that no one is prepared to take responsibility for running, monitoring, and improving AI systems.

        AI-savvy organisations design solutions differently depending on the type of work, operational versus knowledge work, and, for knowledge work, focus on measuring effectiveness rather than just productivity.”

        Where do most companies go wrong when trying to embed AI into their operations?

        “Many companies treat AI solutions like traditional IT projects, using user acceptance as a checkpoint between development and handover to IT operations. This approach often fails before it even begins.

        AI performs tasks that typically require human intelligence, perception, reasoning, and decision-making. While AI can execute these tasks with far greater precision and consistency than humans, someone within the organisation remains ultimately accountable for the results.

        The most common misstep is underestimating the need to provide users with the right level of oversight and control so they can accept accountability for AI-driven decisions.

        For example, explaining how AI decisions are made and demonstrating that they are ethical and fair depends not only on transparency and traceability but also on maintaining control and proper training data records.”

        How can leaders prevent transformation fatigue during AI-driven change initiatives?

        “Change is inevitable, so responding to it is part of effective leadership. AI will transform how businesses operate, but transformation fatigue arises when people feel constantly subject to change rather than in control of it.

        Deliberate planning and thoughtful communication help, but the most effective approach is to empower people to feel more in control. This often involves organising teams around value streams that cut across business, technology, and operations.

        Leaders can ensure teams have the skills and information necessary to take ownership of outcomes and make adjustments based on real results. This is especially important with AI solutions, which should be structured to provide continuous feedback, allowing teams to monitor performance, improve models, and refine processes based on learning.”

        What kind of mindset and cultural shift is required for AI to deliver long-term value?

        “Delivering long-term value from AI requires a shift from control to collaboration, and from predictability to adaptability. Organisations focused on individual targets and siloed accountability often struggle to realise AI’s full potential.

        Value emerges when teams adopt a collective mindset, defining success by shared outcomes, whether customer experience, business impact, or strategic growth. Individual productivity only matters when it benefits the whole system.

        Another critical shift is embracing uncertainty. Traditional corporate cultures often reward certainty and fixed plans. Cultures that support experimentation, feedback loops, and incremental change are more likely to see lasting benefits from AI.

        This cultural evolution isn’t just about tools; it’s about how work is structured, how teams interact, and how decisions are made. Empowering teams to act fast, learn fast, and improve fast is central to sustaining AI-driven value.”

        How can organisations balance AI experimentation with maintaining trust, transparency, and alignment with business goals?

        “Each AI initiative should be evaluated based on the type of work and value it aims to deliver, whether efficiency, experience, or innovation. Different goals require different levels of oversight and distinct success metrics, making a portfolio approach to investment essential. Maintaining alignment with business goals means focusing on outcomes rather than outputs.

        This requires systems where feedback, transparency, and learning are built in from the start, allowing initiatives to fail gracefully. Trust begins with a clear governance framework, as AI, like any transformative technology, can have unintended consequences. Transparency is not just audit trails; it’s about inviting dialogue, sharing lessons learned, and adapting as standards and regulations evolve.

        Experimentation and learning go hand in hand. Delivering incremental value early builds credibility and transparency, helping teams understand what works and what doesn’t. Ultimately, AI is only valuable to the extent that it drives the business toward its strategic goals.”

        How do organisations deal with some of the risks associated with AI – hallucinations, privacy issues, etc. – and how do they go about both securing essential data and overcoming employee resistance to the technology?

        “Treating AI adoption as an iterative, feedback-driven process is key to managing risks. Success is less about getting everything perfect from the start and more about structuring work to minimise unintended consequences and adapt quickly.

        “Hallucinations” is a misleading term. Today’s AI doesn’t imagine things; it follows programmed rules based on probabilities and patterns. Like any software, AI carries risks of errors or mismanaged data.

        What is new is how AI uses data, to train models that imitate human decision-making. Without careful management, models can produce biased or unethical outcomes. Technology does not remove employee accountability. Recognising this allows organisations to design AI solutions with lower risk.

        Designing solutions with humans in the loop is critical. It promotes transparency and explainability and is the most effective way to overcome resistance while maintaining control over outcomes.”

        Find out more from Emergn

        • Data & AI
        • People & Culture

        Join thousands of attendees in Dubai for the 2nd annual Artificial Intelligence & Data Science conference and find out what’s new in Data & AI

        Attend one of the leading international conferences aimed at gathering world-class researchers, academics, industry experts, and students to present and discuss the recent innovations in Artificial Intelligence (AI), Machine Learning, and Data Science. As technology increasingly transforms industries and societies globally, this conference offers a valuable chance to exchange ideas, share knowledge, and build collaborations. These will define the future of intelligent systems and data-driven decision-making. Register for tickets now!

        Artificial Intelligence & Data Science – The Conference Program

        The program of the conference aims to offer both theoretical and practical viewpoints with keynote talks by global experts, oral and poster sessions, panel sessions, exhibitions, and courses. Participants will be able to learn about the latest methods in AI and Data Science from real-world use cases. Join discussions regarding the ethical, social, and technological issues involved with using AI in various fields from healthcare, finance and education to retail, transportation and smart cities.

        Expected Take-Aways:

        • Technical Insights & Deep Learning
        • Future-Ready Competencies
        • Actionable Tools & Recipes
        • Business & Strategic Frameworks
        • Network & Collaborations
        • Visibility & Recognition
        • Confidence & Vision
        • Career Development & Leadership Skills

        Networking in Dubai

        The host city, Dubai, also lends a unique flavour to the conference. As a world-renowned centre of innovation, business and technological advancement, Dubai is known for its world-class infrastructure and international accessibility. It’s the perfect platform for international collaboration. In addition to professional interaction, delegates can also sample the city’s cultural diversity and lively atmosphere, complementing their conference experience.

        Among the key objectives of the conference is to ensure networking and cooperation among the attendees. Researchers, practitioners, students, and policymakers can meet, learn from each other, and discover possible partnerships that stimulate innovation. Students and young professionals learn from mentorship, exposure to new technologies, and the opportunity to showcase their work to the world. Industry attendees learn about the latest trends and solutions that guide strategic decision-making and competitive edge.

        Artificial Intelligence & Data Science is a gateway to knowledge, cooperation, and innovation. It provides participants with the tools, networks, and intelligence needed to succeed in the fast-changing technological landscape.

        If you are a researcher, professional, student, or policymaker, attending the Artificial Intelligence & Data Science Conference 2026 in Dubai is an unbeatable chance to help shape the future of AI and Data Science across the globe. Register for tickets now!



        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • People & Culture

        Samsung and OpenAI Announce Strategic Partnership to Accelerate Advancements in Global AI Infrastructure

        Samsung will bring together technologies and innovations across advanced semiconductors, data centres, shipbuilding, cloud services and maritime technologies

        OpenAI, Samsung Electronics, Samsung SDS, Samsung C&T and Samsung Heavy Industries have announced a letter of intent (LOI) for their strategic partnership to accelerate advancements in global AI data centre infrastructure and develop future technologies together in relevant fields. This expansive collaboration will bring together the collective strengths and leadership of Samsung companies across semiconductors, data centres, shipbuilding, cloud services and maritime technologies.

        The signing ceremony was held at Samsung’s corporate headquarters in Seoul, Korea, attended by Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics; Sung-an Choi, Vice Chairman & CEO of Samsung Heavy Industries; Sechul Oh, President & CEO of Samsung C&T; and Junehee Lee, President & CEO of Samsung SDS.

        Samsung Electronics

        Samsung Electronics will work with OpenAI as a strategic memory partner to supply advanced semiconductor solutions for OpenAI’s global Stargate initiative. With OpenAI’s memory demand projected to reach up to 900,000 DRAM wafers per month, Samsung will contribute toward meeting this need with its extensive lineup of high-performance DRAM solutions.

        As a comprehensive semiconductor solutions provider, Samsung’s leading technologies span across memory, logic and foundry with a diverse product portfolio that supports the full AI workflow from training to inference.

        The company also brings differentiated capabilities in advanced chip packaging and heterogeneous integration between memory and system semiconductors, enabling it to provide unique solutions for OpenAI.

        Samsung SDS

        Samsung SDS has entered into a potential partnership with OpenAI to jointly develop AI data centre and provide enterprise AI services.

        Leveraging its expertise in advanced data center technologies, Samsung SDS will collaborate with OpenAI in the design, development and operation of the Stargate AI data centers. Under the LOI, Samsung SDS can now provide consulting, deployment and management services for businesses seeking to integrate OpenAI’s AI models into their internal systems.

        In addition, Samsung SDS has signed a reseller partnership for OpenAI’s services in Korea and plans to support local companies in adopting OpenAI’s ChatGPT Enterprise offerings.

        Samsung C&T and Samsung Heavy Industries

        Samsung C&T and Samsung Heavy Industries will collaborate with OpenAI to advance global AI data centers, with a particular focus on the joint development of floating data centers.

        Floating data centers are considered to have advantages over data centers because they can address land scarcity and lower cooling costs. Still, their technical complexity has so far limited wider deployment.

        Building on their proprietary technologies, Samsung C&T and Samsung Heavy Industries will also explore opportunities to pursue projects in floating power plants and control centers, in addition to floating data center infrastructure.

        Starting with the landmark partnership with OpenAI, Samsung plans to fully support Korea’s goals to become one of the world’s top three nations in AI and create new opportunities in the field.

        Samsung is also exploring broader adoption of ChatGPT within the companies to facilitate AI transformation in the workplace.

        About OpenAI

        OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.

        About Samsung Electronics Co., Ltd.

        Samsung inspires the world and shapes the future with transformative ideas and technologies. The company is redefining the worlds of TVs, digital signage, smartphones, wearables, tablets, home appliances and network systems, as well as memory, system LSI and foundry. Samsung is also advancing medical imaging technologies, HVAC solutions and robotics, while creating innovative automotive and audio products through Harman. With its SmartThings ecosystem, open collaboration with partners, and integration of AI across its portfolio, Samsung delivers a seamless and intelligent connected experience.

        • Digital Strategy

        Collaborating with Amdocs has been a game-changer for Telkom. Here’s why.

        As telecom companies race to adopt generative AI, a critical shift is underway – from generic copilots to deeply verticalised, telco-grade agents. Amdocs, in collaboration with AWS and NVIDIA, is leading this evolution with its amAIz Agents – introducing a new class of AI agents built specifically for the telecom industry.

        Unlike general-purpose AI, verticalised agents are built with domain-specific knowledge, reasoning, and telco ontology that reflect the complexity of telecom operations. These agents understand service plans, billing structures, and network topologies, enabling them to deliver context-aware responses and take meaningful action.

        Amdocs, NVIDIA and AWS released a publication that defines and showcases how AI agents can be tailored for specific telecom domains, illustrating the concept of ‘agent verticalization’ and its impact on operational efficiency and customer experience. These domain-specific agents, across every telco domain like care, sales, network, and marketing, work in coordination, enabling end-to-end automation and intelligent customer engagement through seamless orchestration.

        In the whitepaper, AI Verticalization for Telco’, Amdocs outlines the essential traits of telco-grade agents such as composable architecture, reasoning, and agentic experience, and enterprise-grade traits such as trust, security, and cloud-native scalability. 

        Amdocs: Three decades as a key transformation partner

        It’s a rare thing, in the fast-paced world of technology, for partnerships to last decades. However, for Telkom, Amdocs has been by its side for almost 30 years. The latter has played a critical role in supporting both mobile and wireline operation through its B/OSS platforms. These platforms are regarded as industry leaders, and Telkom has been able to navigate major shifts with Amdocs’s help, from legacy to next-gen digital stacks.

        “We have been in this game for some time, being the digital backbone of choice for South Africa, really, Amdocs has been a strategic partner of Telkom for over 30 years,” says Dr Noxolo Kubheka-Dlamini, Chief Digital and Information Officer at Telkom. “We have a shared goal of delivering a better, faster, and more seamless experience to our customers. What stands out about Amdocs is their deep domain expertise, strong delivery capabilities, commitment to our success, and ability to evolve with our ambitious goals. We see them as an extension of our own teams.”

        Read the full Telkom and Amdocs story in the latest issue of Interface Magazine.

        Accenture is helping SSEN Transmission manage hundreds of infrastructure projects vital to achieving the UK’s Net Zero ambition. Effective delivery…

        Accenture is helping SSEN Transmission manage hundreds of infrastructure projects vital to achieving the UK’s Net Zero ambition. Effective delivery required addressing fragmented data and disconnected tools that can slow the flow of information between systems. SSEN Transmission sought a partner to help reshape its approach for data-driven execution on capital projects.

        Meeting the Digital Challenge with Accenture

        SSEN Transmission partnered with Accenture to embrace automation and digitisation in response to increasing project demands, a challenge reflected across the wider Capital Projects sector. Through the adoption of BIM (Building Information Modelling) and the implementation of Integrated Project Management (IPM), which was developed with Oracle and Microsoft, this collaboration laid the groundwork for more connected ways of working and continues to promote transformation across the organisation.

        Key Benefits Delivered

        Accenture supported with IPM (Integrated Project Management) and Building Information Modelling (BIM) customised to meet specific needs and achieve key goals: 

        • Digitise processes for a single unified environment
        • Unify data for a standardised and trusted source of truth
        • Create a scalable platform for delivering capital projects

        “With a unified real-time view of project data, SSEN Transmission has improved efficiency and strengthened collaboration across internal teams and with external partners. This allows for more time focused on higher value insight-led work, supporting better outcomes, faster decisions and much more agile delivery”

        Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI

        Building for the Future

        More than a solutions provider, Accenture helps with strategy and issupporting SSEN Transmission’s continued focus on refining best practice for smooth project delivery. The partnership is helping to evolve ways of working and strengthening the digital foundation for future readiness.

        “Our collaboration is built on a strong digital foundation that can scale with SSEN Transmission’s growing needs. By unifying systems, data, and process, we are enabling the faster adoption of new capabilities and supporting the shift towards a fully data-driven capital project delivery”

        Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure

        Accenture: A Partner for the Journey

        Transformation is a journey that begins with the right foundation across people, data and process. It also requires a digital partner that brings together the best of industry experience, process excellence and technology to:

        • Develop a clear, actionable strategy for digital and data transformation
        • Embed industry best practices to optimise processes and drive continuous improvement
        • Enable smarter, more consistent delivery aligned to a long-term vision, from strategy through to execution

        And that’s where Accenture makes its mark, helping clients navigate the journey with confidence.

        Learn more about how Accenture is supporting SSEN Transmission on its digitisation journey with Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI and Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure

        • Digital Strategy
        • Infrastructure & Cloud
        • Sustainability Technology

        Satya Mishra, Director, Product Management at Amazon Business, discusses how CPOs have become an important voice at the table to drive digital transformation and efficient collaboration.

        Harnessing efficiency is at the heart of any digital transformation journey.

        Digitalisation should revolve around driving efficiency and achieving cost savings. Otherwise, why do it?

        Amazon is no stranger to simplifying shopping for its customers. It is why Amazon has become a global leader in e-commerce. But, business-to-business customers can have different needs than traditional consumers, which is what led to the birth of Amazon Business in 2015. Amazon Business simplifies procurement processes, and one of the key ways it does this is by integrating with third-party systems to drive efficiencies and quickly discover insights. 

        Satya Mishra, Director, Product Management at Amazon Business, tells us all about how the organisation is helping procurement leaders to integrate their systems to lead to time and money savings.

        Satya Mishra: “More than six million customers around the world tap Amazon Business to access business-only pricing and selection, purchasing system integrations, a curated site experience, Business Prime, single or multi-user business accounts, and dedicated customer support, among other benefits.

        “I lead Amazon Business’ integrations tech team, which builds integrations with third-party e-procurement, expense management, e-sourcing and idP systems. We also build APIs for our customers that either they or the third-party system integrators can use to create solutions that meet customers’ procurement needs. Integrations can allow business buyers to create connected buying journeys, which we call smart business buying journeys. 

        “If a customer does not have existing procurement systems they’d like to integrate, they can take advantage of other native tools, like a Business Analytics dashboard, in the Amazon Business store, so they can monitor their business spend. They can also discover and use some third-party integrated apps in the new Amazon Business App Center.”

        Why would a customer choose to integrate their systems? Are CPOs leading the way?

        Satya Mishra: “By integrating systems, customers can save time and money, drive compliance, spend visibility, and gain clearer insights. I talk to CPOs frequently to learn about their pain points. I often hear from these leaders that it can be tough for procurement teams to manage or create purchasing policies. This is especially if they have a high volume of purchases coming in from employees across their whole organisation, with a small group of employees, or even one employee, manually reviewing and reconciling. Integrations can automate these processes and help create a more intuitive buying experience across systems.

        “Procurement is a strategic business function. It’s data-driven and measurable. CPOs manage the business buying, and the business buying can directly impact an organisation’s bottom line. If procurement tools don’t automatically connect to a source of supply, business buying decisions can become more complex. Properly integrated technology systems can help solve these issues for procurement leaders.”

        Satya Mishra, Director, Product Management at Amazon Business

        Beyond process complexity, what other challenges are procurement leaders facing?

        Satya Mishra: “In the Amazon Business 2024 State of Procurement Report, other top challenges respondents reported were having access to a wide range of sellers and products that meet their needs, and ensuring compliance with spend policies. 

        “The report also found that 52% of procurement decision-makers are responsible for making purchases for multiple locations. Of that group, 57% make purchases for multiple countries.

        “During my conversations with CPOs, I hear them say that having access to millions of products across many categories through Amazon Business has allowed them to streamline their supplier quantity and reduced time spent going to physical stores or trying to find products they’re looking for from a range of online websites. They’ve also shared that the ability to ship purchases from Amazon Business to multiple addresses has been very helpful in reducing complexity for both spot-buy and planned or recurring purchases. Organisations may need to buy specific products, like copy paper or snacks, in a recurring way. They may need to buy something else, like desks, only once, and in bulk, at that. Amazon Business’ ordering capabilities are agile and can lessen the purchasing complexity.”

        How should procurement leaders choose which integrations will help them the most? 

        Satya Mishra: “At Amazon Business, we work backwards from customer problems to find solutions. I recommend CPOs think about what existing systems their employees may already use, the organisation’s buying needs, and their buyers’ typical purchasing behaviors. The buying experience should be intuitive and delightful. 

        “Amazon Business integrates with more than 300 systems, like Coupa, SAP Ariba, Okta, Fairmarkit, and Intuit Quickbooks, to name just a handful. With e-procurement integrations like Punchout and Integrated Search, customers start their buying journey in their e-procurement system. With Punch-in, they start on the Amazon Business website, then punch into their e-procurement system. With SSO, customers can use their existing employee credentials. Our collection of APIs can help customers customise their procure-to-pay and source-to-settle operations. This includes automating receipts in expense management systems and track progress toward spending goals. 

        “My team recently launched an App Center where customers can discover third-party apps spanning Accounting Management, Rewards & Recognition, Expense Management, Integrated Shopping and Inventory Management categories. We’ll continue to add more apps over time to help simplify the integrated app discovery process for customers.

        “Some customers choose to stack their integrations, while others stick with one integration that serves their needs. There are many possibilities, and you don’t just have to choose one integration. You can start with Punchout and e-invoicing, for example, and then also integrate with Integrated Search, so your buyers can search the Amazon Business catalog within the e-procurement system your organisation uses.”

        Are integrations tech projects?

        Satya Mishra: “No, integrations should not be viewed as tech projects to be decided by only an IT team. Integrations open doors to greater data connectivity and business efficiencies across organisations. Instead of having disjointed data streams, you can connect those systems and centralise data, increasing spend visibility. You may be able to spot patterns and identify cost savings that may have gotten lost otherwise. 

        “It’s not uncommon for me to hear that CPOs, CFOs and CIOs are collaborating on business decisions that will save them all time and meet shared goals, and integrations are in their mix of recommendations. 

        “One of my team’s key goals has been to simplify integrations and bring in more self-service solutions. In terms of set-up, some integrations like SSO can be self-serviced by the customer. Amazon Business can help customers with the set-up process for integrations as well.”

        How has procurement transformed in recent years?

        Satya Mishra: “Procurement is no longer viewed as a back-office function. CPOs more commonly have a seat at the table for strategic cross-functional decisions with CFOs and CIOs.

        “95% of Amazon Business 2024 State of Procurement Report respondents say the purchases they make mostly fall into managed spend. Managed spending is often planned for months or years ahead of time. This can create a great opportunity to recruit other stakeholders across departments versus outsourcing purchasing responsibilities. Equipping domain experts to support routine purchasing activities allows procurement to uplevel its focus and take on higher priorities across the organisation, while still maintaining oversight of overarching buying patterns. It’s also worth noting that by connecting to e-procurement and expense management systems, integrations provide easy and secure access to products on Amazon Business and help facilitate managed spend.”

        What does the future of procurement look like?

        Satya Mishra: “Bright! By embracing digital transformation and artificial intelligence to form more agile and strategic operations, CPOs can influence the ways their organisations innovate and adapt to change.”

        Read the latest CPOstrategy here!

        • Procurement Strategy

        Nigel Greatorex, Global Industry Manager at ABB, on how digital technologies can support decarbonisation and net zero goals

        Nigel Greatorex is the Global Industry Manager for Carbon Capture and Storage (CCS) at ABB Energy Industries. He explains how digital technologies can play a critical role in the transition to a low carbon world by enabling global emissions reductions. Furthermore, he highlights the role of CCS and how challenges can be overcome through digitalisation.

        Meeting our global decarbonisation goals is arguably the most pressing challenge facing humanity. Moreover, solving this requires concerted global action. However, there is no silver bullet to the global warming crisis. The solution requires a mix of investment, legislation and, importantly, innovative digital technologies.

        Decarbonisation digital technologies

        It’s widely recognised decarbonisation is essential to achieving net zero emissions by 2050. Decarbonisation technology is becoming an increasingly important, rapidly growing market. It is especially relevant for heavy industries – such as chemicals, cement and steel. These account for 70 percent of industrial CO2 emissions; equal to approximately six billion tons annually.

        CCS digital technologies are increasingly seen as key to helping industries decarbonise their operations. Reaching our net zero targets requires industry uptake of CCS to grow 120-fold by 2050, according to analysis from McKinsey & Company. Indeed, if successful, it could be responsible for reducing CO2 emissions from the industrial sector by 45 percent.

        A Digital Twin solution

        ABB and Pace CCS joined forces to deliver a digital twin solution. It reduces the cost of integrating CCS into new and existing industrial operations. Simulating the design stage and test scenarios to deliver proof of concept gives customers peace of mind. Indeed, system designs need to be fit for purpose. Also, it demonstrates the smooth transition into CCS operations. Additionally, the digital twin models the full value chain of a CCS system.

        Read the full story here

        • Sustainability Technology

        In early 2019, the Voluntary Health Insurance Scheme (VHIS) was introduced in Hong Kong by the Food and Health Bureau…

        In early 2019, the Voluntary Health Insurance Scheme (VHIS) was introduced in Hong Kong by the Food and Health Bureau to regulate indemnity hospital insurance plans offered to individuals, with voluntary participation by insurance companies and consumers. The VHIS was designed as a means of encouraging and supporting customers to purchase private healthcare services and for Koh Yi Mien, Managing Director Health and Employee Benefits at AXA Hong Kong, this scheme represents a broader transformation of healthcare and insurance services. “Currently, the demand on healthcare in Hong Kong in the public sector is incredibly high with very long waiting times and waiting lists,” she explains. “As a result, people just aren’t getting timely access to treatment. The private sector in Hong Kong, which is world-class, has capacity. So, if we can rebalance and shift some of the elective work from public to private, it will free up more people to use the public service in a timely fashion.”

        Yi Mien also points to a global drive for greater transparency, accountability, use of data and technology as well as promoting customer choice as key drivers of change in the insurance space. “It’s no longer a case of simply providing reimbursement to people when they need treatment,” she says. “It’s about being the patient’s partner throughout their whole life so that when they need healthcare, whenever and wherever they are, we are there to help and support them in their times of need.” 

        The modern-day insurance customer is very different from the customer of the past. We live in times of greater access to information through the advent of social media and the increasing influence of the Internet and this has resulted in insurance customers being more knowledgeable about their conditions and asking more questions of their doctors than ever before. As a result, the balance between the customer and the healthcare provider is becoming more equitable. “Customers and patients, as a result, are becoming more demanding,” says Yi Mien. “Gone are the traditional ideas that doctor knows best. It’s not uncommon for patients to see their doctor with a list of demands, while expecting to be serviced.”

        Running parallel to becoming more knowledgeable and demanding is the use of smartphones and how it has created a culture of service in an instant. When customers purchase etiquettes or use banking services, they expect the ability to be able to access and complete these transactions and services via their smartphone devices. Fewer and fewer people are accessing physical bank branches and the healthcare insurance sector, despite being still very traditional, is feeling the effects of this instant demand. “Healthcare is a very traditional sector sure, but asking patients or customers to book weeks in advance and telling them they don’t really have any choice is becoming increasingly unacceptable and so healthcare becomes a commodity,” says Mie Koh. “They, like any other customer, vote with their feet and want 24/7 access to quality healthcare without waiting directly from us as the insurer.”

        The informed customer and patient have also transformed the relationship between customer and doctor. It is no longer a bilateral relationship and the entire healthcare ecosystem works to provide services from prevention right through to treatment. The result? Insurers like AXA work with customers before they are sick and encourage them to maintain their health, but they also work with clients during their illness and even afterwards AXA will continue to treat them in their rehabilitation. “During their healthcare journey, customers want some handholding in order to navigate the very complex healthcare system, to make sure they get the right healthcare provider, doctor and hospitals that are best for them in their time of need,” says Yi Mien. “This can only happen if we are using digital so that it becomes more real time.”

        AXA has been embracing technology for a number of years to be able to serve and effectively work with its customers. It achieves this by starting with the definition of a product, because the product sets the rules. Yi Mien highlights that the rules would often be how AXA would spell out the terms and conditions, the provisions, but these rules also set the customer expectations. Throughout late 2018 and 2019, AXA has invested in digital to enable its customers to buy online, service online, claim online and check-up online. The company also launched a servicing app called Emma, a ‘digital companion’ that enables even faster service. Yi Mien describes this app as a true “health companion”. She is also keen to highlight that the technology is only part of the story. AXA has built a vast medical network with some of the leading hospitals and doctors and customers simply having to log into their companion app to be able to access this network at the touch of a button. “All they need to show is their digital card, their e-card, and with the QR code, the provider just scans it. All of the data is downloaded and all they need to do is sign, get their treatment, and then when they discharge, just sign that they have received the treatment and off they go,” she says. “The hospital will bill AXA directly so there’s no out of pocket. The data is also transmitted to AXA which means that we have more comprehensive and more reliable data.”

        Comprehensive and reliable data is crucial to the technology journey of AXA, but it is also integral to the customer journey. With a customer’s entire electronic medical records stored effectively and securely, as Yi Mien notes, why would they go anywhere else? The data that an insurer handles is often complex in nature, but this data is processed through artificial intelligence, with AI being used to process claims more effectively and interpret the information to allow AXA to create rules and algorithms to better serve its customers. AXA also utilises AI through its companion app Emma. “Emma is our chatbot,” explains Yi Mien. “Emma has been built up based on a multitude of Q&As that our customer services team have recorded and collected over many months and years. As we continue to build, and more people use Emma, then the quality of the responses she has in her arsenal will improve.” In the first two months of operations, Emma recorded an accuracy level of 50%. Yi Mien firmly believes that as more people engage with Emma and as a result, the chatbot will evolve and become more of a real-time navigator that can direct customers across the whole ecosystem.

        In the global discussion around AI, the topic of transparency is often a key point of debate. With governments around the world shining a spotlight on exactly what data is collected and how it is used, AXA ensures that it maintains an open and transparent dialogue with its customers. As customers engage with Emma and the companion app, they can at any time request their transcripts. Should they choose to speak with a human adviser, all calls are recorded and again they can access those recordings should they wish. Not only is this an example of AXA complying with global governing laws, it also highlights that the customer is at the very heart of every decision it makes and it maintains this as it continues to implement new technologies. “If you look at banking as an example, we all are so used to accessing our bank accounts at any time, be it through our phones or online,” says Yi Mien. “If we want to speak to someone, we can. If we want to go into a branch, we can. I believe this is the way to go with insurance as well. We make it easy for our customers to contact us. We are doing everything we can to allow that.”

        “Healthcare is quite personal, so we are doing what we can to allow customers to speak to people, should they not wish to use our chatbot. These are very personal journeys and digital is still in its early days, so we really have to provide different avenues and channels for our customers to contact us.”

        As Yi Mien notes, AXA designs its customer journey by starting at the product and going through all the way to treatment. The company makes every decision with the customer’s perspective in mind. As a doctor by trade, Yi Mien sees that all new products are designed by doctors because they understand how the patients move throughout the whole healthcare ecosystem. When AXA designs new products, it does not operate within a vacuum. It has a customer insight group, where around 1,000 customers operate as a real-time focus group in which AXA can test its products with. “When I think about future products, we will test with this group of people and get feedback to see whether we are aligned with the current customer need. So, it’s not just technology per se, but actually meets a customer’s needs,” she says. “One other area to make sure that we are doing the right thing, because technology also costs money, is to make sure that we are very robust in what we do. AXA is unique in that we sell life insurance, health insurance, employee benefits, and we also have P&C. So, being a multi-line insurer, we have the opportunity of having one approach and cross-selling across the business lines, which is a fantastic opportunity. We can only do that through technology.”

        Over the course of her career, Yi Mien has been a champion of the transformative effect of technology in becoming a greater enabler for healthcare and healthcare insurance providers around the world. One area in particular that is close to her heart is the mental health space. In Hong Kong, the waiting time to see a psychologist is close to two years and if patients were to seek private care, it is an expensive solution. “Look at a country like Hong Kong, or Australia, they are so vast that there just aren’t enough practitioners to cover the breadth of the geography. Digital is the solution,” she says. “Digital enables people to seek, support and care at the time that is most convenient for them.”

        “In the past two to three years, there has been a proliferation of digital tools. Recent studies have shown that digital tools are as good as, if not better, than in-person therapy because customers prefer to talk to a robot rather than face-to-face because they feel that the robot is not judging them.”

        Another example that Yi Mien highlights is in the UK, where a VR program has been developed by programmers that is therapy through gameification. The treatment is consistent every time and because of its mobile platform, it is accessible. “We can provide it where you work,” she says. “That’s just one example as to how we can destigmatise mental health through technology.”

        AXA operates within a broad healthcare ecosystem, an ecosystem made up of partners, providers and doctors and Yi Mien stresses that in the future of insurance, it will be impossible for insurers to control the ecosystem. “I don’t foresee a future where that happens,” she says. “Partnerships are incredibly important. Things are moving so fast there’s no way we can catch up alone. We need to have partners, collaborators, who are working together to ensure we are at the top of our game and at the forefront of innovation.”

        “Over the course of our lives, so many different things can happen and so people will need better care and support. By having a collection of data that represents our customer’s needs we are able to push or suggest services that better meet those needs. In order for us to do that, we need to have players collaborate in the ecosystem. It’s imperative.”

        As AXA continues this digital growth journey, the next few years will be defined by improving the agility of the digital companion in order to improve the interaction with customers. AXA will also be looking at developing a digital marketplace in which customers can go shopping within an AXA owned digital platform. For Yi Mien, though, the future is clear for AXA and in order to be successful, she feels it’s down to one thing. “AXA has a clear digital strategy for sure, where it will transform its digital system and build new IT infrastructure to transform the customer experience,” she says. “But the technology is only one part of the story.”

        “Unless we can transform the customer experience to deliver a service they truly value, then technology doesn’t do anything. It’s important to recognise that technology is enabling us to transform healthcare, to make it easier, faster, and cheaper for people to receive care. That means in the long-term, sustainable healthcare and health services, which fits into sustainable insurance.”